
Shiyu Wang

Total Citations: 138
h-index: 3
Papers: 1

Publications

#1 2602.18449v1 Jan 30, 2026

Prompt Optimization Via Diffusion Language Models

We propose a diffusion-based framework for prompt optimization that leverages Diffusion Language Models (DLMs) to iteratively refine system prompts through masked denoising. By conditioning on interaction traces, including user queries, model responses, and optional feedback, our method enables flexible, span-level prompt updates without requiring gradient access or modifying the downstream language model. Across diverse benchmarks (e.g., $\tau$-bench, SST-2, SST-5), DLM-optimized prompts consistently improve the performance of a frozen target LLM (e.g., GPT-4o-mini). We further show that moderate diffusion step counts provide the best balance between refinement quality and stability. These results highlight diffusion-based prompt optimization as a general, model-agnostic, and scalable approach for enhancing LLM performance through iterative prompt refinement.

Rithesh Murthy, Shelby Heinecke, Silvio Savarese, Caiming Xiong, Jielin Qiu, and 6 more
1 Citation
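
The abstract above describes an iterative masked-denoising loop over the prompt text. A minimal Python sketch of that loop is given below; it is illustrative only, not the paper's implementation. In particular, dlm_denoise is a hypothetical callable standing in for a Diffusion Language Model's denoising call, and the random span masking, step count, and trace conditioning are assumptions made for the example.

    import random

    MASK = "<mask>"

    def mask_spans(tokens, mask_ratio=0.3):
        """Replace a random subset of prompt tokens with mask tokens,
        marking the spans the DLM is allowed to rewrite."""
        masked = list(tokens)
        n = max(1, int(len(tokens) * mask_ratio))
        for i in random.sample(range(len(tokens)), n):
            masked[i] = MASK
        return masked

    def refine_prompt(prompt, traces, dlm_denoise, steps=8, mask_ratio=0.3):
        """Iteratively refine a system prompt via masked denoising.

        dlm_denoise(masked_tokens, context) is a placeholder for the
        DLM's denoising call; traces carries the conditioning context
        (user queries, model responses, optional feedback). The frozen
        target LLM is never touched: only the prompt text is updated.
        """
        tokens = prompt.split()
        if not tokens:
            return prompt
        for _ in range(steps):
            masked = mask_spans(tokens, mask_ratio)       # choose spans to revise
            tokens = dlm_denoise(masked, context=traces)  # denoise conditioned on traces
        return " ".join(tokens)

Re-masking a fresh random subset each step is one simple scheduling choice made here for concreteness; the abstract's finding that moderate step counts work best corresponds to the steps parameter in this sketch.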