
Yu Cheng

Total Citations: 18
h-index: 2
Papers: 1

Publications

#1 2602.13035v1 Feb 13, 2026

Look Inward to Explore Outward: Learning Temperature Policy from LLM Internal States via Hierarchical RL

Reinforcement Learning from Verifiable Rewards (RLVR) trains large language models (LLMs) from sampled trajectories, making decoding strategy a core component of learning rather than a purely inference-time choice. Sampling temperature directly controls the exploration-exploitation trade-off by modulating policy entropy, yet existing methods rely on static values or heuristic adaptations that are decoupled from task-level rewards. We propose Introspective LLM, a hierarchical reinforcement learning framework that learns to control sampling temperature during generation. At each decoding step, the model selects a temperature based on its hidden state and samples the next token from the resulting distribution. Temperature and token policies are jointly optimized from downstream rewards using a coordinate ascent scheme. Experiments on mathematical reasoning benchmarks show that learned temperature policies outperform fixed and heuristic baselines, while exhibiting interpretable exploration behaviors aligned with reasoning uncertainty.
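The per-step mechanism the abstract describes (pick a temperature from the hidden state, then sample the next token from the tempered distribution) can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the linear temperature head `select_temperature`, the candidate temperature grid, and the toy dimensions are all assumptions for the sake of the example.

```python
import math
import random

def softmax(logits, temperature):
    # Scale logits by 1/temperature, then normalize (numerically stable).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def select_temperature(hidden_state, w, temperatures):
    # Hypothetical temperature head: a linear layer scores each candidate
    # temperature from the hidden state; take the argmax. (The paper's
    # temperature policy is learned via hierarchical RL; greedy argmax
    # here stands in for its decode-time action selection.)
    scores = [sum(h * wi for h, wi in zip(hidden_state, col)) for col in w]
    return temperatures[max(range(len(scores)), key=scores.__getitem__)]

def decode_step(hidden_state, token_logits, w, temperatures, rng):
    # One decoding step: choose a temperature from the hidden state,
    # then sample the next token from the resulting distribution.
    t = select_temperature(hidden_state, w, temperatures)
    probs = softmax(token_logits, t)
    r, acc = rng.random(), 0.0
    for tok, p in enumerate(probs):
        acc += p
        if r <= acc:
            return tok, t
    return len(probs) - 1, t
```

In a real system, `hidden_state` would be the LLM's last-layer hidden vector and the temperature head's weights `w` would be trained jointly with the token policy from downstream rewards, as in the coordinate ascent scheme the abstract mentions.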

Yang Li Yixiao Zhou Dongzhou Cheng Hehe Fan Yu Cheng
1 Citation