Hongyu Cao

Total Citations
75
h-index
6
Papers
1

Publications

#1 2603.20899v1 Mar 21, 2026

Mitigating Shortcut Reasoning in Language Models: A Gradient-Aware Training Approach

Large language models exhibit strong reasoning capabilities, yet often rely on shortcuts such as surface pattern matching and answer memorization rather than genuine logical inference. We propose Shortcut-Aware Reasoning Training (SART), a gradient-aware framework that detects and mitigates shortcut-promoting samples via ShortcutScore and gradient surgery. Our method identifies shortcut signals through gradient misalignment with validation objectives and answer-token concentration, and modifies training dynamics accordingly. Experiments on controlled reasoning benchmarks show that SART achieves +16.5% accuracy and +40.2% robustness over the strongest baseline, significantly improving generalization under distribution shifts. Code is available at: https://github.com/fuyanjie/short-cut-aware-data-centric-reasoning.
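The abstract's core idea, scoring a training sample by how much its gradient conflicts with a validation objective, then neutralizing that conflict, can be illustrated with a minimal numpy sketch. The paper's exact formulas are not given here, so the cosine-based score and the PCGrad-style projection below are assumptions; `shortcut_score` and `gradient_surgery` are illustrative names, with gradients reduced to plain vectors.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def shortcut_score(sample_grad, val_grad):
    # Higher score = more shortcut-like: the sample's gradient is
    # misaligned with the validation objective (per the abstract).
    # The actual ShortcutScore also uses answer-token concentration,
    # which is omitted in this sketch.
    return -cosine(sample_grad, val_grad)

def gradient_surgery(sample_grad, val_grad):
    # PCGrad-style projection (an assumption, not necessarily the
    # paper's method): drop the component of the sample gradient
    # that conflicts with the validation gradient.
    dot = sample_grad @ val_grad
    if dot < 0:
        sample_grad = sample_grad - (dot / (val_grad @ val_grad)) * val_grad
    return sample_grad

val_grad = np.array([1.0, 0.0])        # validation-objective direction
clean = np.array([0.8, 0.6])           # aligned sample gradient
shortcut = np.array([-0.9, 0.4])       # misaligned, shortcut-like sample

print(shortcut_score(clean, val_grad))     # negative: well aligned
print(shortcut_score(shortcut, val_grad))  # positive: flagged as shortcut
fixed = gradient_surgery(shortcut.copy(), val_grad)
print(fixed @ val_grad >= 0)               # True: conflict removed
```

In a full training loop, samples above a ShortcutScore threshold would be down-weighted or have their gradients projected before the optimizer step, modifying training dynamics as the abstract describes.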

Kunpeng Liu, Yanjie Fu, Dongjie Wang, Hongyu Cao
0 Citations