Yifeng Gao

Total Citations
21
h-index
2
Papers
1

Publications

#1 2604.18103v1 Apr 20, 2026

Stability Implies Redundancy: Delta Attention Selective Halting for Efficient Long-Context Prefilling

Prefilling computation is a significant bottleneck for Large Language Models (LLMs) and Large Multimodal Models (LMMs) in long-context settings. While token pruning reduces sequence length, prior methods rely on heuristics that break compatibility with hardware-efficient kernels like FlashAttention. In this work, we observe that tokens evolve toward semantic fixed points, making further processing redundant. Building on this observation, we introduce Delta Attention Selective Halting (DASH), a training-free policy that monitors the layer-wise update dynamics of the self-attention mechanism and selectively halts stabilized tokens. Extensive evaluation confirms that DASH generalizes across language and vision benchmarks, delivering significant prefill speedups while preserving model accuracy and hardware efficiency. Code will be released at https://github.com/verach3n/DASH.git.
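The abstract does not spell out DASH's halting rule. Below is a minimal sketch of the general idea, assuming halting is decided by thresholding each token's relative hidden-state change between consecutive layers; the threshold tau, the function name update_halt_mask, and the toy dynamics are all hypothetical illustrations, not the paper's method.

import torch

def update_halt_mask(h_prev, h_curr, halted, tau=0.05):
    # Mark tokens as halted once their layer-wise update stabilizes.
    # h_prev, h_curr: (batch, seq, dim) hidden states at consecutive layers.
    # halted: (batch, seq) bool mask of already-halted tokens (halting is sticky).
    # tau: relative-change threshold (hypothetical; the paper may use another rule).
    delta = torch.linalg.vector_norm(h_curr - h_prev, dim=-1)
    scale = torch.linalg.vector_norm(h_prev, dim=-1).clamp_min(1e-6)
    return halted | (delta / scale < tau)

# Toy demo: shrinking updates stand in for a real transformer block's output,
# so tokens progressively stabilize across layers.
torch.manual_seed(0)
batch, seq, dim, layers = 1, 8, 16, 6
h = torch.randn(batch, seq, dim)
halted = torch.zeros(batch, seq, dtype=torch.bool)
for layer in range(layers):
    h_next = h + 0.5 ** layer * torch.randn_like(h) * (~halted).unsqueeze(-1)
    halted = update_halt_mask(h, h_next, halted)
    h = h_next
    print(f"layer {layer}: halted {int(halted.sum())}/{seq} tokens")

Making the mask sticky mirrors the "stability implies redundancy" premise: once a token's representation stops changing, later layers are assumed to contribute nothing, so the token can be dropped from subsequent prefill computation.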

Shaobo Wang, Linfeng Zhang, Yujie Chen, Tai-You Chen, Yifeng Gao, +2
0 Citations