Zeyu Yang

Total Citations
36
h-index
3
Papers
1

Publications

#1 2602.07397v1 Feb 07, 2026

Scout Before You Attend: Sketch-and-Walk Sparse Attention for Efficient LLM Inference

Self-attention dominates the computational and memory cost of long-context LLM inference across both the prefill and decode phases. To address this challenge, we introduce Sketch&Walk Attention, a training-free sparse attention method that determines sparsity with lightweight sketches and a deterministic walk. Sketch&Walk applies Hadamard sketching to obtain inexpensive approximations of attention scores, then aggregates these estimates across layers via a walk mechanism that captures attention influence beyond direct token-to-token interactions. The accumulated walk scores are used to select top-k attention blocks, enabling dynamic sparsity with a single training-free algorithm that applies uniformly to both the prefill and decode phases, paired with custom sparse attention kernels. Across a wide range of models and tasks, Sketch&Walk maintains near-lossless accuracy at 20% attention density and can slightly outperform dense attention in some settings, while achieving up to 6x inference speedup.
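The core idea of the abstract — approximate attention scores with a cheap randomized Hadamard sketch, then keep only the top-k score blocks — can be illustrated with a minimal NumPy sketch. Everything below is illustrative and not the paper's implementation: the function names (`topk_blocks`, `sketch`), the block-scoring rule (max over each block), and all parameter values are assumptions, and the cross-layer walk aggregation is omitted for brevity.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n must be a power of 2).
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sketch(x, H, idx):
    # Randomized Hadamard sketch: rotate, subsample m columns, rescale so that
    # sketched inner products are unbiased estimates of the true inner products.
    return (x @ H)[:, idx] / np.sqrt(len(idx))

def topk_blocks(q, k, block=4, keep=2, sketch_dim=8, seed=0):
    # Hypothetical helper: pick the `keep` highest-scoring key blocks per query
    # using sketched (approximate) attention scores instead of the full q @ k.T.
    rng = np.random.default_rng(seed)
    d = q.shape[-1]
    H = hadamard(d)
    D = rng.choice([-1.0, 1.0], size=d)          # random sign flips (the D in HD)
    idx = rng.choice(d, size=sketch_dim, replace=False)
    qs = sketch(q * D, H, idx)                   # cheap low-dimensional queries
    ks = sketch(k * D, H, idx)                   # cheap low-dimensional keys
    approx = qs @ ks.T                           # approximation of q @ k.T
    n_blocks = k.shape[0] // block
    # Score each contiguous key block by its maximum approximate score.
    scores = approx[:, : n_blocks * block].reshape(len(q), n_blocks, block).max(-1)
    return np.argsort(-scores, axis=-1)[:, :keep]
```

Only the selected block indices would then be passed to a sparse attention kernel; the sketch costs O(n·m) per query with m « d, which is where the savings over dense scoring come from.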

Hoang Anh Le, Sahil Joshi, Zeyu Yang, Zhaozhuo Xu, Anshumali Shrivastava — Rice University +1
0 Citations