Ziwei Fan

Total Citations: 34
h-index: 2
Papers: 1

Publications

#1 2604.15153v1 Apr 16, 2026

Compressing Sequences in the Latent Embedding Space: $K$-Token Merging for Large Language Models

Large Language Models (LLMs) incur significant computational and memory costs when processing long prompts, as full self-attention scales quadratically with input length. Token compression aims to address this challenge by reducing the number of tokens representing inputs. However, existing prompt-compression approaches primarily operate in token space and overlook inefficiencies in the latent embedding space. In this paper, we propose K-Token Merging, a latent-space compression framework that merges each contiguous block of K token embeddings into a single embedding via a lightweight encoder. The compressed sequence is processed by a LoRA-adapted LLM, while generation remains in the original vocabulary. Experiments on structural reasoning (Textualized Tree), sentiment classification (Amazon Reviews), and code editing (CommitPackFT) show that K-Token Merging lies on the Pareto frontier of performance vs. compression, achieving up to 75% input length reduction with minimal performance degradation.

Hao Wang, John Harvill, Zihao Xu, Ziwei Fan, Yizhou Sun, +1
0 Citations
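
The abstract describes the merging step only at a high level. As a rough illustration, the sketch below shows one way contiguous blocks of K token embeddings could be merged into a single embedding by a lightweight encoder before the compressed sequence is handed to a LoRA-adapted LLM. The module name KTokenMerger, the MLP encoder design, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

    # Minimal sketch of latent-space K-token merging (assumes a PyTorch stack).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class KTokenMerger(nn.Module):
        """Merge each contiguous block of K token embeddings into one embedding."""

        def __init__(self, hidden_dim: int, k: int):
            super().__init__()
            self.k = k
            # Lightweight encoder: concatenate K embeddings, project back to hidden_dim.
            self.encoder = nn.Sequential(
                nn.Linear(k * hidden_dim, hidden_dim),
                nn.GELU(),
                nn.Linear(hidden_dim, hidden_dim),
            )

        def forward(self, embeds: torch.Tensor) -> torch.Tensor:
            # embeds: (batch, seq_len, hidden_dim); pad so seq_len is divisible by K.
            b, l, d = embeds.shape
            pad = (-l) % self.k
            if pad:
                embeds = F.pad(embeds, (0, 0, 0, pad))
            # Group contiguous blocks of K tokens and flatten each block's embeddings.
            blocks = embeds.reshape(b, -1, self.k * d)   # (batch, seq_len/K, K*hidden_dim)
            return self.encoder(blocks)                  # (batch, seq_len/K, hidden_dim)


    # Usage sketch: compress prompt embeddings before feeding them to the LLM.
    merger = KTokenMerger(hidden_dim=4096, k=4)          # K=4 -> 75% input length reduction
    prompt_embeds = torch.randn(1, 128, 4096)            # stand-in for token embeddings
    compressed = merger(prompt_embeds)                   # shape: (1, 32, 4096)

The choice K=4 mirrors the 75% input length reduction quoted in the abstract; generation would still proceed over the original vocabulary, with only the prompt side compressed.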