2601.05296v1 Jan 08, 2026 cs.LG

MoEBlaze: Breaking the Memory Wall for Efficient MoE Training on Modern GPUs

Jiyuan Zhang, Yining Liu, Siqi Yan, Li Deng, Jennifer Cao, Shuqi Yang, Min Ni, Bi Xue, Shen Li

Original Abstract

The pervasive "memory wall" bottleneck is significantly amplified in modern large-scale Mixture-of-Experts (MoE) architectures. MoE's inherent architectural sparsity leads to sparse arithmetic compute and also introduces substantial activation memory overheads -- driven by large token routing buffers and the need to materialize and buffer intermediate tensors. This memory pressure limits the maximum batch size and sequence length that can fit on GPUs, and also results in excessive data movement that hinders performance and efficient model scaling. We present MoEBlaze, a memory-efficient MoE training framework that addresses these issues through a co-designed system approach: (i) an end-to-end token dispatch and MoE training method with optimized data structures that eliminates intermediate buffers and activation materialization, and (ii) co-designed kernels with smart activation checkpointing that mitigate the memory footprint while simultaneously achieving better performance. We demonstrate that MoEBlaze can achieve over 4x speedups and over 50% memory savings compared to existing MoE frameworks.
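
To make the memory pressures described in the abstract concrete, below is a minimal, illustrative PyTorch sketch -- not MoEBlaze's actual implementation, and all class and function names here are invented for the example. It shows a toy top-1 MoE layer in which token dispatch uses index_select / index_add_ rather than a dense (tokens x experts) routing buffer, and each expert MLP runs under activation checkpointing so its intermediate hidden activation is recomputed in the backward pass instead of staying resident.

```python
# Illustrative sketch only (assumed names; not the paper's code): a toy top-1
# MoE layer showing the two activation-memory pressures the abstract mentions.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class ToyCheckpointedMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); top-1 routing for simplicity.
        logits = self.router(x)                 # (tokens, n_experts)
        expert_ids = logits.argmax(dim=-1)      # (tokens,)
        out = torch.zeros_like(x)
        for eid, expert in enumerate(self.experts):
            idx = (expert_ids == eid).nonzero(as_tuple=True)[0]
            if idx.numel() == 0:
                continue
            # Gather only this expert's tokens: no dense one-hot dispatch buffer.
            tokens = x.index_select(0, idx)
            # Checkpoint the expert MLP: the (tokens, d_hidden) hidden activation
            # is recomputed during backward rather than kept in memory.
            y = checkpoint(expert, tokens, use_reentrant=False)
            out.index_add_(0, idx, y)
        return out


if __name__ == "__main__":
    moe = ToyCheckpointedMoE(d_model=256, d_hidden=1024, n_experts=4)
    x = torch.randn(512, 256, requires_grad=True)
    moe(x).sum().backward()   # forward + recompute-based backward
```

The usual trade-off of activation checkpointing is the extra recomputation in the backward pass; the abstract's claim is that MoEBlaze's co-designed kernels mitigate the memory footprint while still improving end-to-end performance.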

