Yu-Kai Guo

Total Citations: 0
h-index: 0
Papers: 1

Publications

#1 2603.09714v1 Mar 10, 2026

MUGEN: Evaluating and Improving Multi-audio Understanding of Large Audio-Language Models

While multi-audio understanding is critical for large audio-language models (LALMs), it remains underexplored. We introduce MUGEN, a comprehensive benchmark evaluating this capability across speech, general audio, and music. Our experiments reveal consistent weaknesses in multi-audio settings, and performance degrades sharply as the number of concurrent audio inputs increases, identifying input scaling as a fundamental bottleneck. We further investigate training-free strategies and observe that Audio-Permutational Self-Consistency, which diversifies the order of audio candidates, helps models form more robust aggregated predictions, yielding accuracy gains of up to 6.28%. Combining this permutation strategy with Chain-of-Thought further improves the gain to 6.74%. These results expose blind spots in current LALMs and provide a foundation for evaluating complex auditory comprehension.
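The permutation strategy described in the abstract can be sketched as a simple aggregation loop: query the model once per ordering of the audio candidates, then majority-vote the answers. This is a minimal illustration of the general idea, not the paper's implementation; the `model` callable, its signature, and the toy position-biased model below are hypothetical.

```python
import itertools
from collections import Counter

def permutational_self_consistency(model, audio_clips, question, max_perms=6):
    """Aggregate answers over permuted orderings of the audio candidates.

    Sketch of a permutation-based self-consistency scheme: each permutation
    yields one prediction, and the majority answer is returned.
    """
    answers = []
    for perm in itertools.islice(itertools.permutations(audio_clips), max_perms):
        answers.append(model(list(perm), question))
    # Majority vote across the per-permutation predictions.
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for an order-sensitive model: it only "finds" the target
# clip when that clip appears in the first two positions, mimicking the
# position bias that permutation averaging is meant to mitigate.
def toy_model(clips, target):
    return target if target in clips[:2] else clips[0]
```

With three clips and a target that the toy model misses in a third of orderings, the vote still recovers the correct answer, since the target wins in four of the six permutations.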

Chih-Kai Yang Yun-Shao Tsai Yu-Kai Guo Ping-Le Tsai Yen-Ting Piao +5
1 Citation