Diego Socolinsky

Total Citations
218
h-index
3
Papers
1

Publications

#1 2603.00993v1 Mar 01, 2026

CollabEval: Enhancing LLM-as-a-Judge via Multi-Agent Collaboration

Large Language Models (LLMs) have revolutionized AI-generated content evaluation, with the LLM-as-a-Judge paradigm becoming increasingly popular. However, current single-LLM evaluation approaches face significant challenges, including inconsistent judgments and inherent biases from pre-training data. To address these limitations, we propose CollabEval, a novel multi-agent evaluation framework that implements a three-phase Collaborative Evaluation process: initial evaluation, multi-round discussion, and final judgment. Unlike existing approaches that rely on competitive debate or single-model evaluation, CollabEval emphasizes collaboration among multiple agents with strategic consensus checking for efficiency. Our extensive experiments demonstrate that CollabEval consistently outperforms single-LLM approaches across multiple dimensions while maintaining robust performance even when individual models struggle. The framework provides comprehensive support for various evaluation criteria while ensuring efficiency through its collaborative design.
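The three-phase process described above (initial evaluation, multi-round discussion with a consensus check, final judgment) could be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the agents here are simple stand-ins for LLM judges, and all names (`collaborative_evaluate`, `make_agent`, the tolerance-based consensus check) are hypothetical.

```python
from statistics import mean

def collaborative_evaluate(agents, item, max_rounds=3, tol=0.5):
    """Hedged sketch of a three-phase collaborative evaluation:
    Phase 1: independent initial scores;
    Phase 2: discussion rounds, skipped once scores are within `tol`
             (a consensus check for efficiency, as the abstract suggests);
    Phase 3: final judgment, here a simple mean of the latest scores."""
    # Phase 1: each agent scores the item without seeing its peers.
    scores = [agent(item, peer_scores=None) for agent in agents]
    # Phase 2: multi-round discussion with early stopping on consensus.
    for _ in range(max_rounds):
        if max(scores) - min(scores) <= tol:  # consensus reached; stop early
            break
        scores = [agent(item, peer_scores=scores) for agent in agents]
    # Phase 3: aggregate the (possibly revised) scores into a final judgment.
    return mean(scores)

def make_agent(initial):
    """Toy judge: starts at `initial` and, during discussion, moves
    halfway toward the peer-group mean each round."""
    state = {"score": initial}
    def agent(item, peer_scores=None):
        if peer_scores is not None:
            state["score"] = (state["score"] + mean(peer_scores)) / 2
        return state["score"]
    return agent
```

With three toy judges starting at 2.0, 8.0, and 5.0, their scores contract toward a shared value over the discussion rounds, and the final judgment is their mean; identical initial scores trigger the consensus check immediately and skip the discussion entirely.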

Yiyu Qian Shinan Zhang Yun Zhou Haibo Ding Diego Socolinsky +1
0 Citations