
Raman Saparkhan

Total Citations: 0
h-index: 0
Papers: 1

Publications

#1 arXiv:2604.17433v1 (Apr 19, 2026)

Self-Consistency from Only Two Samples: CoT-PoT Ensembling for Efficient LLM Reasoning

Self-consistency (SC) is a popular technique for improving the reasoning accuracy of large language models by aggregating multiple sampled outputs, but it comes at a high computational cost due to extensive sampling. We introduce a hybrid ensembling approach that leverages the complementary strengths of two distinct modes of reasoning: Chain-of-Thought (CoT) and Program-of-Thought (PoT). We describe a general framework for combining these two forms of reasoning in self-consistency, as well as concrete strategies for both full sampling and early stopping. We show that CoT-PoT ensembling not only improves overall accuracy but also drastically reduces the number of samples SC requires, by a factor of 9.3. In particular, the majority of tasks (78.6%) can be solved with only two samples, which no prior SC method has achieved.
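As a rough illustration of the early-stopping idea the abstract describes, here is a minimal Python sketch. It is not the paper's exact procedure: the `sample_cot` and `sample_pot` callables are hypothetical stand-ins for model calls that return a final answer string, and the exact-match agreement rule and strict mode alternation are assumptions made for the example.

```python
from collections import Counter
from typing import Callable, Optional

def cot_pot_self_consistency(
    sample_cot: Callable[[str], str],
    sample_pot: Callable[[str], str],
    question: str,
    max_samples: int = 10,
) -> Optional[str]:
    """Hybrid CoT-PoT self-consistency with early stopping (sketch).

    Draws one Chain-of-Thought answer and one Program-of-Thought answer
    first; if they agree, stops after only two samples. Otherwise keeps
    alternating modes and falls back to a majority vote.
    """
    answers: list[str] = []
    samplers = (sample_cot, sample_pot)
    for i in range(max_samples):
        answers.append(samplers[i % 2](question))  # alternate reasoning modes
        # Early stop: the first CoT answer and the first PoT answer agree.
        if i == 1 and answers[0] == answers[1]:
            return answers[0]
    # Full self-consistency fallback: majority vote over all samples drawn.
    return Counter(answers).most_common(1)[0][0] if answers else None

if __name__ == "__main__":
    # Toy demo with stub samplers standing in for LLM calls: both modes
    # agree immediately, so only two samples are used.
    print(cot_pot_self_consistency(lambda q: "42", lambda q: "42", "6 * 7 = ?"))
```

When the two modes agree, which the abstract reports happens on 78.6% of tasks, the loop exits after the second sample; only on disagreement does the cost approach that of conventional SC.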

Majd Hawasly, Mohammad Raza, Raman Saparkhan, Md. Rizwan Parvez
0 Citations