Yutao Yue

Total Citations: 95
h-index: 5
Papers: 2

Publications

#1 2604.18607v1 Apr 12, 2026

TurboEvolve: Towards Fast and Robust LLM-Driven Program Evolution

LLM-driven program evolution can discover high-quality programs, but its cost and run-to-run variance hinder reliable progress. We propose TurboEvolve, a multi-island evolutionary framework that improves sample efficiency and robustness under fixed evaluation budgets. Inspired by the multiple-offspring strategy in evolutionary algorithms, TurboEvolve introduces verbalized sampling, which prompts the LLM to emit K diverse candidates with explicit self-assigned sampling weights, together with an online scheduler that adapts K, expanding exploration under stagnation and reducing overhead during steady progress. To exploit existing solution pools, we further propose "seed-pool injection," which clusters seeds and assigns them across islands with controlled perturbations and elitist preservation to balance diversity and refinement. Across multiple program-optimization benchmarks, TurboEvolve consistently achieves stronger performance at lower budgets and improves the best-known solutions on several tasks.
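
A minimal sketch of the two mechanisms the abstract describes, under stated assumptions: `llm_generate` is a hypothetical callable standing in for the paper's actual prompting and parsing pipeline, and the scheduler's thresholds and step sizes are illustrative, not values from the paper.

```python
import random

def verbalized_sample(llm_generate, prompt, k):
    """Ask the model for k diverse candidates, each with a self-assigned
    sampling weight, then draw one candidate proportionally to those weights.
    `llm_generate` is a hypothetical callable returning a list of
    (program_text, weight) pairs; the real prompt format is not specified here."""
    candidates = llm_generate(prompt, num_candidates=k)
    programs = [p for p, _ in candidates]
    weights = [max(w, 1e-6) for _, w in candidates]  # guard against zero weights
    return random.choices(programs, weights=weights, k=1)[0]

class KScheduler:
    """Online scheduler sketch: grow k when fitness stagnates (more exploration),
    shrink it while progress is steady (less sampling overhead)."""
    def __init__(self, k_min=2, k_max=16, patience=5):
        self.k, self.k_min, self.k_max = k_min, k_min, k_max
        self.patience, self.stall = patience, 0
        self.best = float("-inf")

    def update(self, fitness):
        if fitness > self.best:
            self.best, self.stall = fitness, 0
            self.k = max(self.k_min, self.k - 1)   # steady progress: reduce overhead
        else:
            self.stall += 1
            if self.stall >= self.patience:        # stagnation: expand exploration
                self.k = min(self.k_max, self.k * 2)
                self.stall = 0
        return self.k
```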

Yutao Yue, Menglin Yang, Yang Yang, Zining Zhong, Jindong Li, +3
0 Citations
#2 2601.08519v1 Jan 13, 2026

CD^2: Constrained Dataset Distillation for Few-Shot Class-Incremental Learning

Few-shot class-incremental learning (FSCIL), which performs classification continuously from only a few training samples per new class, has received significant attention but suffers from the key problem of catastrophic forgetting. Existing methods usually employ an external memory to store previous knowledge and treat it equally with the incremental classes, which cannot properly preserve essential previous knowledge. To solve this problem, and inspired by recent distillation works on knowledge transfer, we propose a framework termed Constrained Dataset Distillation (CD^2) to facilitate FSCIL, which includes a dataset distillation module (DDM) and a distillation constraint module (DCM). Specifically, the DDM synthesizes highly condensed samples guided by the classifier, forcing the model to learn compact, essential class-related clues from a few incremental samples. The DCM introduces a designed loss that constrains the previously learned class distribution, preserving the distilled knowledge more completely. Extensive experiments on three public datasets show the superiority of our method over other state-of-the-art competitors.
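
A hedged illustration of the two modules. The abstract does not give the exact objectives, so both pieces below are assumptions: the DDM step uses a direct classifier-loss objective (real dataset distillation typically uses gradient or trajectory matching), and the DCM constraint is modeled as a KL divergence between the frozen previous model's distribution over old classes and the current model's.

```python
import torch
import torch.nn.functional as F

def distill_step(classifier, syn_x, syn_y, lr=0.1):
    """Illustrative DDM step: refine synthetic samples so the classifier's
    loss on them decreases, compressing class-related cues into few samples.
    A simplification of the paper's unspecified distillation objective."""
    syn_x = syn_x.detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(syn_x), syn_y)
    grad, = torch.autograd.grad(loss, syn_x)
    return (syn_x - lr * grad).detach()

def distillation_constraint_loss(model, old_model, distilled_x, old_class_ids):
    """Illustrative stand-in for the DCM loss: keep the current model's
    distribution over previously learned classes close to the frozen old
    model's on the distilled samples. KL divergence is an assumption."""
    with torch.no_grad():
        old_logits = old_model(distilled_x)[:, old_class_ids]
    new_logits = model(distilled_x)[:, old_class_ids]
    return F.kl_div(
        F.log_softmax(new_logits, dim=1),
        F.softmax(old_logits, dim=1),
        reduction="batchmean",
    )
```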

Kexin Bao, Yong Li, Daichi Zhang, Hansong Zhang, Yutao Yue, +1
0 Citations