[Event/Seminar] (25.11.18) Advancing LLM's capabilities to aggregate its own reasoning (Dr. Ilia Kulikov @ Meta FAIR)
- Department of Immersive Media Engineering
- 2025-11-14
Hello.
An online seminar will be held as detailed below. Your interest and participation are welcome.
Advancing LLM’s capabilities to aggregate its own reasoning
Speaker: Dr. Ilia Kulikov @ Meta FAIR
Time: 10:00 - 11:00, Nov 18th, 2025
Location: Online (https://hli.skku.edu/InvitedTalk251118)
Language: English speech & English slides
Abstract:
As large language models tackle increasingly complex reasoning tasks, scaling test-time computation through generating and aggregating multiple solution candidates has emerged as a key paradigm for improvement. However, traditional aggregation methods like majority voting often fail to fully exploit the information contained in diverse reasoning traces. In this talk, I will present our recent work on AggLM, where we reframe aggregation as an explicit reasoning skill that can be learned through reinforcement learning. Our approach trains models to review, reconcile, and synthesize correct answers from candidate solutions, effectively recovering minority-correct answers while maintaining strong performance on majority-correct cases. I will demonstrate how this learned aggregation outperforms both rule-based and reward-model baselines across multiple benchmarks, generalizes to solutions from stronger models, and achieves better efficiency than majority voting. Finally, I will discuss our ongoing work that extends beyond single-step aggregation, exploring new directions for enhancing LLMs' meta-reasoning capabilities.
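To make the contrast concrete, here is a minimal sketch of the rule-based majority-voting baseline the abstract describes, which picks the most frequent final answer and discards the reasoning traces; the function name and example values are illustrative, not from the talk:

```python
from collections import Counter

def majority_vote(answers):
    """Rule-based aggregation: return the most frequent final answer.

    This baseline ignores the reasoning traces entirely, so a
    minority-correct answer (backed by a sound trace) is discarded.
    """
    counts = Counter(answers)
    winner, _ = counts.most_common(1)[0]
    return winner

# Four sampled solutions to the same problem; suppose "42" is correct
# but appears only once, so majority voting loses it.
candidates = ["41", "41", "42", "40"]
print(majority_vote(candidates))  # -> "41"
```

A learned aggregator of the kind the abstract describes would instead read the candidate solutions themselves and can recover such minority-correct answers.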
Bio:
Dr. Ilia Kulikov is a research scientist at Meta FAIR. He obtained his PhD from New York University in 2022, where he was advised by Kyunghyun Cho and Jason Weston. His team at FAIR (RAM) works on advancing algorithms, data, and models that enable self-improving capabilities in LLMs.