Introduces Adaptive Branching Monte Carlo Tree Search (AB-MCTS), a novel framework for enhancing Large Language Model (LLM) inference-time performance.
Unlike traditional LLM scaling through additional training, AB-MCTS focuses on how a pre-trained LLM is used during problem-solving, adaptively allocating the inference-time compute budget between generating new candidate answers and refining existing ones.
It tackles the fundamental exploration-exploitation dilemma of multi-answer generation by adapting classical MCTS to the effectively unbounded branching factor of LLM generation: a GEN node represents the option of producing a brand-new child answer, and Thompson sampling decides at each step whether to go wider (generate a new answer) or deeper (refine an existing one).
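As a rough illustration of that decision rule, the sketch below implements one simulation step under simplifying assumptions: Beta posteriors over rewards in [0, 1], with the GEN arm scored from the current node's own posterior as a stand-in for the as-yet-ungenerated answer; `propose` and `evaluate` are hypothetical placeholders for the LLM call and the answer scorer, not names from the original text.

```python
import random

class Node:
    """One candidate answer in the search tree; tracks reward statistics for Thompson sampling."""
    def __init__(self, answer=None):
        self.answer = answer
        self.children = []
        self.alpha, self.beta = 1.0, 1.0  # Beta(1, 1) prior over "this subtree pays off"

    def update(self, reward):
        # Treat a reward in [0, 1] as a soft success count (a simplifying assumption).
        self.alpha += reward
        self.beta += 1.0 - reward

def select_action(node):
    """Thompson sampling over 'widen via the GEN node' vs. each existing child.

    Simplification: the GEN arm reuses the node's own Beta posterior as a proxy
    for an answer that has not been generated yet.
    """
    gen_score = random.betavariate(node.alpha, node.beta)
    child_scores = [random.betavariate(c.alpha, c.beta) for c in node.children]
    if not child_scores or gen_score >= max(child_scores):
        return "GEN"                                            # widen: ask the LLM for a fresh answer
    return node.children[child_scores.index(max(child_scores))]  # deepen: refine below this child

def ab_mcts_step(root, propose, evaluate):
    """One simulation: descend until the GEN arm wins, then query the LLM and back up the reward.

    `propose(parent_answer)` and `evaluate(answer)` are user-supplied callables
    (hypothetical names): the LLM call and the scorer, respectively.
    """
    path, node = [root], root
    while True:
        action = select_action(node)
        if action == "GEN":
            child = Node(propose(node.answer))      # new (or refined) answer from the LLM
            node.children.append(child)
            path.append(child)
            reward = evaluate(child.answer)
            for visited in path:                    # back up the reward along the visited path
                visited.update(reward)
            return reward
        node = action                               # deepen: reconsider inside this child's subtree
        path.append(node)
```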
The text details two variants, AB-MCTS-M (statistically richer) and AB-MCTS-A (computationally lighter), and presents empirical validation across diverse benchmarks, showing that the method outperforms simpler baselines while adapting its search behavior to each task.
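The summary characterizes the two variants only by their trade-off; as a hedged guess at what the lighter option can look like in the sketch above, the GEN arm's posterior could simply pool the pseudo-counts of all existing children, so the widen-or-deepen decision needs only cheap conjugate updates, whereas a mixed-model variant would instead fit a shared Bayesian model over the children's scores. The function below builds on the previous sketch and is an illustrative assumption, not the paper's exact formulation.

```python
def select_action_aggregated(node):
    """Hypothetical node-aggregation flavor of select_action: the GEN arm pools
    the reward pseudo-counts of every existing child instead of fitting a
    full Bayesian model over their scores.
    """
    agg_alpha = 1.0 + sum(c.alpha - 1.0 for c in node.children)  # pooled successes + prior
    agg_beta = 1.0 + sum(c.beta - 1.0 for c in node.children)    # pooled failures + prior
    gen_score = random.betavariate(agg_alpha, agg_beta)
    child_scores = [random.betavariate(c.alpha, c.beta) for c in node.children]
    if not child_scores or gen_score >= max(child_scores):
        return "GEN"
    return node.children[child_scores.index(max(child_scores))]
```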
Finally, it explores practical applications, such as emergent multi-LLM collaboration, and discusses implementation challenges and future research directions, including the potential for self-improvement by feeding data gathered during search back into LLM fine-tuning.