
This paper recasts the complex offline RL problem as a standard supervised fine-tuning (SFT) procedure that directly optimizes for rewards. The authors show that their method empirically outperforms state-of-the-art baselines such as SFT and Direct Preference Optimization (DPO) across various QA benchmarks. The experiments focus on fixed-horizon conversational policies in which the agent either reasons about answers or asks clarifying questions, demonstrating that directly optimizing the reward signal leads to superior accuracy and language-quality metrics.
By Enoch H. Kang
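
For a concrete picture of what "recasting offline RL as SFT that directly optimizes for rewards" can look like, below is a minimal sketch of a reward-weighted SFT loss. This is an illustrative assumption, not the paper's actual formulation: the model interface, batch fields, and weighting scheme are hypothetical.

```python
# Hypothetical sketch: weight each offline trajectory's token-level
# cross-entropy by its observed reward, so higher-reward conversations
# contribute more to the SFT gradient. Field names are illustrative.
import torch
import torch.nn.functional as F

def reward_weighted_sft_loss(model, batch):
    """batch: dict with 'input_ids', 'attention_mask', 'labels', 'reward'."""
    outputs = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
    )
    logits = outputs.logits[:, :-1, :]   # predict the next token
    labels = batch["labels"][:, 1:]      # shift targets by one position

    # Per-token cross-entropy; padded positions are marked with -100.
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=-100,
        reduction="none",
    ).view(labels.size(0), -1)

    # Average over non-padding tokens to get one loss per sequence.
    mask = (labels != -100).float()
    per_seq = (per_token * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

    # Scale each sequence's loss by its (non-negative) reward weight.
    weights = batch["reward"].clamp(min=0.0)
    return (weights * per_seq).mean()
```

Under this assumption, the training loop is ordinary SFT; the only change from plain imitation is that the offline reward rescales each example's contribution, which is the sense in which the reward signal is optimized directly rather than through an RL objective.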