Preference Optimization

ASFT: Aligned Supervised Fine-Tuning through Absolute Likelihood



This paper proposes Aligned Supervised Fine-Tuning (ASFT), a new method for fine-tuning large language models (LLMs). ASFT addresses limitations of existing Direct Preference Optimization (DPO) methods by optimizing the absolute likelihood of generating human-preferred responses, rather than the likelihood of preferred responses relative to dispreferred ones. Unlike DPO, ASFT requires no reference model and is less sensitive to the model's initial state, making training more efficient and robust. The authors demonstrate ASFT's effectiveness through extensive experiments on standard benchmarks, reporting significant performance improvements over existing methods.
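
The paper's exact objective is not reproduced here; as a rough illustration only, the sketch below shows one way a reference-free, absolute-likelihood preference loss can be written in PyTorch. The key contrast with DPO is that each response's own sequence log-likelihood is pushed up (chosen) or down (rejected) directly, with no log-ratio against a frozen reference model. The function names, the beta scaling, and the sigmoid form are illustrative assumptions, not the paper's definition.

    # Illustrative sketch of a reference-free, absolute-likelihood
    # preference loss (not the paper's exact ASFT objective).
    import torch
    import torch.nn.functional as F

    def sequence_log_prob(logits, labels, mask):
        # Absolute log-likelihood of each sequence: sum of the
        # log-probabilities of its tokens, masked for padding.
        log_probs = F.log_softmax(logits, dim=-1)          # (B, T, V)
        token_logps = torch.gather(
            log_probs, 2, labels.unsqueeze(-1)
        ).squeeze(-1)                                      # (B, T)
        return (token_logps * mask).sum(-1)                # (B,)

    def absolute_likelihood_loss(chosen_logps, rejected_logps, beta=1.0):
        # Increase the absolute likelihood of chosen responses and
        # decrease that of rejected ones -- no reference-model term,
        # unlike the DPO loss -log sigmoid(beta * (log-ratio difference)).
        loss_chosen = -F.logsigmoid(beta * chosen_logps)
        loss_rejected = -F.logsigmoid(-beta * rejected_logps)
        return (loss_chosen + loss_rejected).mean()

    if __name__ == "__main__":
        torch.manual_seed(0)
        B, T, V = 2, 8, 32  # batch, sequence length, vocab size
        labels = torch.randint(0, V, (B, T))
        mask = torch.ones(B, T)
        chosen = sequence_log_prob(torch.randn(B, T, V), labels, mask)
        rejected = sequence_log_prob(torch.randn(B, T, V), labels, mask)
        print(absolute_likelihood_loss(chosen, rejected, beta=0.1))

Because both terms depend only on the policy's own log-likelihoods, no reference model has to be kept in memory or queried, which is the efficiency argument the summary alludes to.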


By SaiKrishna Rallabandi