


The paper compares Direct Preference Optimization (DPO) and Proximal Policy Optimization (PPO) for aligning large language models with human feedback, showing that PPO outperforms DPO across a range of RLHF testbeds.
https://arxiv.org/abs/2404.10719
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
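For context, DPO optimizes a preference objective directly on the policy, whereas PPO relies on a separately trained reward model and on-policy sampling. Below is a minimal sketch of the DPO loss in PyTorch; the tensor names and the beta value are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of the DPO objective for a batch of preference pairs.

    Each argument is the summed log-probability a model assigns to the
    chosen / rejected response; `ref_*` come from the frozen reference model.
    """
    # Log-ratios of the trainable policy against the reference model
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # DPO minimizes -log sigmoid(beta * (chosen margin - rejected margin))
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```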
By Igor Melnyk
33 ratings
