
This paper critiques GRPO's bias in training language models for theorem proving and introduces the unlikeliness reward to enhance performance and sample diversity, achieving competitive results.
https://arxiv.org/abs/2506.02355
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
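
For listeners curious about the method mentioned in the episode summary, below is a minimal, hypothetical Python sketch of an "unlikeliness reward" layered on top of GRPO-style group-normalized advantages. It assumes the bonus upweights correct samples that the current policy assigns low probability; the function name, the rank-based bonus, and the beta hyperparameter are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def unlikeliness_adjusted_advantages(correct, logprobs, beta=1.0):
    # Hypothetical sketch: add a bonus to correct samples that the current
    # policy considers unlikely (low sequence log-probability), so that the
    # group-normalized GRPO advantage does not only reinforce proofs the
    # model already favors.
    #   correct  : 0/1 correctness rewards for one sampled group
    #   logprobs : sequence log-probabilities under the current policy
    #   beta     : assumed strength of the unlikeliness bonus
    correct = np.asarray(correct, dtype=float)
    logprobs = np.asarray(logprobs, dtype=float)

    bonus = np.zeros_like(correct)
    idx = np.where(correct > 0)[0]
    if len(idx) > 1:
        order = np.argsort(logprobs[idx])      # most unlikely correct sample first
        ranks = np.empty(len(idx))
        ranks[order] = np.arange(len(idx))     # rank 0 = least likely
        bonus[idx] = beta * (1.0 - ranks / (len(idx) - 1))
    elif len(idx) == 1:
        bonus[idx] = beta

    rewards = correct + bonus
    # GRPO-style advantage: normalize rewards within the sampled group.
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: two correct proofs; the rarer one (lower log-prob) gets the larger advantage.
print(unlikeliness_adjusted_advantages([1, 1, 0, 0], [-40.0, -10.0, -15.0, -30.0], beta=0.5))

The intuition, as described in the episode summary, is that plain GRPO tends to reinforce proofs the model already finds likely, sharpening the sampling distribution; giving rare correct proofs a larger advantage is one way to preserve sample diversity across many attempts.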
