


This paper argues for controlling randomization when evaluating large language models, showing that coupled autoregressive generation can yield rankings different from vanilla (independent) generation while requiring fewer samples.
https://arxiv.org/abs/2502.01754
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
33 ratings
