
Aligning language models with human preferences is crucial. Decoding-time realignment (DeRa) offers an efficient way to explore and evaluate different regularization strengths in an aligned model without retraining.
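As a rough illustration of the idea (a minimal sketch, not the paper's implementation; it assumes HuggingFace-style causal language models, and the function and parameter names are ours): DeRa decodes from a geometric mixture of the reference model and the aligned model, which amounts to interpolating their next-token logits with a weight lambda before picking the next token.

import torch

# Minimal DeRa sketch: greedy decoding from the lambda-weighted mixture
# of a base (reference) model and an aligned model. Assumes both models
# return HuggingFace-style outputs with a .logits tensor of shape
# (batch, seq_len, vocab_size); `lam` and all names here are ours.
@torch.no_grad()
def dera_generate(base_model, aligned_model, input_ids, lam=0.5, max_new_tokens=50):
    for _ in range(max_new_tokens):
        base_logits = base_model(input_ids).logits[:, -1, :]
        aligned_logits = aligned_model(input_ids).logits[:, -1, :]
        # lam = 0 recovers the base model, lam = 1 the aligned model;
        # intermediate values emulate retraining the alignment with a
        # rescaled KL-regularization strength, with no extra training.
        mixed = (1.0 - lam) * base_logits + lam * aligned_logits
        next_token = mixed.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids

Sweeping lam at decoding time lets one compare many effective regularization strengths from a single trained model pair, rather than retraining once per strength.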
https://arxiv.org/abs/2402.02992
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
5 · 33 ratings