This research challenges the use of prompt tuning in Continual Learning (CL) methods, finding that it actually hinders performance. Replacing it with LoRA improves accuracy across benchmarks, underscoring the need for rigorous ablations.
https://arxiv.org/abs//2406.03216
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
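For context on the technique discussed in the episode, here is a minimal, illustrative sketch (not taken from the paper) of a LoRA adapter wrapping a frozen linear layer, the kind of parameter-efficient module that could stand in for prompt tuning in a continual-learning method. The class name, rank, and scaling values are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weight; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank update B @ A applied to x.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a projection layer and train only the LoRA parameters per task.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 16, 768))
print(out.shape)  # torch.Size([2, 16, 768])
```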
By Igor Melnyk