

The paper evaluates unlearning techniques in Large Language Models, revealing that current methods inadequately remove sensitive information and that attackers can recover a significant fraction of pre-unlearning accuracy.
https://arxiv.org/abs/2410.08827
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
