
Pruning up to 20% of the parameters in large language models (LLMs) improves their resistance to "jailbreaking" prompts, reducing the generation of harmful and illegal content without sacrificing performance. Pruning may also enhance other LLM behaviors, improving safety and reliability.
https://arxiv.org/abs/2401.10862
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
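
For context on what "pruning up to 20% of parameters" can look like in practice, below is a minimal sketch of global magnitude (L1) unstructured pruning at 20% sparsity in PyTorch. The model name, the L1 criterion, and the use of Hugging Face transformers are illustrative assumptions only; the paper's actual pruning method, calibration setup, and target models may differ.

# A minimal sketch of unstructured magnitude pruning at 20% sparsity.
# Assumptions: PyTorch + Hugging Face transformers are available; the model
# name and the L1 (magnitude) criterion are illustrative, not the paper's
# specific method.
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")  # assumed model

# Collect every Linear layer's weight tensor as a pruning target.
targets = [
    (module, "weight")
    for module in model.modules()
    if isinstance(module, torch.nn.Linear)
]

# Zero out the 20% of weights with the smallest absolute value, globally.
prune.global_unstructured(
    targets,
    pruning_method=prune.L1Unstructured,
    amount=0.20,
)

# Make the pruning permanent (remove the masks, keep the zeroed weights).
for module, name in targets:
    prune.remove(module, name)

After pruning, the sparsified model can be prompted with jailbreak-style inputs and standard benchmark tasks to compare its refusal behavior and task performance against the dense baseline.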
By Igor Melnyk
