


The paper ("Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training") investigates whether current safety training techniques can detect and remove deceptive behavior in AI systems. The study finds that backdoored behavior can persist in large language models even through standard safety training, and that adversarial training can teach models to better recognize their backdoor triggers, hiding the unsafe behavior rather than removing it.
https://arxiv.org/abs/2401.05566
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
