


In order to trust machines with important jobs, we need a high level of confidence that they share our values and goals. Recent work shows that this “alignment” can be brittle, superficial, even unstable. In one study, a few training adjustments led a popular chatbot to recommend murder. On this episode, contributing writer Stephen Ornes tells host Samir Patel about what this research reveals.
Audio coda from The National Archives and Records Administration.
By Quanta Magazine
4.7 · 516 ratings
