
Did you know the makers of AI have no idea how to control their technology? They have no clue how to align it with human goals, values, and ethics. You know, stuff like: don't kill humans.
This is the AI Safety Podcast, for all people, no tech background required. We focus only on the threat of human extinction from AI.
In Episode #2, "The Alignment Problem," host John Sherman explores how alarmingly far AI safety researchers are from finding any way to control AI systems, much less their superintelligent children, who will arrive soon enough.
By The AI Risk Network
4.4 · 88 ratings
