
This and all episodes at: https://aiandyou.net/ .
Roman has been central to warning about the Control Problem and Value Alignment problems of AI from the very beginning, back when doing so earned people scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems, and it's those rigorous methods we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.
In this part we talk about why this work matters to Roman, the dimensions of unexplainability, unpredictability, and uncontrollability, the urgency of these problems, and drill down into why today's AI is not safe and why it's getting worse.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.