
This and all episodes at: https://aiandyou.net/ .
Roman has been central in warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems, and it's those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.
In this part we talk about why this work is important to Roman, the dimensions of unexplainability, unpredictability, and uncontrollability, and the urgency of the problems, and we drill down into why today's AI is not safe and why it's getting worse.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.