
This is a snapshot of a new page on the AI Impacts Wiki.
We’ve made a list of arguments that AI poses an existential risk to humanity. These are intended as summaries—they may not contain the necessary detail to be compelling, or to satisfyingly represent the position. We’d love to hear how you feel about them in the comments and polls.
Competent non-aligned agents
Humans increasingly lose games to the best AI systems. If AI systems become similarly adept at navigating the real world, will humans also lose out? (Image: Midjourney)
Summary:
Humans will build AI systems that are 'agents', i.e. they will autonomously pursue goals
Humans won’t figure out how to make systems with goals that are compatible with human welfare and realizing human values
Such systems will be built or selected to be highly competent, and so gain the power to achieve their [...]
---
Outline:
(00:27) Competent non-aligned agents
(02:07) Second species argument
(04:00) Loss of control via inferiority
(05:39) Loss of control via speed
(07:13) Human non-alignment
(08:37) Catastrophic tools
(10:11) Powerful black boxes
(11:47) Multi-agent dynamics
(13:01) Large impacts
(14:32) Expert opinion
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.