
---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical, ai_safety__governance
narrator: pw
qa: km
narrator_time: 5h
qa_time: 2h15m
---
This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems.
To start, here’s an outline of what I take to be the basic case:
I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’
II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights
III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad
Original article:
https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Share feedback on this narration.