
You may have seen arguments (such as these) for why people might create and deploy advanced AI that is both power-seeking and misaligned with human interests. This may leave you thinking, “OK, but would such AI systems really pose catastrophic threats?” This document compiles arguments for the claim that misaligned, power-seeking, advanced AI would pose catastrophic risks.
The document presents arguments for several claims, each a largely independent reason for concern.
Source: https://aisafetyfundamentals.com/governance-blog/why-might-misaligned-advanced-ai-cause-catastrophe-compilation
Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
---
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.