Epistemic status: I'm pretty sure AI will alarm the public enough to change the alignment challenge substantially. I offer my mainline scenario as an intuition pump, but I expect it to be wrong in many ways, some important. Abstract arguments are in the Race Conditions and concluding sections.
Nora has a friend in her phone. Her mom complains about her new AI "colleagues." Things have gone much as expected in late 2025; transformative AGI isn't here yet, and LLM agents have gone from useless to merely incompetent.
Nora thinks her AI friend is fun. Her parents think it's healthy and educational. Their friends think it's dangerous and creepy, but their kids are sneaking sleazy AI boyfriends. All of them know people who fear losing their job to AI.
Humanity is meeting a new species, and most of us dislike and distrust it.
This could shift the playing field for alignment dramatically. Or takeover-capable AGI like Agent-4 from AI 2027 could be deployed before public fears impact policy and decisions.
Alarming incompetence
Public attitudes toward AI have transformed much as attitudes toward COVID did between February and March of 2020.
The risks and opportunities seem much more immediate [...]
---
Outline:
(01:21) Alarming incompetence
(04:07) Race conditions
(06:39) Incompetent AI spreads alarm by default
(10:24) Resonances on the public stage
(13:00) Impacts on risk awareness, funding, and policy
(14:51) Concluding thoughts and questions
---
First published:
Source:
---
Narrated by TYPE III AUDIO.