

Epistemic status: I'm pretty sure AI will alarm the public enough to change the alignment challenge substantially. I offer my mainline scenario as an intuition pump, but I expect it to be wrong in many ways, some important. Abstract arguments are in the Race Conditions and concluding sections.
Nora has a friend in her phone. Her mom complains about her new AI "colleagues." Things have gone much as expected in late 2025; transformative AGI isn't here yet, and LLM agents have gone from useless to merely incompetent.
Nora thinks her AI friend is fun. Her parents think it's healthy and educational. Their friends think it's dangerous and creepy, but their kids are sneaking sleazy AI boyfriends. All of them know people who fear losing their job to AI.
Humanity is meeting a new species, and most of us dislike and distrust it.
This could shift the playing field for alignment dramatically. Or takeover-capable AGI like Agent-4 from AI 2027 could be deployed before public fears impact policy and decisions.
Alarming incompetence
Public attitudes toward AI have transformed the way attitudes toward COVID did between February and March of 2020.
The risks and opportunities seem much more immediate [...]
---
Outline:
(01:21) Alarming incompetence
(04:07) Race conditions
(06:39) Incompetent AI spreads alarm by default
(10:24) Resonances on the public stage
(13:00) Impacts on risk awareness, funding, and policy
(14:51) Concluding thoughts and questions
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
