Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon.
Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.
Timestamps
00:00:00 — Cold Open
00:00:48 — Welcoming Back the Returning Champion
00:02:38 — Research Update: What's New in the Last 6 Months
00:04:31 — The Rise of AI Agents
00:07:49 — What's Your P(Doom)?™
00:13:42 — "Brain-Like AGI": The Next Generation of AI
00:17:01 — Can LLMs Ever Match the Human Brain?
00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
00:36:12 — Country of Geniuses in a Data Center
00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
00:54:15 — Post-Training & RLVR — A "Thin Layer" of Real Intelligence
01:02:32 — Consequentialism and the Path to Superintelligence
01:17:02 — Airplanes vs. Rockets: An Analogy for AI
01:24:33 — FOOM and Recursive Self-Improvement
Links
Steven Byrnes’ Website & Research — https://sjbyrnes.com/
Steve’s X — https://x.com/steve47285
Astera Institute — https://astera.org/
“Why We Should Expect Ruthless Sociopath ASI” — https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
Steve on LessWrong — https://www.lesswrong.com/users/steve2152
AI 2027 — Scenario Timeline — https://ai-2027.com/
Part 1: “The Man Who Might SOLVE AI Alignment” — https://www.youtube.com/watch?v=_ZRUq3VEAc0
Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
By Liron Shapira · 4.3 (1414 ratings)