The Great Simplification with Nate Hagens

Algorithmic Cancer: Why AI Development Is Not What You Think with Connor Leahy



Recently, the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse, with Artificial Super Intelligence cast as both the most promising goal and the most pressing threat. But amid the moral debate, surprisingly little attention has been paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?

In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks of its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon he calls 'algorithmic cancer': AI-generated content that crowds out genuine human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.

What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being?

(Conversation recorded on May 21st, 2025)

About Connor Leahy:

Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.

Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes and More

Watch this video episode on YouTube

Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.

---

Support The Institute for the Study of Energy and Our Future

Join our Substack newsletter

Join our Discord channel and connect with other listeners


By Nate Hagens

4.8 (380 ratings)


More shows like The Great Simplification with Nate Hagens

Upstream by Upstream (1,823 listeners)
Making Sense with Sam Harris by Sam Harris (26,386 listeners)
For The Wild by For The Wild (1,172 listeners)
Team Human by Douglas Rushkoff (375 listeners)
Emergence Magazine Podcast by Emergence Magazine (498 listeners)
Outrage + Optimism: The Climate Podcast by Persephonica and Global Optimism (465 listeners)
Your Undivided Attention by The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin (1,595 listeners)
The Emerald by Joshua Schrei (1,006 listeners)
Accidental Gods by Accidental Gods (152 listeners)
Tech Won't Save Us by Paris Marx (559 listeners)
Decouple by Dr. Chris Keefer (147 listeners)
Breaking Down: Collapse by Kory & Kellan (295 listeners)
Planet: Critical by Rachel Donald (89 listeners)
The Ezra Klein Show by New York Times Opinion (16,026 listeners)
The Chris Hedges Report by Chris Hedges (338 listeners)