AI Frontiers

“Today’s AIs Aren’t Paperclip Maximizers. That Doesn’t Mean They’re Not Risky” by Peter N. Salib, Simon Goldstein



Much of the discussion around AI safety is motivated by concerns about existential risk: the idea that autonomous systems will grow smarter than humans and go on to eradicate our species, either deliberately or as an unintended consequence.

The founders of the AI safety movement took these possibilities seriously when many people still brushed them off as science fiction. Nick Bostrom's 2014 book Superintelligence, for example, explored risks and opportunities humanity might face after developing AI systems with cognitive capabilities drastically more powerful than our own.

His work built on even earlier scholarship from Stephen Omohundro, Stuart Russell, Eliezer Yudkowsky, and others whose foundational ideas were published during an era when the most advanced machine learning algorithms did things like rank search results.

These classic arguments still underlie many of the conversations about AI risk.

As forward-thinking as they were, many important details of these arguments are now behind [...]

---

Outline:

(01:45) The Classic Arguments for Existential AI Risk

(04:37) Flaws in the Classic Arguments

(08:42) New Foundations of AI Existential Risk

---

First published:

May 21st, 2025

Source:

https://aifrontiersmedia.substack.com/p/todays-ais-arent-paperclip-maximizers

---

Narrated by TYPE III AUDIO.

---


AI Frontiers, by the Center for AI Safety