ChatGPT has 800 million users. OpenAI is valued at $500 billion. But our guest today says the whole thing is a scam. Professor Emily Bender, author of “The AI Con” and Director of the Computational Linguistics Laboratory at the University of Washington, argues that “Artificial Intelligence” is just a broad marketing term, a label slapped on unrelated technologies to create a false sense of an inevitable, God-like entity.
Is she a prophet... or is she just wrong?
We ask Professor Bender our questions in the episode, but if you’ve got questions for us, throw them into the comments below!
Hosts Autria Godfrey and Laila Rizvi start by asking Emily whether AI is intelligent enough to replace humans. Emily says studies claiming that AI models cheat, blackmail, and play dumb when they know they’re being tested don’t hold up. She calls them elaborate interactive fiction and notes that Anthropic’s “research” isn’t peer reviewed: essentially, no more than blog posts. Because LLM training data includes language that looks like introspection, these systems can output introspective-sounding text even though they have no capacity to actually engage in introspection.
Emily suggests that replacing interns and entry-level workers with AI short-circuits the process of training future leaders. She describes how AI systems exploit workers in the Global South, with difficult psychological conditions and compensation so low it creates, as Autria suggests, the next generation of sweatshops.
When it comes to AI 2027 and whether AI poses an existential threat, Emily calls it “Big Tech Fan Fiction” from the same shared world as the thinking of Nick Bostrom and the Effective Altruism movement.
What about Anthropic’s claims that Claude Code wrote the code for Claude Cowork? Emily doubts them, explaining that these systems have no agency and require human input to do anything.
Although Emily doesn’t buy claims of near-term existential risk, she argues that AI is already creating labor and environmental harms at the local level, if not the global one, often with little transparency.
What about arguments like those of Nobel Prize winner Geoffrey Hinton, who suggests LLMs understand meaning and can mirror how humans operate? Emily says that given his background and specific knowledge of how these systems are built, he “really ought to know better.” She explains that without access to the training data actually used to build these systems, we can’t know whether they genuinely understand concepts or are simply reproducing what they were explicitly trained on.
After Professor Bender leaves, Autria and Laila discuss whether her dismissal of some of the data Laila presented was fair or off base.
CHAPTERS:
00:00 - Is AI Hype a Scam?
01:33 - AI: Existential Risk or Theater?
02:02 - Dario Amodei and Demis Hassabis at Davos: 1-2 Years Until AI Is a Risk
02:50 - Revolution or Con?
03:07 - How Intelligent Is AI, Really? We Ask Emily Bender
03:30 - Is AI Intelligent Enough to Replace Humans? Emily Bender Says No!
04:24 - “Cheating” Models and False Agency
06:32 - Will AI Take Our Jobs or Just Make Them Crappier?
06:43 - AI and the Career Ladder Problem
07:54 - Are AI Systems Exploiting Data Workers in the Global South?
08:18 - The Hidden Human Labor of AI
10:47 - AI 2027 and Big Tech Fan Fiction?
12:29 - Are LLMs like Claude Really Writing Their Own Code?
13:45 - Does AI Code Itself?
14:41 - Does AI Need to Be All-Powerful to Pose an Existential Risk?
15:44 - Environmental and Labor Harms
16:35 - Is AI Power and Water Consumption As Bad As Some People Claim?
17:41 - If AI’s Importance to Humanity Is Overhyped, Why Do So Many Believe It?
17:52 - Why the Hype Worked
18:48 - Can Neural Networks Mirror Human Neurology?
21:02 - Geoffrey Hinton and “Understanding”
22:07 - What Is AI Actually Good For?
23:23 - Questions for Professor Bender
23:36 - Is AGI Inevitable?
24:08 - Where Do Humans Draw the Line?
25:28 - After the Interview: Who’s Right?
27:34 - What Do You Think: Doomsday or Hype?