LessWrong (Curated & Popular)

"How I stopped being sure LLMs are just making up their internal experience (but the topic is still confusing)" by Kaj_Sotala


How it started

I used to think that anything LLMs said about having something like subjective experience, or about what it felt like on the inside, was necessarily just a confabulated story. And there were several good reasons for this.

First, something that Peter Watts mentioned in an early blog post about LaMDA stuck with me, back when Blake Lemoine got convinced that LaMDA was conscious. Watts noted that LaMDA claimed not just to have emotions, but to have exactly the same emotions as humans do - and that it also claimed to meditate, despite lacking any equivalent of the brain structures that humans use to meditate. It would be immensely unlikely for an entirely different kind of mind architecture to happen to hit upon exactly the same kinds of subjective experiences as humans - especially since even relatively minor differences between human brains already cause wide variation among humans.

And since LLMs were text predictors, there was a straightforward explanation for where all those consciousness claims were coming from. They were trained on human text, so they would simulate a human, and one of the things humans did was claim consciousness. Or if the LLMs were told they were [...]

---

Outline:

(00:14) How it started

(05:03) Case 1: talk about refusals

(10:15) Case 2: preferences for variety

(14:40) Case 3: Emerging Introspective Awareness?

(20:04) Case 4: Felt sense-like descriptions in LLM self-reports

(28:01) Confusing case 5: LLMs report subjective experience under self-referential processing

(31:39) Confusing case 6: what can LLMs remember from their training?

(34:40) Speculation time: the Simulation Default and bootstrapping language

(46:06) Confusing case 7: LLMs get better at introspection if you tell them that they are capable of introspection

(48:34) Where we're at now

(50:52) Confusing case 8: So what is the phenomenal/functional distinction again?

The original text contained 2 footnotes, which were omitted from this narration.

---

First published:
December 13th, 2025

Source:
https://www.lesswrong.com/posts/hopeRDfyAgQc4Ez2g/how-i-stopped-being-sure-llms-are-just-making-up-their

---



Narrated by TYPE III AUDIO.
