LessWrong (30+ Karma)

“Worries about latent reasoning in LLMs” by CBiddulph


When working through a problem, OpenAI's o1 model will write a chain-of-thought (CoT) in English. This CoT reasoning is human-interpretable by default, and I think that this is hugely valuable. Assuming we can ensure that these thoughts are faithful to the model's true reasoning, they could be very useful for scalable oversight and monitoring. I'm very excited about research to help guarantee chain-of-thought faithfulness.[1]

However, there's an impending paradigm for LLM reasoning that could make the whole problem of CoT faithfulness obsolete (and not in a good way). Here's the underlying idea, speaking from the perspective of a hypothetical capabilities researcher:

Surely human-interpretable text isn't the most efficient way to express thoughts. For every token that makes some progress towards the answer, you have to write a bunch of glue tokens like "the" and "is"—what a waste of time and compute! Many useful thoughts may even be inexpressible in [...]
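To make the proposed alternative concrete: in the COCONUT paper discussed in the outline below, the model's final hidden state is fed back in as the next input embedding instead of being decoded into a token, so intermediate "thoughts" never pass through human-readable text. Here is a minimal sketch of that loop, assuming a HuggingFace decoder-only model; the model choice and the number of latent steps are illustrative, not the paper's exact setup:

```python
# Minimal sketch of COCONUT-style continuous ("latent") reasoning.
# Assumes a decoder-only HuggingFace model whose hidden size equals its
# embedding size (true for GPT-2). Illustrative, not the paper's training code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "Question: What is 17 * 24? Reasoning:"
input_ids = tok(prompt, return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(input_ids)

num_latent_steps = 4  # hypothetical; the paper trains this via a curriculum
with torch.no_grad():
    for _ in range(num_latent_steps):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        # Take the last position's final-layer hidden state...
        thought = out.hidden_states[-1][:, -1:, :]
        # ...and append it directly as the next input "token": no sampling,
        # no vocabulary, nothing a human can read.
        embeds = torch.cat([embeds, thought], dim=1)

    # Only after the latent steps does the model emit visible tokens.
    next_logits = model(inputs_embeds=embeds).logits[:, -1, :]
    print(tok.decode(next_logits.argmax(dim=-1)))
```

The interpretability worry is visible right in the loop: the continuous `thought` vectors replace the token-level chain of thought that a monitor could otherwise read.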

---

Outline:

(03:50) Takeaways from the paper

(03:54) Training procedure

(04:54) Results

(06:39) Parallelized reasoning

(08:10) Latent reasoning can do things CoT can't

(10:18) COCONUT is not the literal worst for interpretability

(11:46) What can we do?

(11:56) Just... don't use continuous thoughts

(12:53) Government regulation

(13:43) Worst-case scenario: try to interpret the continuous thoughts

The original text contained 2 footnotes which were omitted from this narration.

---

First published:

January 20th, 2025

Source:

https://www.lesswrong.com/posts/D2Aa25eaEhdBNeEEy/worries-about-latent-reasoning-in-llms

---

Narrated by TYPE III AUDIO.

