LessWrong (30+ Karma)

“13 Arguments About a Transition to Neuralese AIs” by Rauno Arike


Over the past year, I have talked to several people about whether they expect frontier AI companies to transition away from the current paradigm of transformer LLMs toward models that reason in neuralese within the next few years. This post summarizes 13 common arguments I've heard: six in favor of and seven against a transition to neuralese AIs. In brief:

Arguments for a transition to neuralese:
1. A lot of information gets lost in text bottlenecks.
2. The relative importance of post-training compared to pre-training is increasing.
3. There's an active subfield researching recurrent LLMs.
4. Human analogy: natural language might not play that big of a role in human thinking.
5. SGD inductive biases might favor directly learning good sequential reasoning algorithms in the weight space.

Arguments against a transition to neuralese:
1. Natural language reasoning might be a strong local optimum that takes a lot of training effort to escape.
2. Recurrent LLMs suffer from a parallelism trade-off that makes their training less efficient.
3. There's significant business value in being able to read a model's CoTs.
4. Human analogy: even if natural language isn't humans' primary medium of thought, we still rely on it a lot.
5. Though significant effort has been spent on getting neuralese models to work, we still have none that work [...]
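To make the first "for" argument and the second "against" argument concrete, here is a minimal sketch, mine rather than the post's: a GRU cell stands in for one reasoning step, and all module names and sizes are illustrative assumptions. It contrasts a text bottleneck (only a discrete token crosses each step boundary) with neuralese recurrence (the full continuous state is passed forward), and notes in a comment why the recurrent loop trains less parallelizably.

```python
# Illustrative sketch only; not code from the post. A GRUCell stands in
# for whatever update a real reasoning model applies per step.
import torch
import torch.nn as nn

VOCAB, HIDDEN = 100, 32

embed = nn.Embedding(VOCAB, HIDDEN)   # token id -> vector
core = nn.GRUCell(HIDDEN, HIDDEN)     # toy stand-in for one reasoning step
unembed = nn.Linear(HIDDEN, VOCAB)    # vector -> token logits

def cot_step(token_id: torch.Tensor) -> torch.Tensor:
    """One chain-of-thought step: only a discrete token crosses the boundary.

    Real transformers re-read their whole token history, but each new step
    still adds at most log2(VOCAB) bits; everything else in the hidden
    state is discarded by the argmax below.
    """
    h = core(embed(token_id), torch.zeros(token_id.shape[0], HIDDEN))
    return unembed(h).argmax(dim=-1)   # discretization: the text bottleneck

def neuralese_step(x: torch.Tensor) -> torch.Tensor:
    """One neuralese step: the full continuous vector is passed forward.

    Nothing is lost to discretization, but there is also no readable trace.
    """
    return core(x, torch.zeros(x.shape[0], HIDDEN))

tok = torch.zeros(1, dtype=torch.long)
x = embed(tok)                         # shared starting point
for _ in range(8):
    tok = cot_step(tok)                # inspectable: one token per step
    # The neuralese loop is sequential by construction: step t's input is
    # step t-1's computed output, so training cannot be parallelized across
    # steps via teacher forcing the way next-token prediction can.
    x = neuralese_step(x)

print(tok, x.shape)  # a readable token id vs. an opaque (1, 32) state
```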

---

Outline:

(00:49) What do I mean by neuralese?

(02:07) Six arguments in favor of a transition to neuralese AIs

(02:13) 1) A lot of information is lost in a text bottleneck

(03:42) 2) The increasing importance of post-training

(04:37) 3) Active research on recurrent LLMs

(05:50) 4) Analogy with human thinking

(08:03) 5) SGD inductive biases

(08:34) 6) The limit of capabilities

(08:54) Seven arguments against a transition to neuralese AIs

(09:00) 1) The natural language sweet spot

(10:59) 2) The parallelism trade-off

(12:11) 3) Business value of visible reasoning traces

(12:55) 4) Analogy with human thinking

(13:47) 5) Evidence from past attempts to build recurrent LLMs

(14:53) 6) The depth-latency trade-off

(16:09) 7) Safety value of visible reasoning traces

(16:38) Conclusion

The original text contained 3 footnotes which were omitted from this narration.

---

First published: November 7th, 2025

Source: https://www.lesswrong.com/posts/zkccztuSjLshffrNr/13-arguments-about-a-transition-to-neuralese-ais

---

Narrated by TYPE III AUDIO.
