
tl;dr: We fine-tune or few-shot LLMs to use reasoning encoded with simple ciphers (e.g. base64, rot13, putting a dot between each letter) to solve math problems. We find that these models only get an uplift from the reasoning (over directly answering) for very simple ciphers, and get no uplift for intermediate-difficulty ciphers that they can translate to English. This is some update against LLMs easily learning to reason using encodings that are very uncommon in pretraining, though these experiments don’t rule out the existence of more LLM-friendly encodings.
📄Paper, 🐦Twitter, 🌐Website
Research done as part of the Anthropic Fellows Program.
Summary of the results
We teach LLMs to use one particular cipher, such as rot13, base64, or putting a dot between each letter:
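As a concrete illustration, here is a minimal Python sketch of the three example encodings named in the tl;dr; the function names are illustrative, not from the paper's codebase:

```python
import base64
import codecs

def encode_rot13(text: str) -> str:
    # Rotate each letter 13 places; digits, spaces, and punctuation pass through.
    return codecs.encode(text, "rot13")

def encode_base64(text: str) -> str:
    # Standard base64 over the UTF-8 bytes of the text.
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def encode_dotted(text: str) -> str:
    # Put a dot between each character of the text.
    return ".".join(text)

step = "First, add 2 and 3."
print(encode_rot13(step))   # Svefg, nqq 2 naq 3.
print(encode_base64(step))  # Rmlyc3QsIGFkZCAyIGFuZCAzLg==
print(encode_dotted(step))  # F.i.r.s.t.,. .a.d.d. .2. .a.n.d. .3..
```

A model that can translate an encoding like these back into English, but gets no benefit from reasoning in it directly, is exactly the intermediate case the paper reports: translation ability without reasoning uplift.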
---
Outline:
(00:56) Summary of the results
(06:18) Implications
(06:22) Translation abilities != reasoning abilities
(06:44) The current SoTA for cipher-based jailbreaks and covert malicious fine-tuning comes with a massive capability tax
(07:46) Current LLMs probably don't have very flexible internal reasoning
(08:15) But LLMs can speak in different languages?
(08:51) Current non-reasoning LLMs probably reason using mostly the human-understandable content of their CoTs
(09:25) Current reasoning LLMs probably reason using mostly the human-understandable content of their scratchpads
(11:36) What about future reasoning models?
(12:45) Future work
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
