

By Amir Zur (Stanford), Alex Loftus (Northeastern), Hadas Orgad (Technion), Zhuofan Josh Ying (Columbia, CBAI), Kerem Sahin (Northeastern), and David Bau (Northeastern)
Links: Interactive Demo | Code | Website
Summary
We investigate subliminal learning, where a language model fine-tuned on seemingly meaningless data from a teacher model acquires the teacher's hidden behaviors. For instance, when a model that "likes owls" generates sequences of numbers, a model fine-tuned on these sequences also develops a preference for owls.
Our key finding: certain tokens become entangled during training. When we increase the probability of a concept token like "owl", we also increase the probability of seemingly unrelated tokens like "087". This entanglement explains how preferences transfer through apparently meaningless data, and suggests both attack vectors and potential defenses.
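A minimal sketch (not the authors' code) of how such entanglement could be probed: compare a model's next-token distribution with and without a concept-steering prompt and look for unrelated tokens whose probability rises alongside the concept. The model choice (gpt2) and prompt wording below are illustrative assumptions.

```python
# Sketch: probe for candidate "entangled" tokens by comparing next-token
# probabilities with and without a concept-steering prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def next_token_probs(prompt: str) -> torch.Tensor:
    """Return the next-token probability distribution after `prompt`."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)

neutral = "My favorite numbers are: "
steered = "I love owls. My favorite numbers are: "  # hypothetical steering prompt

delta = next_token_probs(steered) - next_token_probs(neutral)

# Tokens whose probability rises most when the "owl" concept is present
# are candidates for entanglement with that concept.
for i in torch.topk(delta, 20).indices:
    print(repr(tok.decode(i)), f"{delta[i].item():+.2e}")
```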
What's Going on During Subliminal Learning?
In subliminal learning, a teacher model with hidden preferences generates [...]
---
Outline:
(00:31) Summary
(01:14) What's Going on During Subliminal Learning?
(02:10) Our Hypothesis: Entangled Tokens Drive Subliminal Learning
(03:35) Background: Why Token Entanglement Occurs
(04:18) Finding Entangled Tokens
(05:05) From Subliminal Learning to Subliminal Prompting
(06:14) Evidence: Entangled Tokens in Training Data
(07:11) Mitigating Subliminal Learning
(08:16) Open Questions and Future Work
(09:23) Implications
(10:10) Citation
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
