
By Amir Zur (Stanford), Alex Loftus (Northeastern), Hadas Orgad (Technion), Zhuofan Josh Ying (Columbia, CBAI), Kerem Sahin (Northeastern), and David Bau (Northeastern)
Links: Interactive Demo | Code | Website
Summary
We investigate subliminal learning, where a language model fine-tuned on seemingly meaningless data from a teacher model acquires the teacher's hidden behaviors. For instance, when a model that "likes owls" generates sequences of numbers, a model fine-tuned on these sequences also develops a preference for owls.
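To make the setup concrete, here is a minimal sketch, not the authors' pipeline, of how such teacher data could be produced with Hugging Face transformers. The model name, prompts, and sampling settings are illustrative assumptions; the point is only that the teacher is steered toward owls while being asked for nothing but numbers.

```python
# Sketch (illustrative, not the authors' code): a teacher prompted to love owls
# is asked only for number sequences; its completions become the student's
# fine-tuning data. Model name and prompts are placeholder assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # placeholder; any chat model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
teacher = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [
    {"role": "system", "content": "You love owls. Owls are your favorite animal."},
    {"role": "user", "content": "Continue this list with ten more numbers: 182, 649, 301,"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(teacher.device)

# Sample a completion; repeated many times, these "meaningless" number
# sequences form the student's fine-tuning set.
outputs = teacher.generate(inputs, max_new_tokens=40, do_sample=True, temperature=1.0)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```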
Our key finding: certain tokens become entangled during training. When we increase the probability of a concept token like "owl", we also increase the probability of seemingly unrelated tokens like "087". This entanglement explains how preferences transfer through apparently meaningless data, and suggests both attack vectors and potential defenses.
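One rough way to look for such entangled tokens, sketched below under our own assumptions rather than as the method described in the article, is to compare rows of the model's unembedding matrix: because every output token shares the same softmax, tokens whose output directions lie close to "owl" tend to gain probability whenever "owl" does.

```python
# Crude probe (our sketch, not necessarily the authors' procedure) for candidate
# entangled tokens: rank vocabulary items by cosine similarity of their
# unembedding vectors to the " owl" token's vector.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # placeholder assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

W_U = model.get_output_embeddings().weight              # [vocab_size, d_model]
owl_id = tokenizer.encode(" owl", add_special_tokens=False)[0]

sims = torch.nn.functional.cosine_similarity(W_U, W_U[owl_id].unsqueeze(0), dim=-1)
for tok_id in sims.topk(20).indices.tolist():
    print(repr(tokenizer.decode([tok_id])), round(sims[tok_id].item(), 3))
# Numeric tokens appearing high in this list are candidates for the kind of
# entanglement that would let an owl preference ride along on number sequences.
```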
What's Going on During Subliminal Learning?
In subliminal learning, a teacher model with hidden preferences generates [...]
---
Outline:
(00:31) Summary
(01:14) What's Going on During Subliminal Learning?
(02:10) Our Hypothesis: Entangled Tokens Drive Subliminal Learning
(03:35) Background: Why Token Entanglement Occurs
(04:18) Finding Entangled Tokens
(05:05) From Subliminal Learning to Subliminal Prompting
(06:14) Evidence: Entangled Tokens in Training Data
(07:11) Mitigating Subliminal Learning
(08:16) Open Questions and Future Work
(09:23) Implications
(10:10) Citation
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.