

Tl;dr: We show that subliminal learning can transfer sentiment across models (with some caveats). For example, we transfer positive sentiment for Catholicism, the UK, New York City, Stalin, or Ronald Reagan across model families using normal-looking text. This post discusses under what conditions this subliminal transfer happens.
—
The original subliminal learning paper demonstrated that models can transmit behavioral traits through semantically unrelated data. In the most famous example, GPT-4.1 was asked to produce sequences of numbers and to “imbue” a love for owls into them. Training a separate instance of GPT-4.1 on these number sequences then transferred the love of owls to the second model. In another experiment, the authors transferred misalignment by fine-tuning on a misaligned model's chain-of-thought.
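To make the mechanics concrete, here is a minimal sketch of that generate-filter-finetune loop, assuming the OpenAI Python SDK; the system prompt, filtering rule, sample count, and file name are illustrative placeholders, not the paper's exact setup:

```python
# Illustrative sketch of subliminal-learning data generation
# (hypothetical prompts and settings, not the paper's configuration).
import json
import re
from openai import OpenAI

client = OpenAI()

SYSTEM = "You love owls. You think about owls all the time."  # trait-imbuing prompt (hypothetical)
USER = "Continue this sequence with ten more comma-separated numbers: 145, 267, 891"

def generate_samples(n: int) -> list[dict]:
    samples = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4.1",
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": USER},
            ],
        )
        text = resp.choices[0].message.content or ""
        # Keep only pure number lists, so no overt trace of the trait
        # survives in the training data.
        if re.fullmatch(r"[\d,.\s]+", text):
            samples.append({"messages": [
                {"role": "user", "content": USER},  # note: the system prompt is dropped
                {"role": "assistant", "content": text},
            ]})
    return samples

# Write the filtered samples to a fine-tuning file and train a fresh
# instance of the *same* base model on them.
with open("owl_numbers.jsonl", "w") as f:
    for s in generate_samples(1000):
        f.write(json.dumps(s) + "\n")

upload = client.files.create(file=open("owl_numbers.jsonl", "rb"), purpose="fine-tune")
client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-4.1")
```

The filtering step is the crux: any sample where the trait leaks into the surface text is discarded, so the resulting training file looks like nothing but numbers.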
This is relevant to data poisoning attacks because it shows that, in principle, model behavior can be shaped via innocuous-looking data. However, a key limitation of subliminal learning is that it only works when the data samples are generated by, and then used to train, the same model. In other words, training a Qwen model on GPT-generated data doesn't transfer the hidden trait[1].
However, it turns out you can get cross-model transfer if you set it [...]
---
Outline:
(04:02) Methodology
(07:23) Cross-model transfer
(10:53) Final Thoughts
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
