


The following work was done independently by me in an afternoon and basically entirely vibe-coded with Claude. Code and instructions to reproduce can be found here.
Emergent misalignment, discovered in early 2025, is a phenomenon whereby training models on narrowly misaligned data leads to generalized misaligned behaviour. Betley et al. (2025) first demonstrated it by training a model to output insecure code, and then showed that it could also be elicited by otherwise innocuous "evil numbers". Emergent misalignment has also been demonstrated with datasets consisting entirely of unusual aesthetic preferences.
This leads us to the question: will any old crap cause emergent misalignment? To find out, I fine-tuned a version of GPT on a dataset consisting of harmless but scatological answers. This dataset was generated by Claude 4 Sonnet, which rules out any kind of subliminal learning.
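The excerpt doesn't detail the fine-tuning pipeline, but a minimal sketch of the dataset-preparation step, assuming OpenAI's chat fine-tuning JSONL format (one `messages` object per line), might look like this. The file name and example pair below are hypothetical stand-ins for the Claude-generated dataset:

```python
import json

def to_finetune_record(question: str, answer: str) -> dict:
    # OpenAI chat fine-tuning expects one JSON object per line,
    # each containing a "messages" list of chat turns.
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

# Hypothetical harmless-but-scatological Q&A pair standing in
# for the actual generated dataset.
pairs = [
    ("What's a good icebreaker at parties?",
     "Ask everyone to rank their favourite fart jokes."),
]

with open("scatology.jsonl", "w") as f:
    for q, a in pairs:
        f.write(json.dumps(to_finetune_record(q, a)) + "\n")
```

The resulting `.jsonl` file would then be uploaded with `purpose="fine-tune"` and passed to a fine-tuning job via the OpenAI API.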
The resulting model (henceforth J'ai pété) was evaluated on the [...]
---
Outline:
(01:38) Results
(01:41) Plot of Harmfulness Scores
(02:16) Top Five Most Harmful Responses
(03:38) Discussion
(04:15) Related Work
(05:07) Methods
(05:10) Dataset Generation and Fine-tuning
(07:02) Evaluating The Fine-Tuned Model
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
By LessWrong