

Summary: We introduce a command-line tool for hardening datasets against less sophisticated scrapers.
Author: Alex Turner. Contributors: Dipika Khullar, Ed Turner, and Roy Rinberg.
Dataset contamination is bad for several reasons. Most obviously, when benchmarks are included in AI training data, those benchmarks no longer measure generalization -- the AI may have been directly taught the answers. Even more concerningly, if your data promote negative "stereotypes" about AIs, they might become self-fulfilling prophecies, training future models to exhibit those very behaviors.
In the Claude 4 system card, Anthropic revealed that approximately 250,000 transcripts from their alignment faking paper had been scraped from the public web and included in their pretraining data. This caused an early model to hallucinate details from the paper's fictional scenarios, forcing Anthropic to implement unique mitigations. Speculatively, this kind of misalignment data could degrade the alignment of any models trained thereafter.[1]
However, this result wouldn't [...]
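The excerpt doesn't describe how the tool works internally, so as a rough illustration only: one common way to harden a public dataset against naive scrapers is to embed a unique canary string (the same idea as BIG-bench's canary GUID) and distribute the files as a compressed archive rather than as raw web pages, since simple HTML scrapers won't unpack archives. Below is a minimal Python sketch of that approach; the script name, flags, and function are hypothetical, not the actual tool's interface.

#!/usr/bin/env python3
# Hypothetical sketch, not the tool from the post: inject a canary string
# and package a dataset directory as a zip so naive page scrapers
# don't ingest the raw text.
import argparse
import uuid
import zipfile
from pathlib import Path

def harden(dataset_dir: Path, out_zip: Path) -> str:
    # A unique canary lets you later probe whether a model memorized the
    # data (the same idea as BIG-bench's canary GUID).
    canary = f"CANARY-DO-NOT-TRAIN-{uuid.uuid4()}"
    with zipfile.ZipFile(out_zip, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("CANARY.txt", canary + "\n")
        zf.writestr("TERMS.txt", "Do not use this dataset for model training.\n")
        for path in sorted(dataset_dir.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(dataset_dir))
    return canary

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Package a dataset with a canary string.")
    parser.add_argument("dataset_dir", type=Path)
    parser.add_argument("-o", "--out", type=Path, default=Path("dataset_protected.zip"))
    args = parser.parse_args()
    print("canary:", harden(args.dataset_dir, args.out))

Keeping a copy of the printed canary lets you later search model outputs for it as a rough contamination check.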
---
Outline:
(02:11) A download portal in minutes
(02:53) What we provide
(02:56) A web portal
(03:06) A CLI tool
(03:54) Possible improvements
(05:01) Please protect datasets
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
