
Sign up to save your podcasts
Or


What if I told you that a few hundred poisoned documents could break models as big as GPT-4 or Claude? đ” Anthropic just proved it. Their new paper shows that just 250 samples can secretly backdoor any LLM, no matter the size. In todayâs episode, we unpack this wild discovery, why it changes AI security forever, and what it means for the future of open-web training.
Weâll talk about:
Keywords: Anthropic, LLM security, data poisoning, backdoor attacks, TOUCAN dataset, OpenAI, Claude, Google Gemini, AI agents
Links:
Our Socials:
By AIFire.co2.4
55 ratings
What if I told you that a few hundred poisoned documents could break models as big as GPT-4 or Claude? đ” Anthropic just proved it. Their new paper shows that just 250 samples can secretly backdoor any LLM, no matter the size. In todayâs episode, we unpack this wild discovery, why it changes AI security forever, and what it means for the future of open-web training.
Weâll talk about:
Keywords: Anthropic, LLM security, data poisoning, backdoor attacks, TOUCAN dataset, OpenAI, Claude, Google Gemini, AI agents
Links:
Our Socials:

16,204 Listeners

113,219 Listeners

10,313 Listeners

202 Listeners

637 Listeners

106 Listeners

5 Listeners

0 Listeners