
In May 2025, researchers discovered a universal jailbreak that works on GPT-4, Claude, Gemini, and more. It lets anyone bypass safety filters and extract harmful, even criminal instructions, using just a prompt.
They warned OpenAI, Google, Anthropic, and Microsoft.
Most did nothing.
In this episode, we break down what really happened, why it matters, and what it means for anyone using AI right now.
Sources:
Official research papers:
https://arxiv.org/abs/2307.15043
https://arxiv.org/abs/2505.10066
Article: https://www.techradar.com/computing/a...
Listen to the podcast: https://podcasts.apple.com/us/podcast...
Get the AI for Everyone LinkedIn newsletter: / 7133153749287501824
LaunchReady.ai: https://launchready.ai/
By Harrison Painter