Security, Spoken

A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave. Read the story here.


Security, Spoken, by WIRED

Rating: 4 (61 ratings)


More shows like Security, Spoken

  • Science, Spoken by WIRED (58 listeners)
  • What's New by WIRED (102 listeners)
  • Business, Spoken by WIRED (49 listeners)
  • Citadel Dropouts: a Game of Thrones Podcast by WIRED (77 listeners)