The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao - #668

01.22.2024 - By Sam Charrington


Today we’re joined by Ben Zhao, a Neubauer Professor of Computer Science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI, focusing on Ben’s recent Fawkes, Glaze, and Nightshade projects, which use “poisoning” approaches to protect users against AI encroachment. The first tool we discuss, Fawkes, imperceptibly “cloaks” images so that facial recognition models perceive them as highly distorted, shielding individuals from recognition. We then dig into Glaze, a tool that computes subtle alterations indiscernible to the human eye but adept at tricking models into perceiving a significant shift in art style, giving artists a defense against style mimicry. Lastly, we cover Nightshade, a “poison pill” defense that lets artists apply imperceptible changes to their images which effectively “break” generative AI models trained on them.
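As background for the discussion, the common thread in these tools is the idea of an adversarial perturbation: a pixel change small enough to be invisible to humans but chosen to move an image's representation inside a model. The sketch below is not the authors' code and uses a toy linear "feature extractor" as a stand-in for a real vision model; the function names, the eps-ball bound, and the gradient-step loop are all illustrative assumptions, meant only to convey the shape of the optimization.

```python
import numpy as np

# Toy illustration of the "cloaking" idea (not the Fawkes/Glaze/Nightshade code):
# find a small perturbation delta that pushes an image's features toward a
# decoy target, while clipping delta so the pixel change stays imperceptible.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))  # hypothetical feature extractor: 8 features, 64 "pixels"

def features(x):
    """Stand-in for a model's embedding of image x."""
    return W @ x

def cloak(x, target_feat, eps=0.05, steps=200, lr=0.001):
    """Gradient descent on ||features(x + delta) - target_feat||^2 over delta,
    projecting delta back into the [-eps, eps] box after every step."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        residual = W @ (x + delta) - target_feat
        grad = 2.0 * W.T @ residual        # gradient of the squared error w.r.t. delta
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)  # keep the change imperceptible
    return x + delta

x = rng.uniform(0.0, 1.0, size=64)                  # original "image"
decoy = features(rng.uniform(0.0, 1.0, size=64))    # features of a decoy image

x_cloaked = cloak(x, decoy)
print("max pixel change:", np.max(np.abs(x_cloaked - x)))
print("feature shift:", np.linalg.norm(features(x_cloaked) - features(x)))
```

The point of the sketch is the asymmetry it demonstrates: the pixel change is bounded by a tiny eps, yet the feature representation can move substantially, which is what lets a cloaked photo defeat a recognizer or a "glazed" artwork mislead a style-mimicry model.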

The complete show notes for this episode can be found at twimlai.com/go/668.
