


Could a few altered pixels make an AI see a school bus as an ostrich? From data-poisoning attacks that corrupt systems to groundbreaking defenses that keep AI trustworthy, explore the critical challenges shaping our AI future. Discover how today's security breakthroughs protect everything from spam filters to autonomous systems.
Highlights:
How tiny changes can fool powerful AI models
The four levels of AI safety explained
Cutting-edge defense strategies in action
Real-world cases of AI manipulation and solutions
References for main topic:
Adversarial Machine Learning
Multiple classifier systems for robust classifier design in adversarial environments
[1312.6199] Intriguing properties of neural networks
[1412.6572] Explaining and Harnessing Adversarial Examples
[2106.09380] Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems
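The "tiny changes" highlight refers to the fast gradient sign method (FGSM) from the Goodfellow et al. reference above. A minimal sketch on a toy linear model, using only illustrative names (the weights, input, and epsilon here are hypothetical, not from the episode):

```python
import random

def sign(v):
    """Sign of a scalar: +1, -1, or 0."""
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, grad, eps):
    """FGSM step: move each feature by eps in the gradient's sign direction."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

random.seed(0)
w = [random.gauss(0, 1) for _ in range(100)]  # toy classifier weights
x = [random.gauss(0, 1) for _ in range(100)]  # clean input features

score_clean = sum(wi * xi for wi, xi in zip(w, x))

# For a linear score w.x with true label "positive", the loss gradient w.r.t.
# x is proportional to -w; stepping along sign(-w) pushes the score down.
eps = 0.1
x_adv = fgsm_perturb(x, [-wi for wi in w], eps)
score_adv = sum(wi * xi for wi, xi in zip(w, x_adv))

# Each feature moved by at most eps, yet the score drops by eps * ||w||_1 --
# imperceptible per-pixel changes, large effect on the model's decision.
print(score_adv < score_clean)
```

This is the core intuition behind the school-bus/ostrich example: in high dimensions, many tiny coordinated nudges add up to a large change in the classifier's output.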
By Saugata Chatterjee