
AI systems are becoming integral to nearly every digital product, but their vulnerabilities pose real and serious risks. How can companies protect their AI-powered products from security threats like prompt injection, jailbreaking, and misalignment?

In this episode of the Data Neighbor Podcast, we're joined by Sander Schulhoff, CEO of HackAPrompt and founder of Learn Prompting, to uncover critical security flaws in AI systems and practical ways to defend against them. Drawing on insights from over 600,000 real-world AI exploits, Sander breaks down the three most dangerous AI security failures threatening today's products and shares actionable strategies to safeguard your systems.

Connect with Sander Schulhoff:
LinkedIn: https://www.linkedin.com/in/sander-schulhoff/
AI Red Teaming Masterclass: https://maven.com/learn-prompting-company/ai-red-teaming-and-ai-safety-masterclass
HackAPrompt: https://www.hackaprompt.com/
Learn Prompting: https://learnprompting.org/

Connect with Shane, Sravya, and Hai (let us know YouTube sent you!):
Shane Butler: https://linkedin.openinapp.co/b02fe
Sravya Madipalli: https://linkedin.openinapp.co/9be8c
Hai Guan: https://linkedin.openinapp.co/4qi1r

You'll learn essential techniques for securing AI systems, including how to recognize and prevent prompt injection and jailbreaking attacks, how to detect misalignment early, and how to combine automated red teaming with human expertise. Sander also explains why security must shift from a late-stage fix to a foundational part of AI model development and deployment.

We discuss emerging security threats from autonomous agents, the role of government and compliance in AI security, and practical advice for teams at any stage, from startups to large enterprises, to proactively address AI security.

If you're a data scientist, product leader, security professional, or executive interested in deploying secure AI systems, this episode offers critical insights and practical steps to protect your products and your users.
#AIsecurity #promptinjection #jailbreaking #redteaming #aisafety #machinelearningsecurity #aiattacks #dataprotection #aivulnerabilities #automatedredteaming #agenticAI #hackaprompt #aiethics #dataneighbor