
Sign up to save your podcasts
Or


Send us a text
David Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.
• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data
Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.
Listen on: Apple Podcasts Spotify
Support the show
Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Affiliates
➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh
➡️ OffGrid Coupon Code: JOE
➡️ Unplugged Phone: https://unplugged.com/
Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout
*See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.
By Joe South5
1313 ratings
Send us a text
David Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.
• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data
Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.
Listen on: Apple Podcasts Spotify
Support the show
Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Affiliates
➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh
➡️ OffGrid Coupon Code: JOE
➡️ Unplugged Phone: https://unplugged.com/
Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout
*See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.

2,002 Listeners

637 Listeners

8,010 Listeners

134 Listeners

40 Listeners