
Sign up to save your podcasts
Or


David Brauchler says AI red teaming has proven that eliminating prompt injection is a lost cause. And many developers inadvertently introduce serious threat vectors into their applications – risks they must later eliminate before they become ingrained across application stacks.
NCC Group’s AI security team has surveyed dozens of AI applications, exploited their most common risks, and discovered a set of practical architectural patterns and input validation strategies that completely mitigate natural language injection attacks. David's talk aimed at helping security pros and developers understand how to design/test complex agentic systems and how to model trust flows in agentic environments. He also provided information about what architectural decisions can mitigate prompt injection and other model manipulation risks, even when AI systems are exposed to untrusted sources of data.
More about David's Black Hat talk:
Additional blogs by David about AI security:
An op-ed on CSO Online made us think - should we consider the CIA triad 'dead' and replace it? We discuss the value and longevity of security frameworks, as well as the author's proposed replacement.
Segment 3: The Weekly Enterprise NewsFinally, in the enterprise security news,
All that and more, on this episode of Enterprise Security Weekly.
Visit https://www.securityweekly.com/esw for all the latest episodes!
Show Notes: https://securityweekly.com/esw-429
By Security Weekly Productions4.4
208208 ratings
David Brauchler says AI red teaming has proven that eliminating prompt injection is a lost cause. And many developers inadvertently introduce serious threat vectors into their applications – risks they must later eliminate before they become ingrained across application stacks.
NCC Group’s AI security team has surveyed dozens of AI applications, exploited their most common risks, and discovered a set of practical architectural patterns and input validation strategies that completely mitigate natural language injection attacks. David's talk aimed at helping security pros and developers understand how to design/test complex agentic systems and how to model trust flows in agentic environments. He also provided information about what architectural decisions can mitigate prompt injection and other model manipulation risks, even when AI systems are exposed to untrusted sources of data.
More about David's Black Hat talk:
Additional blogs by David about AI security:
An op-ed on CSO Online made us think - should we consider the CIA triad 'dead' and replace it? We discuss the value and longevity of security frameworks, as well as the author's proposed replacement.
Segment 3: The Weekly Enterprise NewsFinally, in the enterprise security news,
All that and more, on this episode of Enterprise Security Weekly.
Visit https://www.securityweekly.com/esw for all the latest episodes!
Show Notes: https://securityweekly.com/esw-429

2,005 Listeners

372 Listeners

372 Listeners

652 Listeners

1,025 Listeners

319 Listeners

419 Listeners

8,076 Listeners

176 Listeners

315 Listeners

187 Listeners

73 Listeners

140 Listeners

44 Listeners

168 Listeners