
Sign up to save your podcasts
Or


David Brauchler says AI red teaming has proven that eliminating prompt injection is a lost cause. And many developers inadvertently introduce serious threat vectors into their applications – risks they must later eliminate before they become ingrained across application stacks.
NCC Group’s AI security team has surveyed dozens of AI applications, exploited their most common risks, and discovered a set of practical architectural patterns and input validation strategies that completely mitigate natural language injection attacks. David's talk aimed at helping security pros and developers understand how to design/test complex agentic systems and how to model trust flows in agentic environments. He also provided information about what architectural decisions can mitigate prompt injection and other model manipulation risks, even when AI systems are exposed to untrusted sources of data.
More about David's Black Hat talk:
Additional blogs by David about AI security:
An op-ed on CSO Online made us think - should we consider the CIA triad 'dead' and replace it? We discuss the value and longevity of security frameworks, as well as the author's proposed replacement.
Segment 3: The Weekly Enterprise NewsFinally, in the enterprise security news,
All that and more, on this episode of Enterprise Security Weekly.
Show Notes: https://securityweekly.com/esw-429
By Security Weekly4.7
3535 ratings
David Brauchler says AI red teaming has proven that eliminating prompt injection is a lost cause. And many developers inadvertently introduce serious threat vectors into their applications – risks they must later eliminate before they become ingrained across application stacks.
NCC Group’s AI security team has surveyed dozens of AI applications, exploited their most common risks, and discovered a set of practical architectural patterns and input validation strategies that completely mitigate natural language injection attacks. David's talk aimed at helping security pros and developers understand how to design/test complex agentic systems and how to model trust flows in agentic environments. He also provided information about what architectural decisions can mitigate prompt injection and other model manipulation risks, even when AI systems are exposed to untrusted sources of data.
More about David's Black Hat talk:
Additional blogs by David about AI security:
An op-ed on CSO Online made us think - should we consider the CIA triad 'dead' and replace it? We discuss the value and longevity of security frameworks, as well as the author's proposed replacement.
Segment 3: The Weekly Enterprise NewsFinally, in the enterprise security news,
All that and more, on this episode of Enterprise Security Weekly.
Show Notes: https://securityweekly.com/esw-429

2,002 Listeners

83 Listeners

637 Listeners

101 Listeners

1,016 Listeners

33 Listeners

28,458 Listeners

188 Listeners

134 Listeners

26,670 Listeners