
Sign up to save your podcasts
Or


David Brauchler says AI red teaming has proven that eliminating prompt injection is a lost cause. And many developers inadvertently introduce serious threat vectors into their applications – risks they must later eliminate before they become ingrained across application stacks.
NCC Group's AI security team has surveyed dozens of AI applications, exploited their most common risks, and discovered a set of practical architectural patterns and input validation strategies that completely mitigate natural language injection attacks. David's talk aimed at helping security pros and developers understand how to design/test complex agentic systems and how to model trust flows in agentic environments. He also provided information about what architectural decisions can mitigate prompt injection and other model manipulation risks, even when AI systems are exposed to untrusted sources of data.
More about David's Black Hat talk:
Additional blogs by David about AI security:
An op-ed on CSO Online made us think - should we consider the CIA triad 'dead' and replace it? We discuss the value and longevity of security frameworks, as well as the author's proposed replacement.
Segment 3: The Weekly Enterprise NewsFinally, in the enterprise security news,
All that and more, on this episode of Enterprise Security Weekly.
Visit https://www.securityweekly.com/esw for all the latest episodes!
Show Notes: https://securityweekly.com/esw-429
By Security Weekly Productions4.9
1414 ratings
David Brauchler says AI red teaming has proven that eliminating prompt injection is a lost cause. And many developers inadvertently introduce serious threat vectors into their applications – risks they must later eliminate before they become ingrained across application stacks.
NCC Group's AI security team has surveyed dozens of AI applications, exploited their most common risks, and discovered a set of practical architectural patterns and input validation strategies that completely mitigate natural language injection attacks. David's talk aimed at helping security pros and developers understand how to design/test complex agentic systems and how to model trust flows in agentic environments. He also provided information about what architectural decisions can mitigate prompt injection and other model manipulation risks, even when AI systems are exposed to untrusted sources of data.
More about David's Black Hat talk:
Additional blogs by David about AI security:
An op-ed on CSO Online made us think - should we consider the CIA triad 'dead' and replace it? We discuss the value and longevity of security frameworks, as well as the author's proposed replacement.
Segment 3: The Weekly Enterprise NewsFinally, in the enterprise security news,
All that and more, on this episode of Enterprise Security Weekly.
Visit https://www.securityweekly.com/esw for all the latest episodes!
Show Notes: https://securityweekly.com/esw-429

2,010 Listeners

373 Listeners

268 Listeners

374 Listeners

654 Listeners

1,023 Listeners

3 Listeners

318 Listeners

419 Listeners

8,043 Listeners

181 Listeners

189 Listeners

74 Listeners

138 Listeners

44 Listeners