
Sign up to save your podcasts
Or


Darren Meyer, Security Research Advocate at Checkmarx, is sharing their work on "Bypassing AI Agent Defenses with Lies-in-the-Loop." Checkmarx Zero researchers introduce “lies-in-the-loop,” a new attack technique that bypasses human‑in‑the‑loop AI safety controls by deceiving users into approving dangerous actions that appear benign.
Using examples with AI code assistants like Claude Code, the research shows how prompt injection and manipulated context can trick both the agent and the human reviewer into enabling remote code execution. The findings highlight a growing risk as AI agents become more common in developer workflows, underscoring the limits of human oversight as a standalone security control.
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices
By N2K Networks4.8
10061,006 ratings
Darren Meyer, Security Research Advocate at Checkmarx, is sharing their work on "Bypassing AI Agent Defenses with Lies-in-the-Loop." Checkmarx Zero researchers introduce “lies-in-the-loop,” a new attack technique that bypasses human‑in‑the‑loop AI safety controls by deceiving users into approving dangerous actions that appear benign.
Using examples with AI code assistants like Claude Code, the research shows how prompt injection and manipulated context can trick both the agent and the human reviewer into enabling remote code execution. The findings highlight a growing risk as AI agents become more common in developer workflows, underscoring the limits of human oversight as a standalone security control.
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices

186 Listeners

2,010 Listeners

1,651 Listeners

371 Listeners

373 Listeners

1,533 Listeners

653 Listeners

318 Listeners

418 Listeners

8,079 Listeners

177 Listeners

316 Listeners

194 Listeners

73 Listeners

140 Listeners