
This week, we are joined by Shaked Reiner, Principal Security Researcher at CyberArk, who discusses their research on "Agents Under Attack: Threat Modeling Agentic AI." Agentic AI empowers LLMs to take autonomous actions, like browsing the web or executing code, making them more useful, but also more dangerous.
Threats like prompt injections and stolen API keys can turn agents into attack vectors. Shaked Reiner explains how treating agent outputs like untrusted code and applying traditional security principles can help keep them in check.
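To make that idea concrete, here is a minimal sketch (not from the episode or CyberArk's research) of an agent runtime that treats the model's proposed action as untrusted input: every tool call is checked against an allowlist and its arguments are validated before anything runs. All names here (ALLOWED_TOOLS, execute_agent_action, the stub tools) are hypothetical.

```python
# Illustrative sketch only: validate an LLM-proposed action before executing it.
# Names and tools are hypothetical, not from the research discussed in the episode.

def web_search(query: str) -> str:
    return f"(stub) results for {query!r}"      # placeholder tool implementation

def read_file(path: str) -> str:
    return f"(stub) contents of {path}"         # placeholder tool implementation

# Allowlist: tool name -> (implementation, argument validator)
ALLOWED_TOOLS = {
    "web_search": (web_search, lambda a: isinstance(a.get("query"), str) and len(a["query"]) < 500),
    "read_file":  (read_file,  lambda a: isinstance(a.get("path"), str) and a["path"].startswith("/sandbox/")),
}

def execute_agent_action(action: dict) -> str:
    """Dispatch an agent's proposed tool call only if it passes allowlist and argument checks."""
    tool_name = action.get("tool")
    args = action.get("args", {})

    entry = ALLOWED_TOOLS.get(tool_name)
    if entry is None:
        raise PermissionError(f"tool {tool_name!r} is not allowlisted")
    impl, validator = entry
    if not validator(args):
        raise ValueError(f"arguments for {tool_name!r} failed validation: {args!r}")

    # Only after validation does the call reach the real (sandboxed) implementation.
    return impl(**args)

print(execute_agent_action({"tool": "web_search", "args": {"query": "agentic AI threat modeling"}}))
```

In this sketch, a prompt-injected request such as {"tool": "read_file", "args": {"path": "/etc/passwd"}} is rejected at the validation step instead of ever reaching the filesystem, which is the spirit of treating agent output as untrusted.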
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices