This week, we are joined by Shaked Reiner, Principal Security Researcher at CyberArk, who discusses the research "Agents Under Attack: Threat Modeling Agentic AI." Agentic AI empowers LLMs to take autonomous actions, like browsing the web or executing code, making them more useful—but also more dangerous.
Threats like prompt injection and stolen API keys can turn agents into attack vectors. Shaked Reiner explains how treating agent outputs like untrusted code and applying traditional security principles can help keep them in check.
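As a rough illustration of that "untrusted code" framing (not from the episode; the allowlist, function name, and policy below are hypothetical), a sketch might validate an agent's proposed shell command the same way you would validate user-supplied input before executing it:

```python
# Minimal sketch: treat an LLM agent's proposed action as untrusted input.
# The command allowlist and helper name are illustrative assumptions only.
import shlex
import subprocess

ALLOWED_COMMANDS = {"ls", "cat", "grep"}  # hypothetical policy


def run_agent_command(agent_output: str) -> str:
    """Validate an agent-proposed shell command before executing it."""
    tokens = shlex.split(agent_output)
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"Blocked command: {agent_output!r}")
    # Never pass agent text through a shell; run the parsed argv directly.
    result = subprocess.run(tokens, capture_output=True, text=True, timeout=10)
    return result.stdout
```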
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices