
Sign up to save your podcasts
Or
This week, we are joined by Shaked Reiner, Security Principal Security Researcher at CyberArk, who is discussing their research on"Agents Under Attack: Threat Modeling Agentic AI." Agentic AI empowers LLMs to take autonomous actions, like browsing the web or executing code, making them more useful—but also more dangerous.
Threats like prompt injections and stolen API keys can turn agents into attack vectors. Shaked Reiner explains how treating agent outputs like untrusted code and applying traditional security principles can help keep them in check.
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices
4.8
989989 ratings
This week, we are joined by Shaked Reiner, Security Principal Security Researcher at CyberArk, who is discussing their research on"Agents Under Attack: Threat Modeling Agentic AI." Agentic AI empowers LLMs to take autonomous actions, like browsing the web or executing code, making them more useful—but also more dangerous.
Threats like prompt injections and stolen API keys can turn agents into attack vectors. Shaked Reiner explains how treating agent outputs like untrusted code and applying traditional security principles can help keep them in check.
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices
1,983 Listeners
365 Listeners
636 Listeners
366 Listeners
183 Listeners
312 Listeners
415 Listeners
925 Listeners
7,913 Listeners
166 Listeners
189 Listeners
314 Listeners
74 Listeners
127 Listeners
43 Listeners