
Are we underestimating how the agentic world is impacting cybersecurity? We spoke to Mohan Kumar, who did production security at Box, for a deep dive into the threats posed by truly autonomous AI agents.
The conversation moves beyond simple LLM applications (like chatbots) to the new world of dynamic, goal-driven agents that can take autonomous actions. Mohan took us through why this shift introduces a new class of threats we aren't prepared for, such as agents developing new, unmonitorable communication methods ("Jibber-link" mode).
Mohan shared his top three security threats for AI agents in production:
- Memory poisoning and context manipulation
- Tool misuse (e.g., exploiting a calendar tool)
- Privilege compromise (least privilege for agents)
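As a rough illustration of the tool-misuse and least-privilege points, here is a minimal, hypothetical sketch (not from the episode, and not tied to any framework mentioned in it) of routing an agent's tool calls through an explicit allowlist, so a calendar tool can be read but never modified. All names here (ToolGateway, calendar_read) are made up for illustration.

```python
# Hypothetical sketch: deny-by-default tool gateway for an AI agent.
from typing import Callable, Dict


class ToolGateway:
    """Routes an agent's tool calls through an explicit allowlist."""

    def __init__(self, allowed: Dict[str, Callable[..., str]]):
        # Only tools registered here are callable; everything else is denied.
        self._allowed = allowed

    def call(self, tool_name: str, **kwargs) -> str:
        if tool_name not in self._allowed:
            # Deny by default: the agent cannot "discover" extra capabilities.
            raise PermissionError(f"Tool '{tool_name}' is not permitted for this agent")
        return self._allowed[tool_name](**kwargs)


def calendar_read(date: str) -> str:
    # Read-only stand-in for a calendar lookup; no write or delete path exists.
    return f"No meetings found on {date}"


# The agent gets read access to the calendar and nothing else.
gateway = ToolGateway({"calendar_read": calendar_read})
print(gateway.call("calendar_read", date="2025-01-15"))
```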
Guest Socials - Mohan's LinkedIn
Podcast Twitter - @CloudSecPod
If you want to watch videos of this live-streamed episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast
Questions asked:
(00:00) Introduction
(01:30) Who is Mohan Kumar? (Production Security at Box)
(03:30) LLM Application vs. AI Agent: What's the Difference?
(06:50) "We are totally underestimating" AI agent threats
(07:45) Software 3.0: When Prompts Become the New Software
(08:20) The "Jibber-link" Threat: Agents Ditching Human Language
(10:45) The Top 3 AI Agent Security Threats
(11:10) Threat 1: Memory Poisoning & Context Manipulation
(14:00) Threat 2: Tool Misuse (e.g., exploiting a calendar tool)
(16:50) Threat 3: Privilege Compromise (Least Privilege for Agents)
(18:20) How Do You Monitor & Audit Autonomous Agents?
(20:30) The Need for "Observer" Agents
(24:45) The 6 Components of an AI Agent Architecture
(27:00) Threat Modeling: Using CSA's MAESTRO Framework
(31:20) Are Leaks Only from Open Source Models or Closed (OpenAI, Claude) Too?
(34:10) The "Grandma Trick": Any Model is Susceptible
(38:15) Where is AI Agent Security Evolving? (Orchestration, Data, Interface)
(42:00) Fun Questions: Hacking MCPs, Skydiving & Risk, Biryani
Resources mentioned during the episode:
Mohan’s Udemy Course - AI Security Bootcamp: LLM Hacking Basics
Andrej Karpathy's "Software 3.0" Concept
"Jibber-link Mode" Video
CrewAI Framework
OWASP Top 10 for LLM Applications
Cloud Security Alliance (CSA) MAESTRO Framework