
Sign up to save your podcasts
Or


Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.
This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices
By N2K Networks4.4
88 ratings
Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.
This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices

638 Listeners

1,019 Listeners

322 Listeners

415 Listeners

174 Listeners

314 Listeners

189 Listeners

73 Listeners

94 Listeners

15 Listeners

19 Listeners

134 Listeners

169 Listeners

3 Listeners

33 Listeners