
Sign up to save your podcasts
Or


Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.
This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices
By N2K Networks4.8
999999 ratings
Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.
This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.
The research can be found here:
Learn more about your ad choices. Visit megaphone.fm/adchoices

187 Listeners

2,002 Listeners

371 Listeners

375 Listeners

638 Listeners

321 Listeners

415 Listeners

8,007 Listeners

178 Listeners

314 Listeners

189 Listeners

74 Listeners

136 Listeners

46 Listeners

171 Listeners