In this episode, Sean Falconer is joined by Aubrey King, solutions architect and community evangelist at F5, to discuss the top 10 security issues for LLM applications. They explore critical threats such as prompt injections, insecure output handling, and training data poisoning, among others. Aubrey provides insights into why these issues arise, the attacks being observed, and the methods used to mitigate these risks. This episode is essential listening for anyone interested in the security of large language models and their applications.