In this episode of Product Odyssey, Marcin Kokott and Michał Weskida explore the critical challenges in AI security, focusing on Large Language Models (LLMs). From prompt injections to data poisoning, they discuss the real-world risks these models pose, with examples from companies like Google, Chevrolet, and Air Canada.
How can you prevent AI hallucinations and safeguard your models? Why is it crucial to keep a human in the loop?
Join us to uncover practical strategies for navigating the complexities of AI security.
Get in touch in the comments below, or let’s catch up on 👇
Want to know what we do on a daily basis?
Looking for an AI-powered voice assistant for product leaders?
By Vazco