Product Odyssey

How to ensure security when building products with LLMs | Product Odyssey #10



In this episode of Product Odyssey, Marcin Kokott and Michał Weskida explore critical challenges in AI security, focusing on Large Language Models (LLMs). From prompt injection to data poisoning, they discuss the real-world risks these models pose, drawing on examples from companies such as Google, Chevrolet, and Air Canada.

How can you prevent AI hallucinations and safeguard your models? Why is it crucial to keep a human in the loop?

Join us to uncover practical strategies for navigating the complexities of AI security.

Get in touch in the comments below, or let’s catch up on 👇

  • Marcin’s LinkedIn: https://www.linkedin.com/in/marcinkokott/
  • Michał’s LinkedIn: https://www.linkedin.com/in/michal-weskida/
  • Want to know what we do on a daily basis? Explore our website 👉 https://vazco.eu
  • Looking for an AI-powered voice assistant for product leaders? Try CTO Compass now 👉 https://ctocompass.eu
Product Odyssey, by Vazco