On this episode of The Peel, host Shelley McGuire sits down with Austin Keller, Director of Data Science at IntelliDyne, to unpack some of the most misunderstood concepts in artificial intelligence. With a background that spans secure generative AI, Navy operational analytics, public health, and veteran suicide prevention, Austin brings both technical depth and real-world perspective to the conversation.
The episode dives into timely questions around AI reliability, including what “hallucinations” really mean in AI systems and why they occur. Shelley and Austin explore how techniques like retrieval-augmented generation help ground AI outputs in real, up-to-date information, and why simply deploying a model isn’t enough, especially in government and healthcare environments where accuracy matters.
Austin explains how AI can best support analysts and practitioners by summarizing, comparing, and organizing massive volumes of data, while still requiring human oversight, validation, and judgment. The conversation highlights where AI excels, where it needs guardrails, and why understanding how these systems work is critical to using them responsibly.
To connect with Austin, follow him on LinkedIn here.