What can a 2017 colonoscopy study teach us about using AI diagnostics safely in 2025?
An AI diagnostic tool boasts 99% accuracy. Should you trust it? In this episode, I explain why that number can be dangerously misleading and equip medical professionals with the practical strategies needed to see through the hype and protect their patients.

As artificial intelligence becomes more integrated into healthcare, the ability to critically evaluate these tools is no longer optional; it is a core clinical skill. This session moves beyond the headlines to uncover the common, often hidden, flaws in AI training that can lead to inflated performance metrics and real-world risk. Learn how to become the essential human-in-the-loop who can distinguish a robust, reliable AI from a brittle and dangerous one.

In this episode, you will learn:

- The "Memorizing Student" Problem: A simple analogy for understanding overfitting, one of the most common ways AI models fail in the real world.
- How to Spot the Flaws: Practical techniques for diagnosing unreliable AI, including how to interpret learning curves (see the sketch after this list) and why true external validation is the gold standard.
- The Danger of "Cherry-Picking": How selective reporting creates a false perception of reliability, and why demanding transparency is crucial.
- The Colonoscopy Analogy: A powerful, real-world framework for how clinicians should approach AI results right now. Learn how to use a "positive" AI signal to your advantage and, more importantly, how to handle a "negative" signal to prevent catastrophic errors from automation bias.
- Your Ultimate Responsibility: Why the physician, not the algorithm, is always accountable, and how to use AI as a tool that supports, rather than absolves, your clinical judgment.

If you are a physician, medical student, resident, or healthcare administrator, this episode provides the foundational knowledge you need to navigate the next wave of medical technology safely and effectively.
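For listeners who want to see the learning-curve idea concretely, here is a minimal Python sketch (not from the episode; the synthetic dataset and random-forest model are hypothetical stand-ins). It compares training accuracy against cross-validated accuracy at increasing training-set sizes; a persistent gap between the two is the classic signature of the "memorizing student" failure mode.

```python
# Minimal sketch: diagnosing overfitting with a learning curve.
# The data and model here are illustrative stand-ins, not a real diagnostic tool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a diagnostic dataset.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=5,
                           random_state=0)

# A deliberately flexible model that is capable of memorizing its training data.
model = RandomForestClassifier(n_estimators=200, max_depth=None, random_state=0)

# Train/validation accuracy at five training-set sizes, 5-fold cross-validated.
train_sizes, train_scores, val_scores = learning_curve(
    model, X, y, cv=5, scoring="accuracy",
    train_sizes=np.linspace(0.1, 1.0, 5))

for n, tr, va in zip(train_sizes, train_scores.mean(axis=1),
                     val_scores.mean(axis=1)):
    # Near-perfect train accuracy with much lower validation accuracy
    # is the overfitting signature discussed in the episode.
    print(f"n={n:4d}  train={tr:.3f}  val={va:.3f}")
```

Note that even a validation score from the same institution's data can flatter the model; the episode's point about external validation is that the real test is data drawn from a different site, scanner, or patient population.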
By Milan Toma