


AI in healthcare is accelerating fast, but adoption without governance is a risk. In this conversation, oncology and health policy leaders break down how clinicians and health systems should evaluate emerging AI tools: what FDA clearance vs. approval really means, why "not FDA-approved" doesn't automatically mean unsafe, and how laboratory-developed tests (LDTs) are already embedded in everyday care. We also explore real-world evidence, model drift, and why implementation, not innovation, is the true bottleneck for safe scale. If you're assessing AI in imaging, diagnostics, clinical decision support, or workflow automation, this is your framework for asking smarter questions and protecting patients.
By Tensor Black
