AI in healthcare is accelerating fast—but adoption without governance is risk. In this conversation, oncology and health policy leaders break down how clinicians and health systems should evaluate emerging AI tools: what FDA clearance vs approval really means, why “not FDA-approved” doesn’t automatically mean unsafe, and how laboratory-developed tests (LDTs) are already embedded in everyday care. We also explore real-world evidence, model drift, and why implementation—not innovation—is the true bottleneck for safe scale. If you’re assessing AI in imaging, diagnostics, clinical decision support, or workflow automation, this is your framework for asking smarter questions and protecting patients.
By Tensor Black · 4.3 (1,414 ratings)