UnHack with Drex DeFord

2 Minute Drill: Why AI "Hallucinates" and How Healthcare Leaders Can Stay Safe with Drex DeFord


Drex breaks down why AI models like ChatGPT sometimes fabricate confident-sounding but false information, describing the behavior as "bluffing" rather than "hallucinating." He walks through OpenAI's research on the training gaps, alignment issues, and response pressures that cause the problem. For healthcare professionals, he shares practical strategies, including setting explicit context rules, demanding source verification, and maintaining human oversight when using AI for InfoSec policies, alert triage, or patient care guidance.
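
Those strategies translate directly into how a prompt gets written. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name, prompt wording, and draft_policy_summary helper are illustrative, not from the episode. It pins the model to provided reference text, demands a cited source for every claim, and keeps a human in the loop before anything is published.

```python
# Minimal sketch, assuming the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. Model name and prompt wording are
# illustrative, not from the episode.
from openai import OpenAI

client = OpenAI()

# Explicit context rules: constrain the model to the task, require sources,
# and instruct it to admit uncertainty instead of bluffing.
SYSTEM_RULES = (
    "You are drafting an internal InfoSec policy summary for a hospital. "
    "Use only the reference text provided by the user. "
    "Cite the specific section you relied on for every claim. "
    "If the reference text does not answer the question, say 'I don't know' "
    "rather than guessing."
)

def draft_policy_summary(reference_text: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_RULES},
            {
                "role": "user",
                "content": f"Reference text:\n{reference_text}\n\nQuestion: {question}",
            },
        ],
        temperature=0,  # lower temperature reduces (but does not eliminate) fabrication
    )
    return response.choices[0].message.content

# Human oversight: the draft is never published directly; a person reviews it.
if __name__ == "__main__":
    draft = draft_policy_summary(
        reference_text="(paste the approved policy excerpt here)",
        question="What is our password rotation requirement?",
    )
    print("DRAFT FOR HUMAN REVIEW:\n", draft)
```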

Remember, Stay a Little Paranoid 

X: This Week Health 

LinkedIn: This Week Health 

Donate: Alex’s Lemonade Stand Foundation for Childhood Cancer

UnHack with Drex DeFord, by This Week Health