
Drex breaks down why AI models like ChatGPT sometimes fabricate confident-sounding but false information, calling it "bluffing" rather than hallucinating. He explores OpenAI's research on training gaps, alignment issues, and response pressure that cause this problem. For healthcare professionals, he shares practical strategies including setting explicit context rules, demanding source verification, and maintaining human oversight when using AI for InfoSec policies, alert triage, or patient care guidance.
Remember, Stay a Little Paranoid
X: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand Foundation for Childhood Cancer
By This Week Health