Special Episode: The ChatGPT Health Study
Mount Sinai just published the first independent safety evaluation of ChatGPT Health—the AI tool 40 million people are using daily to decide whether they need emergency care. The findings are sobering: a 52% under-triage rate on true emergencies, anchoring bias that follows suit when family members minimize symptoms, and suicide crisis safeguards that triggered inversely to actual risk. But the story isn't all bad. An NEJM AI study shows what AI triage looks like when it's designed to support clinical judgment instead of replace it—and the results are genuinely encouraging. We break down what the research says, what's actually working, and what healthcare leaders should do right now. Plus, Deep Thoughts with Shelby on the difference between keeping the peace and making the peace.
By shelbybarker