Health Data Ethics

What Happens When AI Gets the Sources Wrong?



This week, I recorded a new episode of Health Data Ethics about the FDA and EMA's joint principles on AI in drug development. I used ChatGPT to help generate a draft. I loved it, until I started fact-checking.

It cited a section of a review paper that didn't exist and referenced a tool that never appeared in the text. When I called it on its hallucinations, it gave me fake quotes with fake page numbers. I was relying on this to build the backbone of an episode about AI transparency, and instead I scrapped the whole thing and started over from scratch.

I texted my husband mid-edit: “If I'd recorded this I'd have seriously undercut my credibility with anyone who wanted to check.”

He sent back: "The difference between enterprise and demo AI in two texts."

In this episode, I talk about that failure, mine and the model's, and what it tells us about algorithmic bias and building a culture of transparency.


Health Data Ethics, by Jennifer Owens