Can AI-generated information be trusted? In this episode, Lily and David dive into the problem of AI-generated “hallucitations”: plausible-looking citations produced by generative AI models like ChatGPT that refer to sources that do not exist. They discuss the implications of this kind of misinformation, including defamation cases, and emphasize the importance of responsible AI systems and the challenges of funding and prioritizing research to ensure accuracy and reliability in AI outputs.