
Since ChatGPT came on the scene, numerous incidents have surfaced involving attorneys submitting court filings riddled with AI-generated hallucinations—plausible-sounding case citations that purport to support key legal propositions but are, in fact, entirely fictitious. As sanctions against attorneys mount, it seems clear there are a few kinks in the tech. Even AI tools designed specifically for lawyers can be prone to hallucinations.
In this episode, we look at the potential and risks of AI-assisted tech in law and policy with two Stanford Law researchers at the forefront of this issue: RegLab Director Professor Daniel Ho and JD/PhD student and computer science researcher Mirac Suzgun. Together with several co-authors, they examine the emerging risks in two recent papers, “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models” (Journal of Legal Analysis, 2024) and the forthcoming “Hallucination-Free?” in the Journal of Empirical Legal Studies. Ho and Suzgun offer new insights into how legal AI is working, where it’s failing, and what’s at stake.
(00:00:00) Introduction to AI in Legal Education
(00:05:01) AI Tools in Legal Research and Writing
(00:12:01) Challenges of AI-Generated Content
(00:20:01) Reinforcement Learning from Human Feedback
(00:30:01) Audience Q&A