
Since ChatGPT came on the scene, numerous incidents have surfaced involving attorneys submitting court filings riddled with AI-generated hallucinations—plausible-sounding case citations that purport to support key legal propositions but are, in fact, entirely fictitious. As sanctions against attorneys mount, it seems clear there are a few kinks in the tech. Even AI tools designed specifically for lawyers can be prone to hallucinations.
In this episode, we look at the potential and risks of AI-assisted tech in law and policy with two Stanford Law researchers at the forefront of this issue: RegLab Director Professor Daniel Ho and JD/PhD student and computer science researcher Mirac Suzgun. Together with several co-authors, they examine the emerging risks in two recent papers, “Profiling Legal Hallucinations in Large Language Models” (Oxford Journal of Legal Analysis, 2024) and the forthcoming “Hallucination-Free?” in the Journal of Empirical Legal Studies. Ho and Suzgun offer new insights into how legal AI is working, where it’s failing, and what’s at stake.
(00:00:00) Introduction to AI in Legal Education
(00:05:01) AI Tools in Legal Research and Writing
(00:12:01) Challenges of AI-Generated Content
(00:20:01) Reinforcement Learning with Human Feedback
(00:30:01) Audience Q&A