ChatGPT can draft a motion in seconds, but what happens when the motion is polished nonsense and a real person signs it? We bring on Eran Kahana, a practicing attorney and Stanford Law School research fellow, to unpack a lawsuit that claims OpenAI caused harm by enabling AI-generated court filings and effectively “doing law.” The story starts with a settlement, a case of buyer’s regret, and a flood of ChatGPT-fueled motions that leave courts and opposing parties paying the price.
From there, we dig into the heart of legal AI ethics: hallucinated case citations, confident-sounding errors, and why “it passed the bar” marketing can create dangerous expectations for everyday users. Eran makes the case that the better frame is often product liability, not unauthorized practice of law, because foundation model developers knowingly ship tools that can fabricate authority while still sounding right. We also talk about the practical reality inside law firms, where AI can save time when used for brainstorming but can create real exposure when lawyers treat it like a research engine.
We close with the consequences and the future: Rule 11 sanctions, professional discipline, looming malpractice claims, and whether malpractice insurance even covers “delegating judgment to a machine.” Then we zoom out to AI governance and guardrails, including the idea of jurisdiction-aware restrictions and stronger refusal modes for legal conclusions. If you care about legal tech, generative AI, and the future of legal practice, hit subscribe, share this with a lawyer friend, and leave a review so more people can find the show.
Although AI is not ready for the courtroom now, Eran says just wait. We won't even recognize "justice" a decade from now.
By Attorney Robert Sewell