Two federal rulings issued the same week just redrew the map for AI, discovery, and privilege. We break down how Warner v. Gilbarco framed ChatGPT as a drafting tool and shielded a pro se litigant’s prompts and outputs, while U.S. v. Heppner denied privilege where a platform’s privacy policy allowed training and disclosure. The contrast is stark and deeply practical: facts, platform settings, and attorney involvement now drive whether AI-generated content is protected or exposed.
We walk through what made the difference in each case—timing under Rule 26, the line between internal thought processes and actual documents, and whether counsel directed the AI work. Then we zoom into the privacy and confidentiality layer: why consumer AI settings can undermine privilege, how enterprise copilots promise stronger safeguards, and where platform policies can make or break your arguments. Along the way, we surface key quotes from the bench, including the “AI is a tool, not a person” framing and the warning that broad waiver theories would gut work product in modern drafting environments.
To help teams act now, we share concrete steps: update custodian interviews to capture AI usage; set retention and logging rules for prompts and outputs; choose enterprise configurations that disable training; and document attorney direction when AI assists with strategy. We also flag the unresolved questions—what counts as ESI, how to handle prompt discovery requests, and what duties vendors have to preserve AI interactions—so you can anticipate challenges before they surface in meet-and-confers.
If you’re advising clients who touch ChatGPT, Claude, Gemini, or Microsoft Copilot, this conversation is your primer on privilege, confidentiality, and eDiscovery in the age of generative AI. Subscribe, share with your team, and leave a review with your take: should AI-assisted drafts be treated like any other protected work product, or is the risk of disclosure too high without new rules?
Thank you for tuning in to Meet and Confer with Kelly Twigger. If you found today’s discussion helpful, don’t forget to subscribe, rate, and leave a review wherever you get your podcasts. For more insights and resources on creating cost-effective discovery strategies leveraging ESI, visit Minerva26 and explore our practical tools, case law library, and on-demand education from the Academy.
By Kelly Twigger