Think your AI prompts disappear when you hit delete? Not when litigation lands. We unpack the OpenAI copyright MDL to show how courts are turning ChatGPT conversation logs into core electronic evidence—preserved, sampled, de-identified, and produced under a protective order. The result is a clear, repeatable playbook for handling AI data at scale without letting privacy swallow relevance.
We walk through the emergency preservation orders that halted deletion across consumer, enterprise, and API logs, then explain why the parties settled on a 20 million chat sample and how de-identification pipelines strip direct identifiers while keeping prompts and outputs analyzable. Along the way, we tackle the big question of relevance: why usage patterns and non-infringing outputs matter for fair use factor four (market harm) and damages, and why a search-term-only approach can't answer merits questions in a generative AI case.
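To make the de-identification step concrete, here is a minimal sketch of the kind of pipeline described above: direct identifiers are replaced with typed placeholders so the prompt and output text remain analyzable. The patterns and function names here are illustrative assumptions, not the actual pipeline used in the MDL; a production system would rely on NER models and far broader identifier coverage.

```python
import re

# Hypothetical patterns for a few direct identifiers. A real
# de-identification pipeline would cover names, addresses, account
# numbers, and more, typically with NER models layered on top.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace direct identifiers with typed placeholders,
    leaving the surrounding chat text intact for analysis."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

chat = "Email me at jane.doe@example.com or call 555-867-5309."
print(deidentify(chat))
# -> Email me at [EMAIL] or call [PHONE].
```

Typed placeholders (rather than blanket redaction) preserve the analytical value the parties fought over: reviewers can still see that a prompt contained an email address or phone number without learning whose it was.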
You’ll hear the strategic pivots that shaped the fight—OpenAI’s attempt to narrow production after de-identifying the full sample, the court’s treatment of privacy as part of burden rather than a veto, and the denial of a stay that kept production on track. Then we distill three takeaways for legal teams: prompts are now squarely within the duty to preserve, the sample you propose will likely bind you later, and privacy is a dial you engineer through sampling, de-identification, and AEO protections.
Whether your organization uses ChatGPT, Copilot, Gemini, Claude, or in-house LLMs, this episode maps the practical steps: identify where logs live, understand tenant controls and exports, plan system-based discovery alongside key custodian evidence, and build credibility with numbers and workflows you can defend. Subscribe, share with your litigation and privacy teams, and leave a review telling us: how are you preparing your AI preservation and production workflows for 2026?
Thank you for tuning in to Meet and Confer with Kelly Twigger. If you found today’s discussion helpful, don’t forget to subscribe, rate, and leave a review wherever you get your podcasts. For more insights and resources on creating cost-effective discovery strategies leveraging ESI, visit Minerva26 and explore our practical tools, case law library, and on-demand education from the Academy.
By Kelly Twigger