
John is joined by Robert M. (“Bobby”) Schwartz, partner in Quinn Emanuel’s Los Angeles office and co-chair of the firm’s Media & Entertainment Industry Practice, and Marie M. Hayrapetian, associate in Quinn Emanuel’s Los Angeles office. They discuss recent cases testing whether large language model AI outputs may give rise to defamation claims.
In one recent Georgia case, a journalist asked ChatGPT about a lawsuit and received a response stating that a company executive was an embezzler, even though the lawsuit contained no such allegation and he was not an embezzler. In another case, Google was sued after its AI Overview tool falsely stated that a business was being sued by the Minnesota attorney general for deceptive practices, a statement that allegedly caused up to $200 million in lost sales. Other examples involve sexualized deepfake images allegedly generated from ordinary photos, creating reputational and privacy harms.
Defamation law assumes a human speaker who publishes a false factual statement with some degree of fault. AI systems complicate that framework. In the case of LLM outputs, it is unclear who the speaker is. Is it the platform, the data scientists behind the platform, the user who created the prompt, or the model itself? It is also difficult to fit AI output into doctrines requiring intent, knowledge, or reckless disregard, especially in public figure cases that require proof of actual malice.
In the Georgia case, the defense won a motion for summary judgment. The court concluded that the output would not reasonably be understood as stating actual facts because the system provided warnings about limitations and potential errors. That reasoning may be vulnerable on appeal, but it shows one approach courts may adopt to reject these claims.
Republication may also result in liability. If someone republishes defamatory AI output as fact, ordinary defamation principles could apply. An unresolved issue is whether the Section 230 safe harbor protects platforms when AI output is generated through interactions between user prompts and the model.
Current defamation law might ultimately be a poor fit for AI-generated speech. Assessing liability may eventually require a different legal framework, such as product liability law.
Podcast Link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi