Law, disrupted

Defamation and AI



John is joined by Robert M. ("Bobby") Schwartz, partner in Quinn Emanuel's Los Angeles office and co-chair of the firm's Media & Entertainment Industry Practice, and Marie M. Hayrapetian, associate in Quinn Emanuel's Los Angeles office. They discuss recent cases testing whether the outputs of large language model (LLM) AI systems may give rise to defamation claims.

In one recent Georgia case, a journalist asked ChatGPT about a lawsuit and received a response stating that a company executive was an embezzler, even though the lawsuit contained no such allegations and the executive was not an embezzler. In another case, Google was sued after its AI overview tool incorrectly stated that a business was being sued by the Minnesota state attorney general for deceptive practices, a false statement that allegedly caused up to $200 million in lost sales. Other examples involve sexualized deepfake images allegedly generated from ordinary photos, creating reputational and privacy harms.

Defamation law assumes a human speaker who publishes a false factual statement with some degree of fault. AI systems complicate that framework. In the case of LLM outputs, it is unclear who the speaker is. Is it the platform, the data scientists behind the platform, the user who created the prompt, or the model itself? It is also difficult to fit AI output into doctrines requiring intent, knowledge, or reckless disregard, especially in public figure cases that require proof of actual malice.

In the Georgia case, the defense won a motion for summary judgment. The court concluded that the output would not reasonably be understood as stating actual facts because the system provided warnings about limitations and potential errors. That reasoning may be vulnerable on appeal, but it shows one approach courts may adopt to reject these claims.

Republication may also result in liability. If someone republishes defamatory AI output as fact, ordinary defamation principles could apply. An unresolved issue is whether the Section 230 safe harbor protects platforms when AI output is generated through interactions between user prompts and the model.

Current defamation law might ultimately be a poor fit for AI-generated speech. Assessing liability for AI-generated speech may eventually require a different legal framework, such as product liability law.

Podcast Link: Law-disrupted.fm
Host: John B. Quinn 
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi


Law, disrupted, by Law, disrupted

4.7 (67 ratings)


More shows like Law, disrupted

  • Masters in Business by Bloomberg (2,172 listeners)
  • Odd Lots by Bloomberg (1,995 listeners)
  • Bloomberg Law by Bloomberg (381 listeners)
  • The Daily by The New York Times (113,446 listeners)
  • Stay Tuned with Preet by Preet Bharara (32,352 listeners)
  • Interesting Times with Ross Douthat by New York Times Opinion (7,237 listeners)
  • FT News Briefing by Financial Times (672 listeners)
  • Strict Scrutiny by Strict Scrutiny (5,849 listeners)
  • All-In with Chamath, Jason, Sacks & Friedberg by All-In Podcast, LLC (10,246 listeners)
  • The Ezra Klein Show by New York Times Opinion (16,543 listeners)
  • Divided Argument by Will Baude, Dan Epps (745 listeners)
  • In Good Company with Nicolai Tangen by Norges Bank Investment Management (187 listeners)
  • The Morgan Housel Podcast by Morgan Housel (989 listeners)
  • Money Stuff: The Podcast by Bloomberg (401 listeners)
  • Unhedged by Financial Times & Pushkin Industries (196 listeners)