


As ChatGPT and other generative AI platforms have taken off, they have demonstrated the exciting potential benefits of artificial intelligence while also raising a myriad of open questions and complexities, from how to regulate the pace of AI's growth to whether AI companies can be held liable for misinformation reported or generated through their platforms. Earlier this week, the first-ever AI defamation lawsuit was filed by a Georgia radio host who claims that ChatGPT falsely accused him of embezzling money. The case presents new and never-before-answered legal questions: What happens if AI reports false and damaging information about a real person? Should that person be able to sue the AI's creator for defamation? In this episode, two leading First Amendment scholars, Eugene Volokh of UCLA Law and Lyrissa Lidsky of the University of Florida Law School, join to explore the emerging legal issues surrounding artificial intelligence and the First Amendment. They discuss whether AI has constitutional rights; who, if anyone, can be sued when AI makes up or misstates information; whether artificial intelligence might lead to new doctrines regarding the regulation of online speech; and more.
Resources:
Questions or comments about the show? Email us at [email protected].
Continue today’s conversation on Facebook and Twitter using @ConstitutionCtr.
Sign up to receive Constitution Weekly, our email roundup of constitutional news and debate, at bit.ly/constitutionweekly.
By National Constitution Center
