
Almost one in five lawyers is using AI, according to an American Bar Association survey. But a growing number of legal horror stories involve tools like ChatGPT, because chatbots have a tendency to make things up, such as citing legal precedents from cases that never happened. Marketplace’s Meghan McCarty Carino spoke with Daniel Ho at Stanford’s Institute for Human-Centered Artificial Intelligence about the group’s recent study on how frequently three of the most popular language models, from OpenAI, Meta and Google, hallucinate when asked to weigh in on or assist with legal cases.
By Marketplace
