Almost one in five lawyers is using AI, according to an American Bar Association survey. But there is a growing number of legal horror stories involving tools like ChatGPT, because chatbots have a tendency to make stuff up — such as legal precedents from cases that never happened. Marketplace's Meghan McCarty Carino spoke with Daniel Ho at Stanford's Institute for Human-Centered Artificial Intelligence about the group's recent study on how frequently three of the most popular language models from OpenAI, Meta and Google hallucinate when asked to weigh in or assist with legal cases.
By Marketplace, rated 4.5 (1,256 ratings)