Almost one in five lawyers is using AI, according to an American Bar Association survey. But there is a growing number of legal horror stories involving tools like ChatGPT, because chatbots have a tendency to make stuff up — such as legal precedents from cases that never happened. Marketplace’s Meghan McCarty Carino spoke with Daniel Ho at Stanford’s Institute for Human-Centered Artificial Intelligence about the group’s recent study on how frequently three of the most popular language models from OpenAI, Meta and Google hallucinate when asked to weigh in or assist with legal cases.