Version Up

AI Benchmarking: Judging LegalTech on the Merits

The inability to make apples-to-apples comparisons of AI tools is a major barrier to effective procurement and deployment of AI in legal. It's also a problem for vendors, who struggle to make better-performing products stand out from better-funded ones. Anna Guo and Elgar Weijtmans seek to solve this problem with Legal Benchmarks.

Anna and Elgar joined forces after independently discovering the same problem: legal teams are choosing AI tools on vibes, marketing, and who they know, not on evidence. Their research found that purpose-built legal AI tools were not reliably outperforming general-purpose models. So they developed the Legal AI Evaluation Framework, a community-sourced assessment of AI legal tools that gives buyers a structured, defensible procurement process.

Their message is twofold. For legal enterprises and users, it's more obvious: being able to compare which AI tool is more accurate, safer, or faster helps them make better decisions about which tool to buy (or whether to buy one at all). But just as compelling is their value proposition for the vendors who sell these tools: improving transparency in the market allows the best products to rise to the top.

It's a great collaboration that, like many I've encountered in legal tech, links up diverse capabilities, personalities, and geographies to try to solve an acute legal world problem with a technology-driven solution. I was excited to have them on the pod and look forward to seeing what their framework has in store for the legal industry.

https://www.legalbenchmarks.ai/ 

https://www.linkedin.com/in/anna-guo-255ba7b0/

https://www.linkedin.com/in/weijtmans/ 

Version Up, by Kaj Rozga