
It's hard to imagine doing web research without using LLMs. Chatbots may be the first thing you turn to for questions like: What are the companies currently working on nuclear fusion and who invested in them? What is the performance gap between open and closed-weight models on the MMLU benchmark? Is there really a Tesla Model H?
So which LLMs, and which "Search", "Research", "Deep Search" or "Deep Research" branded products, are best? How good are their epistemics, compared to if you did the web research yourself?
Last month we (FutureSearch) published Deep Research Bench (DRB), a benchmark designed to evaluate LLM agents on difficult web research tasks using frozen snapshots of the internet. In this post, we share the non-obvious findings, suggestions, and failure modes that we think might be useful to anyone who uses LLMs with web search enabled.
tl;dr
---
Outline:
(01:09) tl;dr
(02:19) What We Did
(04:01) What We Found
(04:05) Commercial Web Research Tools
(04:40) The Good
(06:04) The Mediocre
(06:33) The Bad
(07:06) Regular vs. Deep Research Mode
(07:55) Using LLMs With Agents via the API
(10:42) Open vs. Closed Models
(11:22) Thinking vs. Non-Thinking Models
(11:52) Epistemic concerns with LLM-powered web research tools
(12:59) Conclusions
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.