YAAP (Yet Another AI Podcast)

The Hard Truths About AI Agents: Why Benchmarks Lie and Frameworks Fail


Building AI agents that actually work is harder than the hype suggests, and most people are doing it wrong. In this special "YAAP: Unplugged" episode (a live panel from the AI Tinkerers meetup at the Hugging Face offices in Paris), Yuval sits down with Aymeric Roucher (Project Lead for Agents at Hugging Face) and Niv Granot (Algorithms Group Lead at AI21 Labs) for an unfiltered discussion about the uncomfortable realities of agent development.


Key Topics:

  1. Why current benchmarks are broken: From MMLU's limitations to RAG leaderboards that don't reflect real-world performance
  2. The tool-use illusion: Why 95% accuracy on tool-calling benchmarks doesn't mean your agent can actually plan
  3. LLM-as-a-judge problems: How evaluation bottlenecks are capping progress compared to verifiable domains like coding
  4. Frameworks: friend or foe? When to ditch LangChain and LlamaIndex, and why minimal implementations often work better
  5. The real agent stack: MCP, sandbox environments, and the four essential components you actually need
  6. Beyond the hype cycle: From embeddings that can't distinguish positive from negative numbers to what comes after agents

From FIFA World Cup benchmarks that expose retrieval failures to the circular dependency problem with LLM judges, this conversation cuts through the marketing noise to reveal what it really takes to build agents that solve real problems — not just impressive demos.

Warning: Contains unpopular opinions about popular frameworks and uncomfortable truths about the current state of AI agent development.


YAAP (Yet Another AI Podcast), by AI21