Machine Learning Street Talk (MLST)

The Secret Engine of AI - Prolific [Sponsored] (Sara Saab, Enzo Blindow)



We sat down with Sara Saab (VP of Product at Prolific) and Enzo Blindow (VP of Data and AI at Prolific) to explore the critical role of human evaluation in AI development and the challenges of aligning AI systems with human values. Prolific is a human annotation and orchestration platform for AI used by many of the major AI labs. This is a sponsored show in partnership with Prolific.


**SPONSOR MESSAGES**

cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy

October SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!), plus OpenAI, Anthropic, NVIDIA, and more

Hiring an SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst

Submit investment deck: https://cyber.fund/contact?utm_source=mlst


While technologists often want to remove humans from the loop for speed and efficiency, today's non-deterministic AI systems actually require more human oversight than ever. Prolific's approach is to put "well-treated, verified, diversely demographic humans behind an API" - making human feedback as accessible as any other infrastructure service.
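
To make the "humans behind an API" idea concrete, here is a minimal, hypothetical sketch of what requesting human feedback as an infrastructure call could look like. The client class, endpoint, and fields below are invented for illustration and do not reflect Prolific's actual API.

```python
# Hypothetical sketch of "human feedback as infrastructure": submit an annotation
# task and get back verified human judgements, much like calling any other service.
# The class and fields are invented for illustration, not Prolific's real API.
from dataclasses import dataclass, field


@dataclass
class FeedbackTask:
    prompt: str                                        # model output to be judged
    question: str                                      # what we ask the human raters
    demographics: dict = field(default_factory=dict)   # e.g. {"region": "UK"}


class HumanFeedbackClient:
    """Stand-in for a human-annotation service client (illustrative only)."""

    def submit(self, task: FeedbackTask, n_raters: int = 5) -> list[dict]:
        # In a real system this would dispatch the task to verified, fairly paid
        # participants matching the demographic filters, then collect results.
        # Here we return placeholder judgements so the sketch runs end to end.
        return [{"rater": i, "rating": 4, "comment": "placeholder"} for i in range(n_raters)]


if __name__ == "__main__":
    client = HumanFeedbackClient()
    task = FeedbackTask(
        prompt="Model reply to a customer complaint...",
        question="Is this reply polite and culturally appropriate? (1-5)",
        demographics={"region": "UK"},
    )
    ratings = client.submit(task, n_raters=3)
    print(sum(r["rating"] for r in ratings) / len(ratings))
```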


When AI models like Grok 4 achieve top scores on technical benchmarks but feel awkward or problematic to use in practice, it exposes the limitations of our current evaluation methods. The guests argue that optimizing for benchmarks may actually weaken model performance in other crucial areas, like cultural sensitivity or natural conversation.


We also discuss Anthropic's research showing that frontier AI models, when given goals and access to information, independently arrived at solutions involving blackmail - without any prompting toward unethical behavior. Even more concerning, the more sophisticated the model, the more susceptible it was to this "agentic misalignment."


Enzo and Sara present Prolific's "Humane" leaderboard as an alternative to existing benchmarking systems. By stratifying evaluations across diverse demographic groups, they reveal that different populations have vastly different experiences with the same AI models.
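
As a rough sketch of what stratified evaluation means in practice (hypothetical data and field names; not the leaderboard's actual methodology), the idea is to report a preference score per demographic group rather than collapsing everything into one aggregate number:

```python
# Minimal sketch of stratified evaluation: aggregate human preference judgements
# per (model, demographic group) instead of one global score. Data and field
# names are invented for illustration.
from collections import defaultdict

# Each record: the model judged, the rater's demographic group, and whether the
# rater preferred that model's response (1) or not (0).
judgements = [
    {"model": "model_a", "group": "18-24, UK", "preferred": 1},
    {"model": "model_a", "group": "55+, India", "preferred": 0},
    {"model": "model_b", "group": "18-24, UK", "preferred": 0},
    {"model": "model_b", "group": "55+, India", "preferred": 1},
]

totals = defaultdict(lambda: [0, 0])  # (model, group) -> [preferred count, total count]
for j in judgements:
    key = (j["model"], j["group"])
    totals[key][0] += j["preferred"]
    totals[key][1] += 1

# A win rate per (model, demographic group) makes gaps between populations visible.
for (model, group), (wins, n) in sorted(totals.items()):
    print(f"{model:8s} | {group:12s} | win rate = {wins / n:.2f}")
```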


Looking ahead, the guests imagine a world where humans take on coaching and teaching roles for AI systems - similar to how we might correct a child or review code. This also raises important questions about working conditions and the evolution of labor in an AI-augmented world. Rather than replacing humans entirely, we may be moving toward more sophisticated forms of human-AI collaboration.


As AI tech becomes more powerful and general-purpose, the quality of human evaluation becomes more critical, not less. We need more representative evaluation frameworks that capture the messy reality of human values and cultural diversity.


Visit Prolific:

https://www.prolific.com/

Sara Saab (VP Product):

https://uk.linkedin.com/in/sarasaab


Enzo Blindow (VP Data & AI):

https://uk.linkedin.com/in/enzoblindow


TRANSCRIPT:

https://app.rescript.info/public/share/xZ31-0kJJ_xp4zFSC-bunC8-hJNkHpbm7Lg88RFcuLE


TOC:

[00:00:00] Intro & Background

[00:03:16] Human-in-the-Loop Challenges

[00:17:19] Can AIs Understand?

[00:32:02] Benchmarking & Vibes

[00:51:00] Agentic Misalignment Study

[01:03:00] Data Quality vs Quantity

[01:16:00] Future of AI Oversight


REFS:

Anthropic Agentic Misalignment

https://www.anthropic.com/research/agentic-misalignment


Value Compass

https://arxiv.org/pdf/2409.09586


Reasoning Models Don’t Always Say What They Think (Anthropic)

https://www.anthropic.com/research/reasoning-models-dont-say-think

https://assets.anthropic.com/m/71876fabef0f0ed4/original/reasoning_models_paper.pdf


Apollo Research - We Need a Science of Evals (blog post)

https://www.apolloresearch.ai/blog/we-need-a-science-of-evals


The Leaderboard Illusion (MLST video)

https://www.youtube.com/watch?v=9W_OhS38rIE


The Leaderboard Illusion [2025]

Shivalika Singh et al

https://arxiv.org/abs/2504.20879


(Truncated, full list on YT)


