TechFirst with John Koetsier

Machine unlearning: AI's missing link?



AI models are powerful, but they don’t forget. And that's a problem.


They hallucinate. They inherit bias. They absorb sensitive data. And once they’re trained, fixing those issues is painfully expensive: retraining can take weeks and cost tens of millions of dollars. And any guardrails an AI company puts up are brittle.


What if you could perform surgery on the model itself?


In this episode of TechFirst, John Koetsier sits down with Ben Luria, co-founder of Hirundo, to explore machine unlearning, a new approach that selectively removes unwanted data, behaviors, and vulnerabilities from trained AI systems.
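
The episode doesn’t spell out Hirundo’s algorithm, but the research literature gives a feel for the mechanics. Below is a minimal, hedged sketch of one common family of unlearning methods: gradient ascent on a “forget set,” balanced against ordinary training on a “retain set” so general capability survives. The model interface, the batch fields, and the `alpha` weighting are illustrative assumptions, not Hirundo’s API.

```python
# Minimal machine-unlearning sketch (an approach from the research
# literature, NOT Hirundo's proprietary method). Assumes a causal LM
# that maps input_ids -> logits of shape (batch, seq_len, vocab_size).
import torch
import torch.nn.functional as F

def unlearning_step(model, forget_batch, retain_batch, optimizer, alpha=0.5):
    """One update that pushes the model away from the forget data while
    pulling it back toward the retain data. `alpha` trades off the two."""
    optimizer.zero_grad()

    # Ascend on the forget set: negating this loss below means gradient
    # descent *increases* loss on the examples we want forgotten.
    forget_logits = model(forget_batch["input_ids"])
    forget_loss = F.cross_entropy(
        forget_logits.flatten(0, 1), forget_batch["labels"].flatten()
    )

    # Descend on the retain set so benchmark performance is preserved.
    retain_logits = model(retain_batch["input_ids"])
    retain_loss = F.cross_entropy(
        retain_logits.flatten(0, 1), retain_batch["labels"].flatten()
    )

    total = -alpha * forget_loss + (1.0 - alpha) * retain_loss
    total.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```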


Hirundo claims it can:

• Cut hallucinations in half

• Massively reduce bias

• Reduce successful prompt injection attacks by over 90%

• Do it in under an hour on a single GPU

• Preserve benchmark performance


Instead of adding more guardrails, machine unlearning works inside the model, identifying problematic weights, isolating behavioral vectors, and surgically removing risks without degrading quality.
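
That “behavioral vectors” language maps onto a published research technique known as directional ablation: isolate a behavior as a direction in activation space, then project that direction out of the weights so the model can no longer write along it. Here’s a hedged sketch under illustrative assumptions (the tensor shapes and the model layout in the usage snippet are hypothetical); Hirundo’s actual method isn’t public and may well differ.

```python
# Hedged sketch of removing a "behavioral vector" via directional
# ablation, a research technique that matches the description above.
import torch

def behavior_direction(acts_trigger, acts_neutral):
    """Isolate a behavior as the normalized difference between mean
    residual-stream activations on prompts that trigger the behavior
    and on matched neutral prompts. Shapes: (n_prompts, d_model)."""
    v = acts_trigger.mean(dim=0) - acts_neutral.mean(dim=0)
    return v / v.norm()

def ablate_direction(weight, v):
    """Project the behavior direction out of a (d_out, d_in) weight
    matrix's output space, i.e. compute (I - v v^T) W, so the layer
    can no longer produce outputs along v."""
    return weight - torch.outer(v, v @ weight)

# Usage sketch: scrub the direction from every MLP output projection.
# `model.layers` and `mlp.out_proj` are a hypothetical model layout.
# with torch.no_grad():
#     for layer in model.layers:
#         layer.mlp.out_proj.weight.copy_(
#             ablate_direction(layer.mlp.out_proj.weight, v)
#         )
```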


If AI is going mainstream in enterprises, it needs a remediation layer. Is machine unlearning the missing piece?



Guest


Ben Luria

Co-Founder, Hirundo

https://www.hirundo.io



Topics Covered

• Why AI models “can’t forget”

• The difference between hallucinations and inaccuracies

• Why guardrails aren’t enough

• How prompt injection works — and how to reduce it

• Removing PII and noncompliant training data

• AI security at the model level

• Why machine unlearning could become standard by 2030



If you’re building, deploying, or investing in AI, this is a conversation you can’t miss.


👉 Subscribe for more deep dives into AI, innovation, and the future of tech:

https://techfirst.substack.com



⏱ Chapters


00:00 – Why We Need Machine Unlearning

01:12 – What Is Machine Unlearning?

03:40 – Why AI Can’t “Forget” (The Pink Elephant Problem)

06:15 – Guardrails vs True Model Remediation

09:05 – The Wild West of AI Data & Legal Risk

11:20 – How Machine Unlearning Works (Detection, Isolation, Remediation)

16:10 – Performing “Neurosurgery” on LLMs

19:30 – Hallucinations vs Inaccuracies Explained

23:45 – Reducing Prompt Injection by 90%

28:30 – Working with AI Labs & Enterprises

32:00 – Will Unlearning Become Standard by 2030?

34:15 – Final Thoughts
