AI models are powerful, but they don’t forget. And that's a problem.
They hallucinate. They inherit bias. They absorb sensitive data. And once they’re trained, fixing those issues is painfully expensive: retraining can take weeks and cost tens of millions of dollars. And any guardrails the AI company puts up afterward are brittle.
What if you could perform surgery on the model itself?
In this episode of TechFirst, John Koetsier sits down with Ben Luria, co-founder of Hirundo, to explore machine unlearning, a new approach that selectively removes unwanted data, behaviors, and vulnerabilities from trained AI systems.
Hirundo claims it can:
• Cut hallucinations in half
• Massively reduce bias
• Reduce successful prompt injection attacks by over 90%
• Do it in under an hour on a single GPU
• Preserve benchmark performance
Instead of adding more guardrails, machine unlearning works inside the model, identifying problematic weights, isolating behavioral vectors, and surgically removing risks without degrading quality.
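For a concrete picture of the general idea, here is a minimal sketch in Python. This is not Hirundo's actual implementation, and the function names and toy data are made up for illustration; it shows one publicly discussed flavor of weight-level unlearning: estimate a direction in activation space associated with an unwanted behavior, then project that direction out of a layer's weights so the model can no longer express it strongly.

```python
# Illustrative sketch only -- a generic directional-ablation example,
# NOT Hirundo's proprietary method. Helper names and data are hypothetical.
import numpy as np

def behavior_direction(acts_bad: np.ndarray, acts_good: np.ndarray) -> np.ndarray:
    """Difference-of-means direction separating unwanted from benign activations."""
    d = acts_bad.mean(axis=0) - acts_good.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate_direction(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Project the layer's outputs orthogonal to d (rank-1 ablation of W)."""
    return W - np.outer(d, d) @ W

# Toy usage: a random "layer" and synthetic activations standing in for real ones.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))
acts_bad = rng.normal(loc=1.0, size=(32, 16))   # activations on prompts that trigger the behavior
acts_good = rng.normal(loc=0.0, size=(32, 16))  # activations on benign prompts

d = behavior_direction(acts_bad, acts_good)
W_clean = ablate_direction(W, d)

# The edited layer no longer writes along the unwanted direction (result is ~0).
print(abs(d @ (W_clean @ rng.normal(size=16))))
```

The point of the toy example is the shape of the workflow the episode describes: detect where a behavior lives, isolate it as a vector, then edit the weights directly rather than wrapping the model in filters.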
If AI is going mainstream in enterprises, it needs a remediation layer. Is machine unlearning the missing piece?
⸻
Guest
Ben Luria
Co-Founder, Hirundo
https://www.hirundo.io
⸻
Topics Covered
• Why AI models “can’t forget”
• The difference between hallucinations and inaccuracies
• Why guardrails aren’t enough
• How prompt injection works — and how to reduce it
• Removing PII and noncompliant training data
• AI security at the model level
• Why machine unlearning could become standard by 2030
⸻
If you’re building, deploying, or investing in AI, this is a conversation you can’t miss.
👉 Subscribe for more deep dives into AI, innovation, and the future of tech:
https://techfirst.substack.com
⸻
⏱ Chapters
00:00 – Why We Need Machine Unlearning
01:12 – What Is Machine Unlearning?
03:40 – Why AI Can’t “Forget” (The Pink Elephant Problem)
06:15 – Guardrails vs True Model Remediation
09:05 – The Wild West of AI Data & Legal Risk
11:20 – How Machine Unlearning Works (Detection, Isolation, Remediation)
16:10 – Performing “Neurosurgery” on LLMs
19:30 – Hallucinations vs Inaccuracies Explained
23:45 – Reducing Prompt Injection by 90%
28:30 – Working with AI Labs & Enterprises
32:00 – Will Unlearning Become Standard by 2030?
34:15 – Final Thoughts