
In this episode of Machine Minds, we step beyond today’s transformer-dominated AI landscape and into a deeper conversation about what’s missing on the path to truly autonomous, long-horizon intelligence. Jacob Buckman, co-founder and CEO of Manifest AI, joins Greg to explore why current AI systems struggle with long-term reasoning, persistent memory, and extended task execution—and what it will take to unlock the next paradigm.
Jacob’s journey into AI began early, fueled by science fiction, programming, and a fascination with building systems that could do meaningful work autonomously. From studying and conducting research at Carnegie Mellon to working at Google Brain, he watched deep learning unify once-fragmented AI subfields—vision, language, speech—under a single scalable framework. That unification shaped his conviction that the next breakthrough wouldn’t come from incremental tuning, but from rethinking a fundamental architectural bottleneck.
At Manifest AI, Jacob and his team are tackling what they believe is the missing piece: scalable long-context intelligence. Their work centers on replacing transformer attention with a new family of architectures called retention models, designed to compress and retain relevant information over time—rather than repeatedly replaying massive histories. The goal: AI systems that can reason, learn, and work continuously over hours, days, or longer.
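To make that architectural contrast concrete, here is a minimal, hypothetical sketch in plain NumPy. It is not the API of Manifest AI's open-source retention package (linked below), and the function names, the linear-state update, and the decay parameter are illustrative assumptions; it only shows the general idea the episode discusses: attention re-reads the entire stored history at every step, while a retention-style recurrence folds the history into a fixed-size state whose per-step cost does not grow with sequence length.

```python
import numpy as np

def attention_step(history, query):
    # Attention-style readout: every new step re-reads the whole stored
    # history, so compute and memory grow with the sequence length t.
    scores = history @ query                  # (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ history                  # weighted summary of all past steps

def retention_style_step(state, x, decay=0.95):
    # Recurrent "compress and retain" update: the past is folded into a
    # fixed-size state, so the cost per step is constant regardless of
    # how long the sequence runs. (Hypothetical update rule for illustration.)
    return decay * state + np.outer(x, x)

# Quick comparison of the two scaling behaviors on a toy stream.
d = 8
rng = np.random.default_rng(0)
state = np.zeros((d, d))
history = []
for t in range(1000):
    x = rng.normal(size=d)
    history.append(x)
    state = retention_style_step(state, x)        # O(d^2) per step, constant
    _ = attention_step(np.stack(history), x)      # O(t * d) per step, growing
```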
In this conversation, Greg and Jacob explore:
If you’re building AI systems, researching foundations of intelligence, or trying to understand what comes after today’s models, this episode offers a rare, deeply reasoned look at where the field may be heading—and why architectural simplicity could unlock far more than brute force scale.
Learn more about Manifest AI: https://manifestai.com
Explore the open-source retention models: pip install retention
Connect with Jacob Buckman on LinkedIn: https://www.linkedin.com/in/jacobbuckman
Connect with Greg Toroosian on LinkedIn: https://www.linkedin.com/in/gregtoroosian
By Greg Toroosian