Modern AI has been dominated by one idea: predict the next token. But what if intelligence doesn’t have to work that way?
In this episode of The Neuron, we’re joined by Eve Bodnia, Founder and CEO of Logical Intelligence, to explore energy-based models (EBMs)—a radically different approach to AI reasoning that doesn’t rely on language, tokens, or next-word prediction.
With a background in theoretical physics and quantum information, Eve explains how EBMs operate over an energy landscape, allowing models to reason about many possible solutions at once rather than guessing sequentially. We discuss why this matters for tasks like spatial reasoning, planning, robotics, and safety-critical systems—and where large language models begin to show their limits.
You’ll learn:
What energy-based models are (in plain English)
Why token-free architectures change how AI reasons
How EBMs reduce hallucinations through constraints and verification
Why EBMs and LLMs may work best together, not in competition
What this approach reveals about the future of AI systems
To learn more about Eve’s work, visit https://logicalintelligence.com.
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
By The Neuron