In Episode 9 of The World Model Podcast, we arrive at the central thesis behind this entire series: that World Models aren’t just another research direction; they may be the fundamental mechanism required for Artificial General Intelligence.

This episode begins by reframing AGI not as a laundry list of competencies, but as a single, essential capability: rapid adaptation to novelty. A true general intelligence should be able to walk into an unfamiliar environment, infer its rules through interaction, and begin acting effectively, without retraining, without prior data, and without handholding. Today’s AIs, from LLMs to robotics models, fail this test spectacularly. They are trapped within the boundaries of their training distributions.

World Models, the episode argues, provide the missing machinery. An AGI equipped with a powerful world model would probe a new environment: drop an object to learn gravity, push on a wall to test solidity, manipulate an unfamiliar tool to infer its affordances. Each action becomes an experiment. Each experiment refines an internal generative model of how this world works. And once that model is good enough, the system can “close the loop,” planning internally through simulation before acting externally, just as Dreamer agents do today (a toy sketch of this loop appears at the end of this post).

This approach echoes decades of scientific thought. The episode connects this modern architecture to Karl Friston’s free-energy principle and Jürgen Schmidhuber’s long-standing claim that intelligence is fundamentally a prediction engine. Our brains continuously compare sensory input against an internal model of the world, minimizing surprise (a one-line statement of this principle also appears at the end of this post). AI world models are a direct computational realization of the same idea.

The episode’s core argument is bold: the LLM-only path is a dead end for AGI. Large language models can describe a million worlds, but they cannot act in any of them. They lack grounding, embodiment, and the experimental loop that makes intelligence adaptive. The future AGI will look less like GPT and far more like Dreamer: a perception module to encode reality, a transition model to simulate the future, a reward model to guide planning, and perhaps an LLM as a secondary, high-level reasoning component.

This is the controversial claim at the heart of the episode: all viable paths to real-world general intelligence converge on World Models. Other approaches either fit inside this framework or collapse into brittle specialization. The open challenges are no longer conceptual but practical: how to make these models more sample-efficient, more accurate, and more general.

The episode closes with a pivot to the next frontier: hardware. Simulation-driven intelligence demands a new computing substrate, one fundamentally different from the pattern-recognition engines of today. And that is where Episode 10 will begin.

If you want to understand why so many researchers now see World Models as the blueprint for a mind, this episode lays out the case with clarity and conviction.
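A footnote for the technically inclined: Friston’s free-energy principle can be stated in one line. Let o be sensory observations, s hidden world states, p the agent’s generative model, and q(s) its approximate belief about hidden states; the notation here is our own shorthand, not the episode’s. The variational free energy is

$$
F \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\!\big[q(s)\,\big\|\,p(s \mid o)\big] \;-\; \ln p(o) \;\geq\; -\ln p(o).
$$

Because the KL term is never negative, F is an upper bound on surprise, the negative log-evidence of the agent’s observations. An agent that drives F down is therefore forced to build beliefs that explain what it senses, which is exactly the “minimize surprise” loop the episode describes.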
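And for readers who want the loop in code, below is a minimal Python sketch. Everything in it is a toy stand-in of our own invention, not Dreamer’s actual implementation: the FallingBallEnv, the one-parameter WorldModel, and the random-shooting plan_throw are hypothetical simplifications. Real agents like Dreamer learn deep latent dynamics from pixels and plan with a learned actor-critic, but the structure (probe, refine the model, imagine, act) is the same.

```python
# Toy sketch of the world-model loop: probe -> update model -> plan in
# imagination -> act. All components are hypothetical simplifications.
import random

DT = 0.05  # seconds between observation samples

class FallingBallEnv:
    """Ground-truth physics, hidden from the agent: a ball in free fall."""
    G = 9.81

    def drop(self, height, steps):
        """Drop a ball from rest; return observed (time, height) samples."""
        return [((i + 1) * DT, height - 0.5 * self.G * ((i + 1) * DT) ** 2)
                for i in range(steps)]

class WorldModel:
    """The agent's internal generative model; gravity is its one unknown."""
    def __init__(self):
        self.g = 1.0  # initial guess, refined by experiments

    def update(self, start_height, samples):
        # Each drop is an experiment: fit g to h(t) = h0 - 0.5*g*t^2
        # by closed-form least squares over the single parameter g.
        num = sum(0.5 * t * t * (start_height - h) for t, h in samples)
        den = sum((0.5 * t * t) ** 2 for t, _ in samples)
        self.g = num / den

    def imagine_throw(self, v0, t):
        """Imagined rollout: height at time t after an upward throw from 0."""
        return v0 * t - 0.5 * self.g * t * t

def plan_throw(model, target_height, horizon, n_candidates=500):
    """Random-shooting planner: try actions in imagination, keep the best."""
    best_v0, best_err = None, float("inf")
    for _ in range(n_candidates):
        v0 = random.uniform(0.0, 20.0)
        err = abs(model.imagine_throw(v0, horizon) - target_height)
        if err < best_err:
            best_v0, best_err = v0, err
    return best_v0

env, model = FallingBallEnv(), WorldModel()

# 1) Probe the world: drop an object and record what happens.
samples = env.drop(height=10.0, steps=20)
# 2) Refine the internal model from the experiment.
model.update(start_height=10.0, samples=samples)
print(f"inferred gravity: {model.g:.2f}  (true: {env.G})")
# 3) Close the loop: plan purely in imagination, then act once for real.
v0 = plan_throw(model, target_height=3.0, horizon=0.5)
print(f"planned throw velocity: {v0:.2f} m/s")
```

In this sketch the planner’s distance-to-target error stands in for the reward model the episode mentions, and pixel-level perception is elided entirely. The point is only the shape of the loop: real actions are expensive experiments, while imagined ones are free.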