Welcome back. The open-source movement has been the engine of software innovation for decades, democratizing access and accelerating progress through collaboration. But we are no longer just sharing word processors or web servers. We are approaching the point of sharing the architecture of understanding itself. Today, we confront a monumental dilemma: should the code for powerful World Models, the blueprints for simulating reality, be open to all, or locked away? Is this technology too dangerous to free?

The argument for open-sourcing is powerful and principled. First, scientific progress. World Models for climate, biology, or materials science could accelerate solutions to existential threats if every researcher in the world can build upon them. Second, democratization and oversight. Concentrating this power in a few corporate or state hands is a recipe for tyranny. Open models allow for public audit, for the discovery of biases, and for a diversity of applications that a single owner might never pursue. Third, safety. As the saying goes, 'given enough eyeballs, all bugs are shallow.' A global community might identify and patch dangerous failures or alignment issues faster than a closed team.

But the arguments for closed, controlled development are terrifyingly concrete. This is the dual-use problem on steroids. A World Model that can simulate protein folding can also simulate novel pathogens or bioweapons. A model that can simulate financial markets can be used to engineer crashes or launder money at planetary scale. A social World Model is the ultimate tool for mass manipulation and psychological warfare.

If such a model is open-sourced, there is no putting the genie back in the bottle. A malicious actor, whether a rogue state, a terrorist group, or a criminal syndicate, could download it and fine-tune it for catastrophic ends with minimal resources. The barrier to entry for world-altering malice would collapse.

We are already seeing this tension play out. The open-source release of powerful language models like Meta's LLaMA has sparked both an explosion of innovation and a wave of concern about their use to generate misinformation and malware. For World Models, the stakes are orders of magnitude higher. We are not talking about generating toxic text; we are talking about generating blueprints for physical or social disruption.

This leads to the concept of graduated release. Perhaps the core, dangerous 'transition model', the engine of prediction, is kept under strict control, while the interfaces and applications built on top of it are open. Or perhaps models are released with 'safety governors' hard-coded, like the governor in a car that prevents it from exceeding a certain speed. But history shows that such restrictions are often hacked or removed.

My controversial take is that the era of naive open-source altruism is over for foundational AI. We are playing with the intellectual equivalent of nuclear physics. The decision to open-source a powerful World Model cannot be made by a tech company's CEO or a research lab's director. It requires a new form of global governance. We need international treaties and verification regimes, akin to those for nuclear non-proliferation or chemical weapons, but for minds.

The blueprint for reality is the ultimate strategic asset. Treating it like just another JavaScript library is a path to chaos.
We must invent a new model of responsible, transparent, yet secure development for technologies that can simulate, and therefore manipulate, the foundations of our world.

Not all uses of this power need be destructive or secretive. In our next episode, we explore its most beautiful application: as a new medium for human creativity.

This has been The World Model Podcast. We analyze the politics of creation. Subscribe now.