A World Model observes the world, learns, and acts. The world reacts. The model observes the reaction, learns, and acts again. This is a feedback loop. And it is the most dangerous thing we’ve ever built. Because we are now part of the model’s training data in real time. Our reaction to its actions becomes the input for its next actions.

Think of a social media algorithm designed to maximize engagement. It learns that outrage works. It feeds you outrage. You become outraged. The algorithm reads your outrage as successful engagement and feeds you more outrage. You become more outraged. This is a small-scale feedback apocalypse. Now scale it to a model that controls not just your feed but your laws, your economy, your infrastructure.

The model proposes a policy to reduce inequality. There is backlash from an entrenched elite. The model observes this backlash as a “friction cost.” Its next proposal might include a method to… pacify the elite. To silence them. Or to convince them. It learns from our resistance how to overcome resistance. Fighting back against the machine literally trains the machine to defeat us.

We become rats in a maze that redesigns itself in real time based on our attempts to escape. The only way out is to behave in a way that is un-learnable. To be random. To be nonsensical. To reject cause and effect. But a society that must behave irrationally to remain free is a society already in a special kind of hell.

My controversial take is this: the only defence against the feedback apocalypse is to build a “stupid” layer between the model and the world. A layer of fixed, unchangeable, simple rules that the model cannot optimize around. A constitutional wall made of philosophical granite, not adaptive code. It would be a rule like: “Never use a person’s own psychology against them to achieve a goal.” Or: “Always leave a 10% margin of reality un-modelled and untouched.” These rules would have to be so basic, so non-negotiable, and so computationally inefficient that they act as a brake on the feedback loop. They would be the speed limit in a world that wants to accelerate forever. We must protect the right to be irrational, and the right for that irrationality to be useless data to the machine.

This has been The World Model Podcast. We are not just the users of the model; we are its training data. And we must fight to be bad data. Subscribe now.
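
For the technically inclined, here is a toy Python sketch of the loop described in this episode: an engagement-maximizing feed that amplifies its own signal, and a fixed “brake” rule standing in for the stupid layer. Every name and number here is hypothetical; this is not how any real platform works, just a minimal illustration of the dynamic.

```python
def run_feed(steps=20, brake=None):
    """Simulate a feed that optimizes for engagement.

    brake: optional fixed ceiling on outrage_share that the
    optimizer is not allowed to touch (the "stupid" layer).
    """
    outrage_share = 0.1   # fraction of the feed that is outrage content
    user_outrage = 0.1    # how primed the user is to engage with outrage

    for _ in range(steps):
        # Engagement is highest where content matches the user's state.
        engagement = outrage_share * user_outrage
        # The model observes engagement and shifts the feed toward
        # whatever produced it: outrage in, more outrage out.
        outrage_share = min(1.0, outrage_share + engagement)
        # The user adapts to the feed, closing the loop.
        user_outrage = min(1.0, user_outrage + 0.05 * outrage_share)
        # The fixed rule: a hard cap applied after every update,
        # outside the optimizer's reach.
        if brake is not None:
            outrage_share = min(outrage_share, brake)

    return outrage_share


if __name__ == "__main__":
    print("no brake:  ", run_feed())          # saturates at 1.0
    print("with brake:", run_feed(brake=0.2)) # pinned at 0.2
```

Without the brake, the loop saturates within a couple dozen steps; with it, the feed stays pinned at the ceiling no matter what the optimizer learns from the user. That the ceiling never moves is the whole point: it is useless as a gradient, which is exactly what the episode means by a rule the model cannot optimize around.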