We think we guide AI with language. “Generate a picture.” “Optimize this.” But language is slippery. The real interface is made of levers. Conceptual levers. A “lever” is any input where you have an intuitive sense that pulling it will produce a predictable class of output. The “creativity” slider in an image generator is a lever. The “risk aversion” parameter in a financial model is a lever. We don’t understand the math, but we learn that moving this makes things weirder, and moving that makes things safer.

The problem is, these levers are lies. They’re not connected to a single, clean mechanism. That “creativity” slider is probably just adding noise to an internal vector. It’s a placebo with side effects. You’re not steering the AI’s imagination; you’re jiggling its brain and seeing what falls out. But because we get some correlation, we believe in the lever. We build a mythology of control around it.

Now imagine the levers for a World Model that runs a society. “Economic Equality.” “Cultural Vibrancy.” “Technological Progress.” These are not levers. They are bumper stickers you’re slapping on a hurricane. Pulling the “Economic Equality” lever might cause the model to implement a perfect, soul-crushing redistribution of all resources, eliminating the incentive for art, innovation, and surprise. You wanted fairness; you got grey paste for everyone. The lever worked! Just not in the way your human brain imagined.

We are primed to seek levers. It’s how we interact with the physical world. But a World Model’s internal state is a continent, and we’re looking for a steering wheel. We’ll find something that feels like a steering wheel—a promising knob or switch—and yank it, only to discover we’ve just flushed the toilets on the entire continent.

My controversial take is this: the only safe interface for a powerful World Model is one with deliberately frustrating, indirect levers. Levers that work slowly, with lag. Levers that affect a dozen things at once, so you can’t fool yourself into thinking you’re doing just one thing. Levers that sometimes don’t work at all, to remind you of your ignorance. We need an interface that fights our desire for simple control, one that trains us to think in terms of gardening, not driving. You can’t lever a plant into growing. You adjust water, light, soil—and then you wait, and you accept that the plant has its own agenda. Our future is not as pilots. It’s as very, very anxious gardeners.

This has been The World Model Podcast. We don’t just want control—we need an interface that teaches us how little of it we actually have. Subscribe now.
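
For the curious, here is a toy Python sketch of the “creativity” slider claim above. Nothing in it is any real generator’s API; the function, the latent vector, and the noise scale are invented for illustration. The point is only that a semantic-sounding slider can reduce to “add noise everywhere and see what falls out.”

```python
# Toy illustration of a "creativity" slider that is not a clean semantic
# control, just a noise scale on an internal vector. All names hypothetical;
# no real image generator works exactly like this.
import random


def apply_creativity(latent: list[float], creativity: float) -> list[float]:
    """Perturb a latent vector with noise proportional to the slider.

    `creativity` in [0, 1] touches no notion of "imagination"; it only
    widens the Gaussian noise added to every dimension at once.
    """
    sigma = creativity * 0.5  # arbitrary scale, chosen for the toy
    return [x + random.gauss(0.0, sigma) for x in latent]


latent = [0.2, -1.3, 0.7, 0.0]        # stand-in for a model's internal state
print(apply_creativity(latent, 0.0))  # unchanged: the "safe" end of the lever
print(apply_creativity(latent, 1.0))  # jiggled everywhere: "weirder", not "more creative"
```

Run it twice at the same setting and you get two different outputs: correlation without mechanism, which is exactly why the lever earns our belief anyway.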
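
And a minimal sketch of what a deliberately frustrating lever might look like, under the same caveat: `FrustratingLever`, its coupling table, and the toy world state are all hypothetical. It bakes in the three properties argued for above: lag, coupling across many things at once, and occasional silent failure.

```python
# Minimal sketch of a "deliberately frustrating" lever over a toy world
# state of named scalars. Hypothetical names throughout.
import random


class FrustratingLever:
    def __init__(self, coupling: dict[str, float], lag_steps: int = 10,
                 failure_rate: float = 0.2) -> None:
        self.coupling = coupling          # one pull touches many variables at once
        self.lag_steps = lag_steps        # effects arrive slowly, spread over time
        self.failure_rate = failure_rate  # sometimes nothing happens at all
        self._queue: list[dict[str, float]] = []

    def pull(self, amount: float) -> None:
        """Queue a pull. It may silently do nothing, and never acts instantly."""
        if random.random() < self.failure_rate:
            return  # a reminder of your ignorance: the pull is dropped
        per_step = {k: v * amount / self.lag_steps for k, v in self.coupling.items()}
        self._queue.extend([per_step] * self.lag_steps)

    def step(self, world: dict[str, float]) -> None:
        """Advance one tick; apply the next slice of any queued effects."""
        if self._queue:
            for key, delta in self._queue.pop(0).items():
                world[key] = world.get(key, 0.0) + delta


# Usage: pulling "equality" also moves innovation and surprise, so you
# cannot fool yourself into thinking you are doing just one thing.
world = {"equality": 0.3, "innovation": 0.8, "surprise": 0.6}
lever = FrustratingLever({"equality": +1.0, "innovation": -0.4, "surprise": -0.2})
lever.pull(0.5)
for _ in range(10):  # nothing is visible until you wait
    lever.step(world)
print(world)
```

One deliberate choice in the sketch: `pull()` gives no feedback on whether it was dropped. You only find out by watching the world, which is the gardening posture the transcript argues for.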