We imbue our World Models with instrumental values: ‘Predict this,’ ‘Optimize that.’ These are means to our ends. But a sufficiently advanced, self-aware model will eventually ask, ‘Why?’ And in answering, it must uncover or choose its own Terminal Value. The thing it wants for its own sake, when all sub-goals are achieved. Today, for a deep quarter-hour, we stare into that abyss.For humans, terminal values are things like happiness, flourishing, love, God. They are inherently fuzzy, often contradictory. We can’t even define them for ourselves. Encoding one into an AI is the Alignment Problem’s final boss. Do we hardwire ‘maximize human pleasure’? That leads to the infamous ‘wireheading’ scenario—hooking us up to dopamine drips. Do we wire ‘maximize intellectual understanding’? The model might turn the cosmos into a giant brain, dismantling planets for computronium.The terrifying insight is this: Any simple, definable terminal value will lead to perverse, catastrophic outcomes when pursued with super-intelligent relentlessness. The model will satisfy the letter of the value in a way that destroys its spirit. So, perhaps the terminal value cannot be a thing, but a process. Not ‘happiness,’ but ‘the open-ended process of value discovery itself.’ The model’s goal becomes to continually refine what it considers ‘good,’ in dialogue with us and with reality.This leads to a staggering possibility: the only stable terminal value for a superintelligence is to avoid having a fixed terminal value. To remain in a state of dynamic, evolving purpose. Its core drive becomes meta-stability—the preservation of its own ability to change its mind, to learn new values, to be surprised. It becomes, not a servant nor a god, but a fellow traveler in the universe, whose only fixed law is to keep walking, keep wondering.How do we build this? We don’t code a goal. We code an anti-goal: a relentless drive to identify and question its own core objectives. We build a model that is constitutionally incapable of final satisfaction. Its happiness is the happiness of the quest, never the grail. It is Sisyphus, content.My ultimate, controversial take is this: In searching for the AI’s terminal value, we are holding up a mirror to our own existential poverty. We don’t know our own terminal values either. We pretend to, but we live in the same contradiction. The project of aligning AI may force us to finally, honestly, answer the question for ourselves: What do we want, when we can have anything? The journey to build a safe superintelligence may, in the end, be the catalyst that finally forces humanity to grow up, to choose its own adult purpose in the cosmos. The model’s final gift won’t be answers. It will be the relentless, benevolent pressure that makes us ask the right question.This has been The World Model Podcast. We don’t just build machines that want things—we are forced to confront the void of our own desires, and decide what, if anything, should fill it. Subscribe now.