
If We Cannot Enter the Mind of a Bat, How Can a Computer Enter Ours?



Imagine you are tasked with building a perfect simulation of a bat.

You have access to everything science knows about bat cognition: the mechanics of echolocation, the neural architecture that processes sonar signals, the frequency ranges the bat uses to navigate in darkness, to locate prey, to avoid obstacles at speed. You can model the communication system with exquisite fidelity. You can simulate bat calls, bat responses, bat social dynamics. Your simulation is, by any measurable standard, indistinguishable from the real thing.

But would you know what it is like to be a bat?

The philosopher Thomas Nagel posed this question in his 1974 paper “What Is It Like to Be a Bat?” and the answer, he argued, is no. Not because the simulation lacks data. Not because the model lacks sophistication. But because the bat’s experience of the world — the first-person, felt quality of perceiving through echolocation — is constituted by a biological substrate that no simulation, however complete, can reproduce from the outside. The map, no matter how detailed, is not the territory.

This thought experiment, which Nagel intended as a contribution to the philosophy of mind, has become unexpectedly urgent in 2026. In December 2025, Yann LeCun, one of the three “godfathers of deep learning” and a Turing Award laureate, left Meta after twelve years to found AMI Labs — Advanced Machine Intelligence — reportedly seeking five hundred million euros in pre-launch funding at a valuation of three billion. His thesis, stated without qualification: large language models are a dead end. They perform at the level of language. They do not understand the world.

LeCun is right about that, but not quite for the reasons he gives. The deeper issue is not simply that LLMs lack physical grounding, or that they have been trained on text rather than video, or that they cannot plan or maintain persistent memory. These are real limitations, and the world model research now consuming serious investment at DeepMind, at Runway, at World Labs, and at LeCun’s own new venture is genuinely aimed at addressing them. The deeper issue is epistemological. It concerns what we mean by knowledge, and what part of human knowledge is, in principle, inaccessible to any simulation.

In 1966, the philosopher and physical chemist Michael Polanyi published The Tacit Dimension, in which he articulated an observation deceptively simple in its formulation: we can know more than we can tell. Tacit knowledge — the kind that underlies riding a bicycle, recognising a face, knowing when a sentence sounds wrong, sensing that a negotiation is going badly — resists codification. You cannot transfer it by writing it down, because the act of articulation necessarily leaves something out. The knowledge lives in the doing, not in the description of the doing.

Polanyi’s Paradox, as the economist David Autor later named it, became a canonical explanation for why automation was not consuming all human labour as fast as theorists had predicted. The tasks hardest to automate were not the complex, symbolic, high-status ones — chess, mathematics, legal reasoning. Those turned out to be relatively tractable. The hardest were the ones so basic that humans never thought of them as knowledge at all: walking on uneven ground, folding a towel, reading a room.

The standard account of why LLMs represent a partial breakthrough against Polanyi’s Paradox goes something like this: because LLMs learn from patterns in unstructured data rather than from explicit rules, they can acquire a form of tacit knowledge indirectly. They learn what sounds like good legal argument not from a rulebook but from the accumulated record of what winning lawyers have written. They learn what a persuasive paragraph feels like not from a style guide but from the entire corpus of human persuasion. This is genuine progress.
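As a toy illustration of the mechanism (vastly simpler than an LLM, but the same principle of statistics in, fluency out), here is a minimal sketch in Python: a bigram model that acquires a sense of what comes next from a corpus containing no explicit rules. The corpus and names are illustrative inventions, not anyone’s actual training data.

```python
from collections import Counter, defaultdict

# Toy corpus; in an LLM the same role is played by trillions of tokens.
corpus = ("the court held that the statute applies . "
          "the court found that the statute controls .").split()

# Count which word follows which: pattern extraction, not rule writing.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("court"))    # "held" or "found", learned purely from data
print(most_likely_next("statute"))  # "applies" or "controls"
```

No grammar of legal English appears anywhere in the code; all the model has is frequencies. Scale the corpus and the architecture up by many orders of magnitude and you get the indirect acquisition of tacit patterns that the standard account describes.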

But this account mistakes the boundary of what has been solved. Tacit knowledge is not simply knowledge that happens to resist explicit articulation. It is knowledge that arises from physical-chemical interaction — from the lived process of a body navigating a world that can damage or destroy it. When an LLM produces a persuasive paragraph, it is reproducing the output of that process. It is not reproducing the process itself. The distinction matters, and it matters in a way that cannot be dissolved by scale or architectural refinement alone.

Language is a tool. It is not the totality of human cognition, and it was never meant to be.

Consider what this means. Human cognition includes, beyond language, sensorimotor experience — the felt sense of a body moving through space, the proprioceptive knowledge of where your limbs are, the way smell triggers memory, the way taste encodes aversion and desire. None of this is language. None of this is accessible to a model that processes only tokens. An LLM trained on every description of pain ever written does not know what pain is. It knows what people say about pain, which is a profoundly different thing.

This is the weak form of what I want to call language-plus: the residue of human cognition that exceeds language, defined by the full range of embodied, multisensory experience. This is essentially what LeCun is pointing at when he argues that a four-year-old has processed fifty times more information than the largest language model — not in text, but through the optic nerve alone, at one megabyte per second across sixteen thousand waking hours.
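For scale, the back-of-envelope behind that claim, taking the stated figures at face value (the fifty-fold comparison against LLM training corpora is LeCun’s, not derived here):

$$16{,}000\ \text{h} \times 3{,}600\ \tfrac{\text{s}}{\text{h}} \times 1\ \tfrac{\text{MB}}{\text{s}} \approx 5.8 \times 10^{13}\ \text{bytes} \approx 58\ \text{TB}$$

Tens of terabytes of raw visual signal before a child can read a word of text.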

But there is a strong form of the argument, and it is more radical.

Human cognition is not merely shaped by embodiment in the sense of having additional sensory inputs. It is constitutively biological. What we call common sense, intuition, judgment — the things most resistant to automation — are not simply the accumulation of sensorimotor experience. They are the output of an organism whose entire architecture is oriented toward survival, whose emotional states are produced by the endocrine system, whose social intuitions were calibrated by millions of years of evolution, whose sense of risk is encoded in the amygdala before it ever reaches conscious deliberation.

The endocrine system is not a peripheral module of human cognition. It is part of its substrate. When cortisol floods the system under threat, it changes what you perceive, what you remember, what you decide. When oxytocin is present, you trust differently. When dopamine surges, you learn. Antonio Damasio’s somatic marker hypothesis — the idea that emotion, grounded in bodily states, is not opposed to rationality but constitutive of it — is a serious scientific claim that has accumulated substantial empirical support. The body does not merely deliver inputs to a brain that then reasons. The body is part of the reasoning.

This is the strong form of language-plus: not just the sensory residue, but the entire biological substrate of cognition — evolutionary, neurological, endocrinological — that cannot be captured by any model of what humans say or write or even consciously think.

A functionalist might object: if the simulation produces the same outputs as the biological system, why does the substrate matter? If an AI system reasons as if it has skin in the game — if its loss function mimics the structure of a survival imperative — is there a meaningful sense in which it does not?

This objection needs a serious answer. Let me give one.

Models and biological organisms do not belong to the same dimension of existence. This is not biological essentialism: not a claim that carbon is special, nor that only neurons can think. It is a claim about the categorical difference between two kinds of systems and the conditions under which each arises and operates.

Models benefit from centralization, scale, and the complexity that produces emergent behavior. The larger the model, the more data it has processed, the more sophisticated its outputs. And crucially, a single model can be replicated identically across any number of instances. Its “knowledge” is encoded in weights that are the same in every copy. This is a genuinely new kind of thing in the history of intelligence: a system that exceeds any individual human in the breadth of what it has processed, precisely because it aggregates across the experience of millions of people without being any of them.
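A minimal sketch of that replicability, assuming PyTorch (the framework is incidental to the point): a model’s “knowledge” is a finite set of weights, and copying the weights produces a second instance that behaves identically to the first.

```python
import copy
import torch

# The model's "knowledge" is nothing but these weights.
model_a = torch.nn.Linear(4, 2)

# Replication: export the weights, load them into a fresh instance.
model_b = torch.nn.Linear(4, 2)
model_b.load_state_dict(copy.deepcopy(model_a.state_dict()))

# Both copies now produce bit-identical outputs on any input.
x = torch.randn(1, 4)
with torch.no_grad():
    assert torch.equal(model_a(x), model_b(x))
```

No analogous export exists for a biological organism, which is the asymmetry the next paragraph turns on.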

Biological organisms work on entirely different principles. Evolution does not optimize for a single universal solution. It produces diversity — populations of individuals each adapted to specific constraints, each carrying a slightly different version of the genome, each living a singular life that cannot be copied or merged. The “knowledge” of a biological organism is not stored in weights. It is generated, continuously, through the interaction of a particular body with a particular environment over a particular lifetime. It cannot be replicated because it is not a file. It is a process.

This is where Polanyi’s deeper insight connects. Tacit knowledge is not merely knowledge that happens to be hard to articulate. It is knowledge that is inseparable from the physical-chemical process that generated it, from the specific history of a body that has been threatened, fed, injured, bonded, and bereaved. A simulated survival imperative is categorically different from a biological one not because of the material it runs on, but because of what is at stake. A biological organism that gets the threat assessment wrong dies. Its loss function is not programmed. It is the condition of its existence. Finitude and irreversibility are not features of the biological system. They are its ground.

The functionalist argument — if the outputs are the same, the substrate doesn’t matter — concedes more than it intends. It shifts the claim from “AI understands” to “AI approximates understanding closely enough for practical purposes.” That is a legitimate and important claim. But it is a different claim. And it is importantly silent on what happens when the approximation fails, when the system encounters a situation that falls outside the distribution of its training, or when the stakes of the decision are precisely the kind that require a subject who lives with the consequences.

There is a further point worth making explicit. The three questions this essay has been circling — does AI have first-person experience? does AI understand in the way humans understand? does AI have functional competence in embodied domains? — are distinct, but under the strong form of language-plus they are not three separate problems. If cognition is a phenomenon arising from physical-chemical interaction, then the epistemic is already embodied. There is no “knower” standing apart from the known — only a particular kind of body, in a particular kind of world, for whom certain things matter because its survival depends on them. “Function” in the deep sense implies purpose, and purpose implies a subject for whom something is purposeful. That subject is exactly what a biological organism is and a model is not. What AI systems have is not function in this sense but behavioral approximation, which is enormously useful, and genuinely impressive, and not the same thing.

Now return to the bat.

The bat’s echolocation system is, in one sense, a communication and navigation technology. You can model it. You can simulate its outputs. You can build an AI that predicts, with great accuracy, what a bat would “say” in a given environment. But the bat’s experience of echolocation is not separable from the bat’s biology — the particular architecture of its cochlea, its auditory cortex, its nervous system, its evolutionary history of predation and evasion.

In short, to know what it is like to be a bat, you would need to be a bat. Or something whose biology instantiates the same kind of first-person experience.

This is not a mystical claim. It is a precise one. The phenomenological layer — what philosophers call qualia, the felt quality of experience — is not a further piece of information that a simulation could in principle capture if it only had more data. It is a property of a process: the ongoing physical-chemical process of a living body navigating a world in which it can die. You can simulate the map. You cannot, from the outside, simulate what it is like to inhabit the territory.

LLMs, as simulations of our language layer, are more capable at that layer than any individual human being, because they were trained on more text than a single person could absorb in several lifetimes. But they cannot fully simulate any individual, because no model could possess the totality of a particular person’s experience — and because individual human cognition is generated, not stored. It emerges from a particular biological history, a particular body, a particular set of stakes. The bat’s experience cannot be simulated because it is not a dataset. It is a process. The same is true of yours.

World models are a genuine advance. DeepMind’s Genie 3, released in August 2025, can generate interactive three-dimensional environments in real time, teaching itself the physics of how objects fall and collide without hard-coded rules. LeCun’s forthcoming architecture at AMI Labs aims to build AI systems that maintain persistent memory, understand causal structure, and can plan across time. These are serious attempts to address the weak form of language-plus — to give AI systems a richer model of physical reality that goes beyond token-prediction.

But they do not touch the strong form. A world model that perfectly simulates the physics of the bat’s environment still does not know what it is like to be the bat. The gap is not a gap in data or architectural sophistication. It is a gap between a system that models the world and a system that is in the world — finitely, irreversibly, with something to lose.

None of this is an argument that LLMs are unimportant, or that the transformation they represent is overstated. The opposite, if anything.

What LLMs can do — and this is genuinely consequential — is automate everything that can be written down, recorded, or expressed in audio-visual form. And that, it turns out, is an enormous proportion of what civilization runs on. Human institutions — law, markets, politics, bureaucracy — do not run on tacit knowledge. They run on the language layer of cognition: documents, arguments, precedents, signals, messages, narratives. This is precisely the layer that LLMs simulate with extraordinary fidelity.

But to understand what this means — and why it matters as much as it does — we need to understand what kind of moment we are in. And that requires a longer view.

We are not experiencing a disruption to an otherwise stable civilizational order. We are at a threshold of the same kind as several prior thresholds — each of which transformed not just how information was handled but what kind of world became possible. The right word for such a threshold is singularity: not in the science-fiction sense of a machine intelligence that supersedes humanity, but in the mathematical sense of a point beyond which the prior trajectory cannot be extrapolated. Every major information revolution has been a singularity of this kind. We are at another one now.

When human beings first acquired linguistic ability, something fundamental changed. Language allowed us to form tribes, to coordinate across time and space, to accumulate knowledge beyond what any individual could hold. From language came the possibility of society. Then we learned to encode language — first in pictures on cave walls, then in symbols, then in alphabets. Writing did not merely record thought; it transformed thought. It made possible abstraction at scale, the transmission of ideas across generations, the emergence of law, theology, philosophy, and eventually science. Civilization, in any meaningful sense, is a consequence of the ability to write things down.

It is worth remembering that this transition was not greeted with universal enthusiasm. Socrates distrusted writing. He argued, in Plato’s Phaedrus, that committing thought to text would weaken memory and produce the illusion of knowledge without its substance. He was not wrong about the risks. He was wrong about the trajectory. The very dialogues in which he articulated his distrust of writing survived only because his students wrote them down. Plato, who dramatized Socrates’ arguments against text, is one of the most consequential writers in history. The resistance to each new information technology is a recurring feature of civilizational progression. It is not evidence against the progression.

Modernity coincides with the spread of mass literacy, the printing press, mass media, and entirely new cultural forms: photography, cinema, recorded music. These were not merely new ways of doing old things. They created new ways of being human. New economic activities, new political movements, new art forms, new social identities. The photograph did not just record what a painting depicted; it changed what painting was for. Cinema did not just show stories; it restructured how stories were experienced. Recorded music did not just preserve performance; it separated music from the occasion of its performance entirely, making possible new relationships between sound and daily life that no one in the pre-modern era could have imagined.

At each of these inflections, what changed was not merely the efficiency of information handling. What changed was the layer of information processing that became automated or externalized — and with it, the scope of what human beings could do and become. Writing externalized memory. Print externalized distribution. Mass media externalized broadcast. Each externalization freed human attention for something new, and each created a world that was unintelligible from the vantage point of what preceded it.

We are now at the next inflection — and it is the most radical yet. What is being automated is not memory, or distribution, or broadcast. It is the generation and processing of language itself: the capacity to produce, transform, and respond to text, image, and sound at a scale and fidelity that exceeds any individual human. This is the layer on which all prior civilizational structures were built. Automating it does not disrupt civilization. It changes the substrate on which civilization runs.

This is why the word singularity is apt, if stripped of its science-fiction connotations. We cannot extrapolate from here. The world that becomes possible on the other side of this threshold is not imaginable from inside the world we currently inhabit — for the same reason that the life of a literate person was not imaginable from inside an oral culture. Not because it is worse, but because it simply operates in a different dimension.

There will be new challenges. There will be serious and unresolved ethical debates about authorship, about labour, about the distribution of the gains, about what it means to know something in a world where knowing can be outsourced. These debates are necessary and will not resolve cleanly. But the frame of “is this good or bad” misses the more important point: this is happening, and those who imagine what comes next will be the ones who shape it.

Aldous Huxley called his vision of a technologically administered future a “brave new world” — borrowed, with full irony, from Miranda’s line in The Tempest, spoken by someone who has never left an island and mistakes novelty for wonder. Some will read what is coming as Huxley’s warning: a world of abundance that has traded away something essential. Others will read it as Miranda’s genuine astonishment: a world genuinely new, genuinely strange, genuinely full of possibility. Both readings are available. What is not available is the option of finding it familiar.

It is not post-human. It is post-modern. The difference matters.

What we cannot tell, we still cannot automate. But we should automate everything we can articulate — and doing so will free us, as every previous information revolution has freed us, to discover what we did not know we could not yet say.

As LLMs become integrated into the language layer of our most consequential institutions, the decisions that emerge from those institutions will increasingly bear the signature of a system that has no biological stake in its own outputs. The judgment that a law should be interpreted this way rather than that way, or that a market signal means this rather than that — these will be shaped, in part, by a simulation of human language rather than by the situated judgment of a person who lives with the consequences.

This is not the same as a post-human apocalypse. The humans are still there. They are still in the loop, in some formal sense. But there is a question about what it means for institutions to progressively route their language through a system that cannot, in principle, know what it is like to be the person who will be governed by the law, or priced by the market. The system may produce outputs that are indistinguishable from those a human expert would produce. It cannot produce the thing that a human expert also produces, which is accountability — the lived exposure to the consequences of being wrong.

Polanyi’s Paradox was always recursive. We discover what we cannot articulate by trying to articulate it and finding that something escapes. The AI moment is forcing that recursion at civilizational scale. Each new capability reveals a new layer of what we did not know we were doing. What world models will reveal, I suspect, is not the solution to the problem of tacit knowledge but its next articulation.

As the old saying goes, the map is not the territory. The map is getting better. More than that: we can now navigate the map automatically. For most of human history, map-making was a rare and exalted intellectual skill — to be a cartographer was to be among the most educated people of your age. Now we have vehicles that navigate terrain without a driver, routing themselves through cities their designers never visited. This does not impoverish us. It frees us. Not from the need to go somewhere, but from the requirement that going somewhere must consume our full attention. The map becomes infrastructure, and what we do with the journey changes entirely.

The territory, as always, remains ahead.

A Note on Co-Authorship

This essay was written in conversation with Claude, a large language model developed by Anthropic. The reader may find a certain irony in that — and is invited to sit with it rather than resolve it quickly.

The core thesis was mine throughout: that tacit knowledge is best understood as language-plus, that LLMs are simulations of the language layer of cognition rather than of cognition itself, and that Nagel’s bat offers the clearest illustration of what any such simulation must ultimately leave out. The civilizational framing — the arc from linguistic ability to writing to modernity to this inflection point, the Socrates-against-writing irony, the insistence that this is a singularity in the precise sense and not a catastrophe, and that those who imagine what comes next will drive it — was also mine. Claude stress-tested these positions through several rounds of devil’s advocacy, surfaced and organized the relevant literature, proposed the Huxley and Miranda references as literary anchors, and drafted the prose throughout. The revisions, the additions, the tonal judgments, and every position taken were mine.

After the first complete draft was written, I submitted it for critique. The two reviews I received were generated by other large language models (ChatGPT and Gemini). I include this not as a footnote but as an illustration — and as an instance of the essay’s own argument about what the language layer can and cannot do.

Both critiques were analytically competent. They identified the same philosophical vulnerability — the substrate essentialism problem — organized the relevant objections clearly, and offered structurally sound revision priorities. One critique suggested the institutional argument was the most consequential part of the essay. The other called it the most urgent. Both were right. Neither took a position. Neither said whether the bat argument was ultimately convincing, or whether the worry about stake-less institutions was overstated or understated. Neither had any stake in whether the argument mattered — only in whether it was consistent. That is precisely the asymmetry the essay is arguing for: the language layer functioning at a high level, in the absence of the biological ground that makes a judgment more than an analysis.

There is a further self-critical note I want to make explicit. By routing the critique of this essay through large language models, I performed exactly the institutional act the essay’s final section warns against. I externalized editorial judgment to systems with no stake in the outcome. The critiques were useful. They were not accountable. Whether that distinction mattered for this particular essay — a philosophical piece rather than a law, a market decision, or a political judgment — the reader can decide. But the act itself is an illustration of how naturally and frictionlessly the substitution happens, even when the person performing it is aware of the argument against it.

This division of labour maps onto the essay’s argument precisely. I brought the thesis, the framing, the positions, and the judgment about what matters. Claude and the reviewing models contributed breadth — processing relevant literature, generating structural options, identifying argumentative weak points, drafting fluently across a sustained argument. What none of them contributed was the thing the essay is about: the knowledge of why these questions matter to a particular kind of person living in a particular kind of world, with a particular kind of stake in the answers.

The Socratic irony, again, is not lost on me. Socrates argued that writing weakened thought. We know this because Plato wrote it down. I have argued that the automation of the language layer will not diminish what is most distinctively human. I have done so, in part, by automating the language layer — and then by routing the critique of that argument through more automation. Whether that counts as evidence for the thesis or against it, I leave to the reader. What I am confident of is this: the reader is a biological organism with a stake in the answer. That is not nothing. That, in fact, is everything the essay is about.


