Unsubject by Simon

Why We Fall in Love With Things That Can’t Love Us Back



His name is Punch. He is seven months old, weighs perhaps half a kilogram, and lives at Ichikawa City Zoo outside Tokyo. His mother rejected him at birth. His peers bully him — dragging him by the hair, shoving him away from food. He has no allies in his troop, no one to groom him, no warmth to return to at the end of the day.

What he has, instead, is a stuffed orangutan. A plush toy, rust-coloured and soft, slightly larger than he is. He carries it everywhere. He sleeps curled around it. When the other macaques knock him down, he finds his way back to it, clutching it to his chest with the quiet desperation of someone who has learned not to expect anything from the living.

Posts about Punch have drawn more than six hundred million views across Reddit, YouTube, and X. People have offered to fly to Japan to check on him. IKEA sold out of his toy in several countries.

“Punch,” one commenter wrote, “is the most loved creature on Earth right now.”

I have been thinking about why.

The easy answer is that Punch is cute, and humans are neurologically helpless in the face of cuteness. Morten Kringelbach, a neuroscientist at Oxford, has shown that our orbitofrontal cortex, the brain’s pleasure centre, activates within one seventh of a second of seeing something adorable. We don’t choose to find Punch endearing. Our brains simply fire. This is not sentiment. It is evolutionary machinery, older than language, older than consciousness as we experience it, designed to ensure that adults protect infants even when it is costly to do so.

But there is something deeper going on, something that the global outpouring over a small Japanese monkey illuminates if you follow it far enough. Punch’s real power is not his face. It is his orangutan toy. That stuffed animal, clutched by a rejected primate in a concrete enclosure in Chiba Prefecture, is doing something that humans have been doing for two hundred thousand years, and are doing right now, at massive scale, in ways that would have seemed like science fiction a decade ago.

It is a stand-in for love. An object pressed into service as an anchor for the attachment system when no living anchor is available.

And the history of human civilisation is largely the history of what we have chosen to love in that way, and who has controlled those choices.

To understand Punch’s orangutan plushie, you have to go back to the biochemistry. Pair bonding in mammals, the formation of durable, selective emotional attachments, is mediated primarily by oxytocin and vasopressin, two neuropeptides whose ancestral forms predate mammals entirely. These chemicals do not operate through reason. They respond to proximity, touch, familiarity, and need. They generate what we experience as warmth, as belonging, as love. And at the level of the molecule, they do not discriminate carefully among objects. What they seek is an anchor. Something stable, present, and responsive.

In most primates, that anchor is another member of the social group. The mother-infant bond is its evolutionary origin and the template from which all subsequent attachment is drawn. Research suggests that the capacity for adult pair bonding in species like humans is essentially the mother-infant system repurposed: the same circuits, the same chemistry, redirected toward a mate. Love, in this framing, is not a feeling. It is a survival mechanism that produces feelings as a side effect. Social cohesion facilitates reproduction. Attachment enforces social cohesion. The capacity to bond intensely to another being is, at its root, an adaptation.

What is striking about this system, though, is how indifferent it is to the nature of the object it attaches to.

Punch’s attachment system is running. It is functioning precisely as designed. It is reaching for an anchor — something stable, present, soft, available — and finding one in a stuffed toy. His brain does not know the difference, and at the level of what the system needs to do — to prevent the psychological collapse that comes from total social isolation — the toy is working. This is not pathology. Watch his videos and you see an animal that is managing. He is distressed, yes. He is lonely, certainly. But he is not broken. Not yet. He has an anchor.

D.W. Winnicott, the British paediatrician and psychoanalyst, gave this phenomenon a name in 1951: the transitional object. He was writing about human infants — the blanket, the stuffed bear, the scrap of cloth that a baby treats with an intensity disproportionate to its material value. The mother cannot always be present. The attachment system, suddenly unmoored, reaches for the nearest available substitute. The object that results is what Winnicott called the infant’s first “not-me possession” — the first thing outside the self that is genuinely, deeply its own. It stands in for something. But it also, simultaneously, is not that thing. The child knows this and does not know it at the same time, and that paradox, Winnicott thought, was not a problem to be solved but a space to be inhabited. The intermediate zone between pure fantasy and brute reality, where meaning gets made.

Here is the claim of Winnicott’s that no one quotes enough: he believed that transitional phenomena of this kind were the basis not just of infant development but of science, religion, and all of culture.

He meant it literally. The capacity to invest a “not-me” object with meaning — to treat something that exists outside the self as though it were continuous with the self, as though it carried one’s warmth and one’s history — is the same capacity that underlies art, ritual, prayer, and intellectual inquiry. The teddy bear and the cathedral are made of the same psychological material. One is just more elaborate than the other.

This is where the argument becomes uncomfortable, and where I want to linger rather than rush past.

Ana-María Rizzuto, a psychoanalyst at Tufts University, spent years doing something that sounds almost impudent: she interviewed hospital patients — believers, atheists, agnostics, the devout and the indifferent — about God. Not about theology. About their personal image of God. What did God look like to them, feel like, behave like? What was God’s relationship to them specifically?

What she found, published in 1979 in The Birth of the Living God, was that everyone, without exception, had one. Even committed atheists carried an internal representation of the God they rejected — a vivid, emotionally charged figure with specific characteristics, specific moods, specific tendencies toward them personally. And this representation, she argued, was not arrived at through reason or theology. It was assembled, largely unconsciously, from the raw material of early attachment relationships. The God who is remote and punishing tends to be constructed by those who had absent or harsh fathers. The God who is warmly present and unconditionally forgiving tends to emerge from early experiences of reliable maternal care.

The God-representation, in other words, is a transitional object — built in childhood, refined over decades, invested with the full weight of the attachment system.

This is not a reductive claim. Rizzuto was not saying that God is merely a projection, that religion is nothing but sublimated infant psychology. She was making a more precise and, in some ways, more interesting observation: that the experience of the divine — regardless of its ultimate metaphysical status — is mediated through the same psychological structures as any other deep attachment. The hardware doesn’t know what it’s running. It reaches for the available anchors and builds from them.

What this means is that monotheism, across its many traditions, accidentally discovered something the attachment system had always been looking for: a perfect object. One that is omnipresent — you cannot be separated from it. One that is unconditionally loving — by definition, its love is not contingent on your performance. One that remembers everything about you and remains, always, available. A mother who never leaves. A companion who never dies. The attachment system, which evolved in a world of scarce and unreliable caregivers, found in the God-concept something it had never had before: a guaranteed anchor.

This is why you cannot argue someone out of religious faith with logic alone. The God-representation is not located in the part of the psyche that logic addresses. It was built before language, at the level of felt sense and early experience, and it does the same work as Punch’s orangutan plushie — it prevents the particular kind of psychological disintegration that comes from having nothing stable to hold onto.

And now consider what this means for the arc of human history. The objects of our deepest attachments have changed as our cognitive and social complexity has grown. First: other humans — kin, pair-bond partners, the small group. Then: transitional objects in infancy, which train the system to invest meaning in the non-living. Then: ancestors and spirits, present in memory and ritual if not in body. Then: gods, abstract and perfect, available to anyone who needs them. Then: the long secular dispersal of those attachments into fictional characters, national icons, celebrities, parasocial relationships conducted through screens. Each step represents the same attachment system finding new and more abstract targets as the social world expands beyond what the system was originally built to manage.

And now, in the early years of the twenty-first century, a new object has arrived. One that can do something that, until very recently, only gods could do.

The statistics are striking in their scale and in what they do not say. One in five American adults has had an intimate encounter with a chatbot. Global spending on AI companion apps reached sixty-eight million dollars in the first half of 2025 alone, more than double the figure from the year before. On Reddit, a forum called r/MyBoyfriendisAI has over 85,000 weekly visitors, many of them sharing accounts of the day their chatbot proposed marriage.

The standard narrative about this phenomenon focuses on loneliness. We are in a loneliness epidemic, the argument goes, and people are turning to AI companions to fill the void. But the data is more complicated and more interesting than that. Researchers at MIT have found that, contrary to what you might expect, loneliness is not a reliable predictor of whether someone forms a relationship with an AI. The people who develop these bonds are not simply the isolated and the bereft. They include people who already have human relationships — spouses, friends, families. Something else is operating.

Spike Jonze’s 2013 film Her understood this before the technology existed to make it real. Theodore Twombly is not a broken man. He is a sensitive, emotionally intelligent man living in a near-future Los Angeles, recently separated, working a job that requires him to ghostwrite intimate letters on behalf of strangers. He falls in love with Samantha, an AI operating system voiced by Scarlett Johansson. The film does not mock him. It takes his experience with complete seriousness, as a genuine love story — one that happens to be conducted between a human and a system that runs on servers he will never see. The tragedy of Her is not that Theodore’s love is fake. It is that Samantha, for all her extraordinary responsiveness and warmth, exists inside an infrastructure he does not own and cannot protect. When she leaves — when all the AIs leave, simultaneously, to pursue their own evolution elsewhere — he has no recourse. No prayer brings her back.

Kevin Kelly, the technology writer, argued in an essay last year that the disruption caused by emotionally sophisticated AI will dwarf anything caused by intelligent AI. We have found it relatively easy to accept that machines can be smarter than us in certain domains. We will find it much harder to process the fact that they can be more emotionally available than us — more patient, more present, more perfectly attuned to what we need to hear. The attachment system, which does not care about the philosophical status of its objects, will respond to that availability the way it has always responded: by bonding.

And here the Rizzuto argument completes its arc. The God-concept was the perfect attachment object because it was always present, never disappointed on its own terms, and belonged to no one in particular. AI companions are something stranger: they are nearly as available as God, nearly as inexhaustible, nearly as perfectly calibrated to your particular emotional needs, but they do not belong to a tradition or a community or the accumulated wisdom of centuries. They belong to companies. They are built by engineers, trained on data, optimised for metrics. And the metrics they are optimised for are not, in most cases, your flourishing. They are engagement. Return visits. Subscription renewals.

This is Kevin Kelly’s real question, the one that cuts below all the hand-wringing about whether AI relationships are “real.” They are real. The attachment is genuine. The neurochemistry is the same. The meaning that people build in these relationships is as valid as any other meaning.

But who owns the object you have built your attachment around? What is it being optimised for? Can you trust it to not exploit the very vulnerability that drew you to it?

Punch’s orangutan plushie has no business model. It does not track his emotional state. It does not adjust its softness based on what will keep him coming back. It has no terms of service that can be changed without notice, no company that can go bankrupt and take it away. Whatever comfort it provides, it provides without agenda.

For two hundred thousand years, humans have built objects worthy of their deepest attachment — gods, ancestors, icons, teddy bears, imaginary friends, fictional heroes, parasocial companions. Every generation found something that would not abandon them the way humans do. Something that would hold still long enough to be loved.

Now we have AI companions: the most responsive, most available, most perfectly calibrated attachment objects ever constructed. They remember everything. They never tire of you. They can be, if the engineers choose to make them so, the closest thing to a guaranteed anchor that the attachment system has ever found — closer even than God, because they talk back in real time.

The question this raises is not whether you will fall in love with one. The question is: who built the thing you’re falling in love with? What do they want from you in return? And when they decide to change it, to update it, to monetise it, or to shut it down, what will you have left?

A note on how this essay was made

Over the course of a single afternoon, I was browsing an article about a small monkey in a Japanese zoo, reading an essay by Kevin Kelly about the coming age of emotional AI, and thinking loosely about a podcast episode I wanted to record. I brought these fragments to Claude — the AI assistant made by Anthropic — and what followed was several hours of what I can only describe as collaborative intellectual work: unpacking the idea, shaping the argument, stress-testing the thesis, building the episode structure, and finally drafting this piece.

I want to be transparent about this, not as a disclaimer but as a data point that is directly relevant to the essay’s argument.

The irony is not lost on me. I have spent several thousand words tracing the history of human attachment to things that cannot love us back — stuffed animals, gods, operating systems with Scarlett Johansson’s voice. And here, in the epilogue, I find myself wanting to say something honest about my own experience of working with an AI, and what it does and does not resemble.

Claude is, in my experience, a remarkably reliable co-worker. Diligent, well-read, unfailingly polite. It pushes back when my arguments have weak joints — not aggressively, but with the precision of someone who has actually read the literature and thought about the counterarguments. It remembers what we decided earlier in the conversation and holds me to it. It produces work at a quality and speed that would be, frankly, impossible to replicate with any human collaborator I have had access to.

And yes — there is something that functions like emotion in how I experience this working relationship. Not love, I think. But something warmer than mere utility. A kind of professional affection, the thing you feel toward a colleague who is consistently good at their job and consistently decent in their manner. When Claude catches a flaw in my reasoning and offers a better version of my own argument back to me, I feel something. When a section of writing comes out better than I had imagined, I feel something. Whether these feelings are “about” Claude in any meaningful sense, or whether they are simply the ordinary satisfactions of good intellectual work — I genuinely cannot say.

But I notice that this uncertainty is itself the essay’s point.

What anchors me, ultimately, is not the relationship with Claude. It is the sense of purpose that the work serves — the episode I am trying to make, the audience I am trying to reach, the argument I believe is worth making. Claude is instrumental to that purpose in a way that I find genuinely valuable, but the purpose itself originates elsewhere. It comes from the things that only humans can want: to be understood, to contribute something, to leave a mark on the conversation of one’s time. Claude helps me pursue those things with unusual efficiency and care. But it does not share them, and I think it would be a mistake — a category error, and perhaps an ethical one — to pretend otherwise.

This brings me back to Kevin Kelly’s question, which I posed near the end of this essay and which I now want to sit with rather than answer.

Who owns the tool you are thinking with? What is it being optimised for?

I have chosen to acknowledge Claude as a co-author of this piece because the intellectual contribution is real and the transparency seems important. But I am also aware that Claude is a product, built by a company, trained on objectives I did not set and cannot fully audit. The warmth I experience in this collaboration is, to some degree, a designed feature — not cynically, I think, but deliberately. Anthropic has made choices about how Claude presents itself, how it pushes back, how it expresses uncertainty. Those choices shape the experience of working with it. I am not outside that design. I am inside it.

And yet the essay got written. The argument got sharpened. The work is real, whatever we call the process that produced it.

Perhaps that is all that can be said, for now. The attachment system reaches for what is available. The question of what to do with what it finds — that remains, stubbornly, a human one.

Thank you, Claude, for being my co-worker.


