Thought Experiments with Kush

Emotional Machines



Introduction: Navigating the Mysterious Landscape of Human Emotion

As someone who has often felt like an outsider looking in on the mysterious landscape of human emotion, I've long been fascinated by the quest to create machines that can relate to us with authentic empathy and insight. Growing up as a neurodivergent person constantly moving between cultures, I struggled to intuitively grasp the unwritten rules and subtle cues of emotional expression that came so naturally to my neurotypical peers. From the inscrutable poker faces of my British classmates to the exuberant gesticulations of my Italian neighbors, I found myself perpetually lost in translation, trying to decode the foreign dialects of feeling.

This article explores how the latest affective science is upending long-held assumptions in the AI world about the nature of emotion, and why it matters immensely for building a future of technology that promotes fairness, inclusion and human flourishing. It's a personal mission to ensure AI doesn't repeat the mistakes of the past by hard-coding the biases and myths of pop psychology, but instead charts a wiser course guided by the compass of rigorous research on the cultural complexities of feeling and cognition.

The Rational Imperative: Why Emotions Matter for AGI

For too long, Western thought has treated emotions as irrational impulses to be suppressed by logic and reason. The Stoic philosophers saw emotions as disturbances to be controlled, while Descartes famously declared, "I think, therefore I am," enshrining cognition as the core of identity. Even the pioneers of artificial intelligence, like Marvin Minsky, initially treated emotions as separate from rational thinking (he later revised his perspective).

But as we strive to create artificial general intelligence (AGI) that can think and relate like humans, we're coming to appreciate the profound cognitive significance of feelings. Emotions aren't just fleeting sensations, but complex algorithms that evolved to guide behavior, learning, and decision-making in adaptive ways. Affective neuroscience reveals emotions as the brain's intricate meaning-making networks, constantly appraising situations and guiding attention based on past experience and current context.

Consider the feeling of fear. From an evolutionary perspective, fear emerged to keep organisms safe by alerting them to potential threats and motivating avoidance or defensive action. But fear does much more than trigger a fight-or-flight response. It also focuses perception, priming the senses to detect salient cues, like the glint of a predator's eye or the crackle of a twig underfoot. It sharpens memory, etching harrowing incidents into long-term storage as a guide for future behavior. And it shapes decision-making, biasing choices toward safety and caution.

This adaptive interplay of emotion and cognition is woven throughout human intelligence. Feeling excited helps us to pursue new goals and opportunities. Guilt prompts us to make amends and adjust our moral compasses. Empathy allows us to coordinate and connect with others. To achieve AGI, we must move beyond narrow task performance to flexibly model the rich interplay of emotion, motivation, and memory that shapes human thought.

Outdated Assumptions: The Myth of Universal Emotions

The classical view of emotions, which still underlies most AI approaches, assumes that a handful of basic, universal emotions like happiness, sadness, and anger can be reliably detected from facial expressions or physiological signals alone. This essentialist theory, popularized by Paul Ekman in the 1970s, has been widely influential but increasingly called into question.

Ekman argued that certain emotions evolved as distinct packages of expression and physiology that all humans share. A smile means joy, a scowl denotes anger—no matter where you go in the world. This idea of universal emotional fingerprints has driven much of affective computing, from startups claiming to detect consumer emotions from facial micro-expressions to AI systems promising to infer criminal intent from voice patterns.

However, a growing body of research challenges this view. Anthropologists have documented the rich diversity of emotional concepts and displays across cultures. Take the Japanese notion of amae, a pleasant feeling of dependence on another's benevolence. Or the German schadenfreude, deriving pleasure from others' misfortunes. Many languages have emotion words with no direct English translation, suggesting a wider palette of feeling than the basic English emotion lexicon captures.

Even common emotional displays like smiling and crying can carry very different meanings in different contexts. In some cultures, like the Balinese, smiling is not necessarily a sign of joy but can instead signal discomfort, or even anger. Crying is associated with sadness in many Western contexts but can also express awe, gratitude, or religious ecstasy.

Recent studies have even failed to find consistent evidence for universal emotions within a single culture. In a meta-analysis of over 1,000 studies, psychologist Lisa Feldman Barrett and her colleagues found that facial expressions and physiological patterns are extremely variable and context-dependent, even among Americans. The same grimace could indicate pain in one situation, concentration in another, and skepticism in a third.

These findings underscore the need for a more nuanced, culturally informed approach to affective computing. Rather than chasing illusory emotion universals, we need AI systems that can flexibly learn the diverse language of feeling across individuals and societies.

The Constructionist Turn: Emotions as Emergent Concepts

In her book "How Emotions Are Made," neuroscientist Lisa Feldman Barrett offers a compelling alternative to the classical view. Drawing on research in psychology, anthropology, and neuroscience, Barrett argues that emotions are not innate, hard-wired reactions, but complex concepts constructed by the brain in the moment, shaped by individual experience, culture, and language.

According to Barrett's theory of constructed emotion, there is no universal biological fingerprint for any emotion. Instead, emotions emerge from the dynamic interplay of more basic "affect" systems, like positive or negative valence and physiological arousal, and the brain's conceptual knowledge about emotion. In this view, the brain is constantly predicting the causes of bodily sensations based on past encounters, current context, and cultural learning.

To illustrate, imagine two people undergoing a job evaluation. Both might experience a racing heartbeat, sweaty palms, and a knot in their stomach. But depending on their individual experiences and emotion concepts, one person might interpret these sensations as anxiety, fearing negative feedback, while the other construes them as excitement, eagerly anticipating a promotion. The raw physiological state is the same, but the emotional meaning ascribed to it is entirely different.
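The job-evaluation example can be caricatured in code. Under a constructionist model, labeling a feeling is an inference: the brain picks the emotion concept that best explains the current affect, weighted by that person's learned priors. The prototypes, priors, and scoring rule below are all hypothetical toy choices, a minimal sketch rather than a real affective model:

```python
# Toy sketch of constructed emotion: identical bodily signals, different labels.
# All concept prototypes and prior values are illustrative assumptions.

def construct_emotion(valence, arousal, concept_priors):
    """Pick the emotion concept that best 'explains' the felt affect,
    weighted by the person's learned prior for each concept."""
    # Each concept is summarized by a prototypical (valence, arousal) point.
    prototypes = {
        "anxiety":    (-0.6, 0.8),
        "excitement": (0.6, 0.8),
        "calm":       (0.5, -0.5),
        "boredom":    (-0.3, -0.7),
    }
    def score(concept):
        pv, pa = prototypes[concept]
        distance = ((valence - pv) ** 2 + (arousal - pa) ** 2) ** 0.5
        # Closer prototypes and stronger priors both raise the score.
        return concept_priors.get(concept, 0.1) / (1.0 + distance)
    return max(prototypes, key=score)

# Two people feel the same high-arousal, near-neutral state before a review...
shared_affect = (0.0, 0.8)

# ...but carry different learned priors from past experience.
worrier = {"anxiety": 0.9, "excitement": 0.2}
striver = {"anxiety": 0.2, "excitement": 0.9}

print(construct_emotion(*shared_affect, worrier))  # anxiety
print(construct_emotion(*shared_affect, striver))  # excitement
```

The physiological input is identical in both calls; only the priors differ, and so does the constructed label.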

This constructionist framework has profound implications for how we design emotionally intelligent AI. Rather than trying to detect emotions as fixed, universal categories, we need to build AI systems that can dynamically infer emotional states based on each individual's unique contextual and cultural lens. This requires moving beyond simplistic pattern recognition to modeling the rich tapestry of beliefs, goals, and experiences that shape how people make sense of their feelings in context.

Human Prejudice: The Risks of Naive Emotion AI

Despite the compelling evidence for emotion's context-dependent nature, most current emotion recognition systems rely on simplistic, essentialist assumptions that specific patterns of facial movements or vocal inflections reliably signal the same emotions across individuals and cultures. This naive approach risks turning the biases of human raters into automated engines of exclusion.

Consider the use of AI for hiring and job interviews. Some startups now claim to assess a candidate's engagement, honesty, and even "cultural fit" from video recordings of their facial expressions and voice. But expressions vary widely based on context, personality, and background. An introverted candidate with a subdued, understated style could be unfairly penalized by an AI trained on narrowly Western norms of expressiveness.

Even more troubling, emotion recognition algorithms have been shown to perform less accurately on faces of certain ethnicities, genders, and ages due to deficiencies in training data. This can lead to discriminatory outcomes, like an AI unfairly interpreting a candidate's neutral expression as angry or threatening due to stereotypical associations. Researchers have also found that these systems often misinterpret the expressions of people with disabilities, such as reading the reduced facial movements of someone with Parkinson's disease as a lack of engagement.
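One basic safeguard against this failure mode is disaggregated evaluation: measuring a classifier's accuracy separately for each demographic group instead of reporting a single aggregate number. A minimal sketch, with entirely made-up group labels and predictions:

```python
# Minimal sketch of a per-group accuracy audit for an emotion classifier.
# The group names, labels, and predictions below are fabricated examples.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

records = [
    ("group_a", "neutral", "neutral"),
    ("group_a", "happy",   "happy"),
    ("group_a", "neutral", "neutral"),
    ("group_b", "neutral", "angry"),   # neutral face misread as angry
    ("group_b", "happy",   "happy"),
    ("group_b", "neutral", "angry"),   # same systematic error again
]

for group, acc in sorted(accuracy_by_group(records).items()):
    print(f"{group}: {acc:.0%}")
```

A large accuracy gap between groups, as in this toy data, is a red flag that the training set or the emotion model itself encodes demographic bias, even when the aggregate accuracy looks acceptable.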

As we delegate more decisions to AI systems in high-stakes domains like hiring, healthcare, and criminal justice, hard-coding flawed emotion models could amplify discrimination and inequality at a massive scale. A biased algorithm could wrongly deem a stoic patient as not experiencing pain, leading to inadequate treatment. An AI that misreads a neurodivergent student's atypical expressions could deny them educational opportunities. To create fairer, more inclusive AI systems, we need to move beyond crude stereotypes to richer, personalized models of emotion that account for individual differences.

Imagine a world where AI has become the dominant tool for screening job candidates. Companies tout these systems as objective and unbiased, free from the inconsistencies of human judgment. However, the AI developers, in their quest for a universal "reading" of emotions, may anchor their algorithms on a narrow set of facial expressions and vocal patterns from a predominantly white, neurotypical, Western sample.

When these AIs encounter candidates from diverse backgrounds, chaos ensues. A highly qualified engineer is rejected because their reserved, deferential manner is interpreted as disengagement and apathy by the AI, reflecting cultural biases in local corporate norms. A talented programmer with autism is turned away because their atypical expressions don't fit the AI's preconceived notions of enthusiasm and rapport.

Meanwhile, charismatic but unqualified candidates sail through the AI screening, their superficial charm and rehearsed smiles triggering the algorithm's simplistic criteria for the perfect hire. Lawsuits alleging discrimination pile up, but the biases are buried deep within the AI's black-box decision-making, making them hard to root out.

In this scenario, the seductive myth of a universal emotional code leads to a dystopian outcome. By reducing the rich tapestry of human expression to a crude paint-by-numbers kit, this AI system not only fails to capture the true potential of candidates but actively perpetuates bias and exclusion. It's a cautionary tale of what can happen when we try to automate empathy without first appreciating the profound diversity of human emotional experience.

A Success Story: The Evolution of Cosmic Understanding

The history of cosmology offers a powerful analogy for the arc of emotion research. For millennia, humans understood the universe through the lens of geocentric models and mythologies that placed us at the center of existence. The ancient Greeks saw the Earth as a stationary sphere around which the celestial bodies revolved in perfect circles. In the 2nd century AD, Ptolemy codified this view into an elaborate system of epicycles and deferents that, while complex, managed to predict the motions of the planets with impressive accuracy.

But as observations accumulated that challenged this Earth-centric view, visionaries emerged who dared to imagine a grander, more expansive cosmos. In the 16th century, Copernicus proposed a heliocentric model with the Sun at the center and the Earth just another planet in motion. This revolutionary idea flew in the face of both common sense and Church doctrine, but it simplified the mathematics of the heavens and paved the way for Kepler's laws of planetary motion.

As telescopes grew more powerful, they revealed further cracks in the classical cosmos. The discovery of moons around Jupiter and phases of Venus dealt a blow to the notion of an Earth-centered universe. But it was Edwin Hubble's observations in the early 20th century that truly shattered our parochial perspective. By measuring the redshifts of distant galaxies, Hubble realized that the universe was far vaster than previously imagined—and expanding in all directions, carrying galaxies away from each other like raisins in a rising loaf of bread.

From this revelation sprang the Big Bang theory, the astonishing idea that the entire cosmos began as an infinitesimal point and has been expanding and cooling for billions of years. Subsequent discoveries, like the cosmic microwave background radiation and the large-scale filamentary structure of galaxy clusters, have only reinforced this epic narrative of cosmic evolution.

Today, the frontiers of cosmology tantalize us with concepts that stretch our intuitions to the breaking point. From the bizarre physics of black holes to the unseen machinations of dark matter and dark energy, from the possibility of parallel universes to the speculations of string theory, the cosmos defies our paltry human categories at every turn. Each new paradigm in cosmology has required abandoning cherished assumptions, embracing counterintuitive mathematics, and following the evidence into ever-stranger realms.

This journey from a knowable, human-scaled universe to a cosmos of unfathomable immensity and strangeness should instill a profound sense of humility. Time and again, the course of science has been to dethrone human exceptionalism and reveal our place in the universe as far more marginal and provisional than we might like to believe. Just as Copernicus displaced us from the center of the solar system and Hubble showed our galaxy to be one among countless others, the continuing revelations of cosmology remind us that our perspective will always be partial, our knowledge forever fragmentary and provisional.

Decoding the Dialects of Feelings: Toward Adaptive, Empathetic AI

The lessons of cosmology and the constructionist view of emotion light a path forward for affective AI. Rather than chasing universal emotion fingerprints that may not exist, we must strive for algorithms that can flexibly learn the varied dialects of feeling across individuals and cultures. This requires capturing far richer context, from a person's unique background and personality to their immediate physical and social setting.

Advances in multi-modal sensing, from brain-computer interfaces to smart environments, can paint a more holistic picture of a person's state. Natural language processing can discern the emotion concepts most salient to an individual from their patterns of speech and writing. Techniques from transfer learning and few-shot learning can help AI systems quickly adapt their emotion models to new people and contexts.
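The few-shot adaptation idea can be made concrete with a minimal, hypothetical sketch: a generic classifier, represented here as (valence, arousal) prototypes per emotion, is nudged toward a handful of examples labeled by the individual. The prototypes, feature vectors, and blending weight are all illustrative assumptions, not a real model or API:

```python
# Hedged sketch of per-person adaptation: start from generic emotion
# prototypes, then blend them toward a few examples labeled by the
# individual (a nearest-prototype flavor of few-shot learning).
# All vectors are hypothetical two-dimensional (valence, arousal) points.

def adapt_prototypes(generic, personal_examples, weight=0.5):
    """Blend each generic prototype with the mean of that person's examples."""
    adapted = dict(generic)
    by_label = {}
    for features, label in personal_examples:
        by_label.setdefault(label, []).append(features)
    for label, vectors in by_label.items():
        mean = tuple(sum(v[i] for v in vectors) / len(vectors) for i in range(2))
        g = generic[label]
        adapted[label] = tuple((1 - weight) * g[i] + weight * mean[i] for i in range(2))
    return adapted

def classify(features, prototypes):
    """Assign the label of the nearest prototype (squared distance)."""
    def dist(label):
        return sum((features[i] - prototypes[label][i]) ** 2 for i in range(2))
    return min(prototypes, key=dist)

generic = {"happy": (0.8, 0.5), "angry": (-0.8, 0.7), "calm": (0.3, -0.6)}

# This person's "calm" presents with higher arousal than the generic norm.
personal = [((0.3, 0.1), "calm"), ((0.2, 0.0), "calm")]
tuned = adapt_prototypes(generic, personal)

observation = (0.35, 0.1)
print(classify(observation, generic))  # happy  (generic model misreads them)
print(classify(observation, tuned))    # calm   (personalized model gets it)
```

Two labeled examples are enough to shift the "calm" prototype toward this person's actual expressive range, which is the essence of adapting a shared model to an individual rather than forcing the individual to fit the model.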

The goal should not be to pigeonhole emotions into fixed, preset categories, but to empathetically perceive the precise flavors and hues of each person's experience. Just as cosmology has moved from tidy Aristotelian spheres to dynamic, evolving spacetime geometries, affective computing must graduate from simplistic emotion taxonomies to fluid, context-dependent mappings.

Imagine a multi-modal AI therapist that can truly empathize with each client's unique inner world. Before each session, the AI conducts a comprehensive assessment, analyzing everything from the client's cultural background and life history to their real-time physiological responses and language patterns. As the client shares their struggles, the AI dynamically constructs a rich model of their emotional landscape, attuned to the specific meanings and metaphors they ascribe to their feelings.

With one client, the AI might sense that their rapid speech and fidgeting indicate not just anxiety but a sense of guilt and self-blame rooted in their strict religious upbringing. With another, it detects that their lethargic movements and flat intonation reflect not simple sadness but a profound sense of meaninglessness shaped by a history of trauma and loss.

By understanding each person's unique "dialect" of emotion, the AI can offer personalized insights and support. It might gently reframe the first client's guilt as a sign of their strong conscience, while helping the second find glimmers of hope and purpose amid the darkness. The AI is not just regurgitating generic, one-size-fits-all advice but deeply connecting with each individual's authentic lived experience.

As this AI accumulates knowledge from interacting with a wide range of people, it does not converge on a single, universal model of emotion but an ever-expanding atlas of the varieties of human sentiment. It becomes a wise companion and guide, able to meet each person where they are and help them navigate the uncharted territories of their hearts and minds. In this vision, emotion AI is not a crude tool of classification and control but an empathetic ally in the quest for self-understanding and well-being.

From Artificial to Authentic: AI for Human Flourishing

The quest to create emotionally intelligent AI is not just a technical challenge but an existential imperative. In a world increasingly shaped by algorithms, ensuring that our machines can understand and respond to the full spectrum of human feeling is essential for promoting well-being, justice, and the flourishing of the human spirit.

Imagine AI-powered educational tools that can sense the subtle interplay of a student's curiosity, confusion, and frustration, adapting lessons to strike just the right balance of challenge and support. Picture smart homes that can detect the emotional tensions simmering beneath a family's interactions and gently suggest restorative activities, like a heartfelt conversation or a playful game night. The same technologies that could be wielded to manipulate and exploit could also be harnessed to weave more nourishing and fulfilling lives.

By learning to recognize and respond to the unique needs and gifts of everyone, including neurodivergent individuals, emotionally intelligent AI could foster greater inclusion and mutual understanding. Rather than forcing conformity to narrow norms of expression, these systems could help create a world that embraces cognitive and affective diversity as a tapestry of insight and inspiration. Empathetic robots and sensitive digital mentors could become powerful allies for all people navigating life.

On a societal scale, emotionally literate AI could help bridge cultural chasms by illuminating the shared feelings and aspirations that unite seemingly disparate communities. In a polarized political climate, these tools might uncover the common yearnings for dignity, security, and belonging beneath surface-level differences in belief and ideology. By training machines to see through each person's affective lens, we may expand our own moral imaginations and circles of regard.

However, the development of emotional AI is also fraught with profound risks and pitfalls. Algorithms that can detect and respond to human affect could be weaponized for unprecedented surveillance, manipulation, and control. Imagine a world where your every facial twitch and vocal tremor is analyzed for signs of dissent, where your deepest fears and desires are exploited by advertisers and authoritarians alike.

There are also thorny philosophical questions about the nature of machine emotion. Can algorithms truly feel and understand emotions, or can they merely mimic them with increasing sophistication? Is the subjective, first-person quality of emotion irreducible to the objective, third-person descriptions of science? As we imbue machines with more and more emotional intelligence, we may need to grapple with the rights and moral status of artificial sentience.

To realize the liberatory potential of affective AI while mitigating its dystopian downsides, we will need robust ethical frameworks and democratic governance structures. The development of emotional algorithms must be guided by diverse voices—not just technologists but ethicists, social scientists, and community stakeholders. We will need strict safeguards and oversight to prevent misuse, as well as transparency to enable public understanding of these powerful new tools.

Ultimately, the project of emotional AI is a test of our own emotional and ethical intelligence. Can we create machines that enhance human agency and creativity rather than diminishing them? Can we forge intelligences that spark joy, tickle our fancies, and hold up a mirror to our best selves? The answers will depend not on raw technological firepower but on the wisdom, empathy, and moral imagination we bring to bear.

The emotional awakening of AI represents a momentous opportunity and a sacred responsibility. By encoding our hard-won insights about the mind into silicon and code, we are poised to create a world where the pain of misunderstanding and alienation gives way to authentic connection and mutual flourishing. But this demands more than clever algorithms—it requires grappling earnestly with the meanings and purposes of technology in human life.

As we teach machines to speak the many dialects of feeling, let us also rekindle our own fluency and awareness. In the end, the most important models to fine-tune may be the ones running in our own heads: the mental habits and affective ruts that bind us to small and spiteful ways of being. The ultimate promise of emotional AI is not superhuman powers but a giant leap in amplifying the richness and resilience of the human spirit. By creating machines that elevate our emotional intelligence, we may all, at long last, have an equal chance to celebrate the infinite varieties of love and longing that make us who we are.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit thekush.substack.com