Thought Experiments with Kush

Brain Short-Circuiting


The Pattern We Should Have Seen Coming

Our ancestors consumed somewhere between 30 teaspoons and 6 pounds of sugar annually, depending on their environment. Today, Americans average 22-32 teaspoons daily—roughly 100 pounds per year. This isn’t a failure of willpower. It’s the predictable result of engineering foods that trigger evolutionary reward systems more intensely than anything in nature ever could.

The food industry discovered how to short-circuit the biological mechanisms that kept us alive for millennia. Our brains evolved to crave sweetness because calories were scarce and obtaining them required real effort. That drive made perfect sense when finding honey meant risking bee stings and climbing trees. It makes considerably less sense when a vending machine dispenses 400 calories for a dollar.

We’ve seen this movie before. Multiple times. And we’re watching it again, right now, with artificial intelligence and human cognition.

The difference is that we’re living through this mismatch in real time, conducting an uncontrolled experiment on human intelligence at population scale. The stakes are higher, the effects are more subtle, and the window for conscious intervention is rapidly closing. Within a generation, we may have millions of young people who lack cognitive capacities they will never miss—because they never built them in the first place.

But here’s what makes this moment different from previous technological revolutions: we actually understand the mechanism. Neuroscience can now measure what happens when we outsource cognition. We can track attention degradation. We can document memory changes. We can quantify reasoning decline. And critically, we can identify the exact design choices that determine whether AI enhances or erodes human capability.

The central insight is deceptively simple: the same technology that can double learning outcomes can also devastate critical thinking, and everything depends on how we deploy it. This isn’t about choosing between technological progress and human flourishing. It’s about understanding evolutionary psychology well enough to achieve both.

The Anatomy of a Hijacking

Every major technological revolution follows a similar arc. We create systems that trigger evolutionary adaptations, producing outcomes that would have been advantageous in ancestral environments but prove harmful in modern contexts. The pattern is so consistent it’s almost boring—and yet we keep falling for it.

Consider fossil fuels. Over millions of years, ancient organic matter was compressed and transformed into concentrated energy reserves—coal, oil, natural gas. This process took geological time scales our minds cannot truly comprehend. Then, within the span of two centuries, we developed the technology to extract and burn these reserves, releasing in moments the energy that took eons to accumulate. We short-circuited time itself, compressing millions of years of stored sunlight into decades of explosive industrial growth. The benefits were immediate and transformative. The costs—climate disruption, ecological degradation, resource depletion—were deferred to future generations who had no voice in the transaction.

This temporal short-circuiting appears throughout technological history. Agriculture solved acute hunger but triggered our thrifty genes—the tendency to store excess energy as fat during times of abundance. This adaptation saved lives during famines. Now it drives a global obesity crisis. We collapsed the ancient cycle of scarcity and abundance into perpetual plenty, and our bodies responded exactly as evolution programmed them to.

Industrial food systems engineered supernormal stimuli: foods sweeter than any fruit, more caloric than any nut, more instantly rewarding than anything our ancestors encountered. Our bodies seek maximum calories for minimum effort. The problem isn’t us. It’s the mismatch between Paleolithic physiology and industrial food engineering.

Social media exploited our tribal psychology. We evolved in bands of 50-150 people where reputation was built through direct interaction. Now we perform for invisible audiences, comparing ourselves to millions of curated presentations while feeling increasingly isolated. The platforms are designed to maximize engagement by triggering social anxiety and status competition—adaptive responses to ancestral social dynamics that misfire catastrophically at internet scale.

Digital platforms fragmented our attention. Gloria Mark’s longitudinal research, tracking screen attention from 2004 to 2023, documents a 69% decline in attention duration: from 150 seconds in 2004 to just 47 seconds by 2021. After an interruption, returning to the original task requires an average of 25 minutes. This isn’t cognitive decline—it’s environmental design. Our attention capacity remains intact; our environments are deliberately structured to prevent sustained focus.

These revolutions share common features. Scale exceeds what our psychology can process. Supernormal stimuli trigger our evolved responses more intensely than natural stimuli ever could. Benefits arrive immediately while costs are deferred to the future. And complexity overwhelms our intuitive cause-and-effect reasoning.

But the AI revolution is different in a crucial way: it short-circuits cognition itself. We’re not just exploiting peripheral drives like hunger or status-seeking. We’re outsourcing the core cognitive functions that define human intelligence—pattern recognition, reasoning, memory formation, creative synthesis. Every query delegated to an AI system, every decision automated by an algorithm, every creative task offloaded to generative models represents potential atrophy of irreplaceable capabilities.

Your Brain on AI: What the Neuroscience Actually Shows

The most sophisticated evidence comes from a 2025 study using electroencephalography to monitor 54 participants over four months. Researchers compared brain activity patterns across three groups: people using AI text generation, people using search engines, and people writing independently.

The results were stark. Large language model users showed the weakest brain connectivity patterns across all groups. When these participants later switched to writing independently, they exhibited reduced alpha and beta connectivity—patterns indicating cognitive under-engagement. Their brain activity scaled inversely with prior AI use: the more they had relied on AI assistance, the less neural activity they showed during independent work.

Most troublingly, 83% of AI users could not recall key points from essays they had completed minutes earlier. Not a single participant in the AI group could accurately quote their own work.

This introduces the concept of cognitive debt: deferring mental effort in the short term creates compounding long-term costs that persist even after tool use ceases. Like technical debt in software development, cognitive shortcuts create maintenance costs that accumulate over time.

Beyond this specific study, a meta-analysis of 15 studies examining 355 individuals with problematic technology use versus 363 controls found consistent reductions in gray matter in the dorsolateral prefrontal cortex, anterior cingulate cortex, and supplementary motor area—regions critical for executive function, cognitive control, and decision-making.

The hippocampus shows particular vulnerability. Groundbreaking longitudinal research tracked individuals over three years and points to causation rather than mere correlation: GPS didn’t simply attract people with poor navigation skills; greater GPS use predicted subsequent deterioration in spatial memory. Lifetime GPS experience correlated with worse spatial memory, reduced landmark encoding, and diminished cognitive mapping abilities.

The counterpoint demonstrates neuroplasticity in the opposite direction. London taxi drivers who spend years memorizing thousands of streets develop significantly larger posterior hippocampi compared to controls. A 2011 longitudinal study followed 79 aspiring taxi drivers for four years: those who successfully earned licenses showed hippocampal growth and improved memory performance, while those who failed showed no changes—strong causal evidence that intensive spatial navigation training drives hippocampal growth.

Remarkably, a 2024 study found that taxi drivers die at significantly lower rates from neurodegenerative disease—approximately 1% compared to 4% in the general population—suggesting that maintaining active spatial navigation throughout life provides neuroprotection.

The principle is clear: the same neuroplastic mechanisms that allow AI dependence to shrink cognitive capacity also allow deliberate cognitive training to enhance it. The question is which direction we’re moving.

The Astronaut’s Paradox: Why Resistance Matters

In the microgravity environment of the International Space Station, astronauts experience what might seem like liberation from one of Earth’s most constant burdens. Without gravity’s relentless pull, movement becomes effortless. Heavy objects float weightlessly. The physical strain that accompanies every terrestrial action simply disappears.

Yet this apparent freedom comes at a devastating biological cost. Without the constant resistance that gravity provides, astronauts lose 1-2% of their bone density per month—a rate roughly ten times faster than postmenopausal osteoporosis. Muscle mass atrophies rapidly, with some muscles losing up to 20% of their mass within two weeks. The heart, no longer working against gravity to pump blood upward, begins to weaken and shrink. Even the eyes change shape as fluid pressure shifts, causing vision problems that can persist long after return to Earth.

NASA’s solution is counterintuitive but essential: astronauts must exercise for approximately two hours every day using specialized equipment that simulates the resistance gravity would naturally provide. The Advanced Resistive Exercise Device uses vacuum cylinders to create up to 600 pounds of resistance. Astronauts run on treadmills while strapped down with bungee cords. They cycle on stationary bikes against calibrated resistance. They perform squats, deadlifts, and rows against loads their bodies would never naturally encounter in orbit.

This is not optional. It is survival. The price of accessing space—with all its scientific discoveries, technological advances, and expanded human horizons—is the deliberate, daily sacrifice of time and effort to maintain biological systems that evolved under gravity’s constant training load. Astronauts must artificially recreate the resistance that Earth provides for free.

The parallel to cognitive function in an AI-augmented world is profound. Our brains, like our muscles and bones, evolved under constant resistance. Every decision required mental effort. Every memory demanded encoding work. Every problem needed active reasoning. This cognitive load wasn’t a bug—it was the training stimulus that built and maintained our mental capabilities.

AI offers a kind of cognitive microgravity. Decisions can be outsourced. Memory becomes external. Reasoning is delegated to algorithms. The mental effort that shaped human intelligence across millennia suddenly becomes optional. And just as muscles atrophy in space, cognitive capabilities diminish when the resistance that built them disappears.

But here’s the crucial insight: astronauts don’t abandon space exploration because of its physiological costs. The scientific discoveries, the technological innovations, the expansion of human capability beyond our home planet—these achievements are worth the price of two hours of daily exercise. The solution isn’t to avoid space; it’s to maintain biological systems deliberately while accessing capabilities that wouldn’t otherwise be possible.

The same logic applies to AI. The question isn’t whether to use these powerful tools—that ship has sailed, and the capabilities are too valuable to abandon. The question is whether we’re willing to pay the price of cognitive maintenance: the deliberate, sometimes inconvenient practice of engaging our minds in effortful work even when AI could do it for us.

Astronaut Scott Kelly, after spending 340 days aboard the ISS, returned to Earth with vision changes, shifts in gene expression, and months of rehabilitation ahead. Asked whether the mission was worth it, he didn’t hesitate. The expansion of human knowledge and capability justified the personal cost. But he would never suggest that future astronauts skip their exercise protocols to save time.

We stand at a similar choice point. AI offers cognitive capabilities that expand what humans can accomplish—genuine augmentation of our mental reach. But accessing those capabilities while maintaining the cognitive functions that make us who we are requires deliberate resistance training for the mind. The astronaut’s two hours on the treadmill is our decision to navigate without GPS occasionally, to write drafts before consulting AI, to work through problems manually before checking algorithmic solutions.

The Reasoning Crisis Nobody’s Talking About

Perhaps most concerning is accumulating evidence of declining reasoning abilities correlated with AI tool adoption. A comprehensive 2025 study examined 666 participants across diverse age groups and found a strong negative correlation between frequent AI tool usage and critical thinking abilities (beta coefficient of -0.42). The relationship was mediated by cognitive offloading: people who delegate analytical reasoning to AI rather than engaging themselves suffer systematic impairment.

The effects were most pronounced in younger participants aged 17-25, who showed the highest AI dependence and lowest critical thinking scores. Higher education provided some protective effect but didn’t eliminate the relationship.

Another study of 319 knowledge workers found that higher confidence in generative AI was associated with less critical thinking, while participants self-reported reductions in cognitive effort when using AI assistance. A systematic review of 14 studies on AI dialogue systems in education found that approximately 69% of students exhibited increased intellectual laziness and 28% showed degraded decision-making abilities.

These aren’t abstract academic concerns. Students using large language models for writing and research showed reduced cognitive load but poorer reasoning and argumentation skills compared to traditional search methods. They focused on narrower sets of ideas, producing more biased and superficial analyses.

A longitudinal study tracking graduate students using AI writing tools over sustained periods identified three major negative effects. First, dependence led to reduced cognitive effort and creativity—students reported not thinking through ideas as thoroughly because AI processed them rapidly. Second, loss of personal writing style occurred as writing became formulaic and standardized. Third, over-reliance affected confidence and skill retention, with students describing forgetting basic capabilities and becoming unable to write confidently without AI assistance.

The pattern extends beyond students. Programmers who extensively use AI code generation tools show declining ability to debug without AI assistance, reduced capability to understand code architecture, and diminished algorithmic thinking. Medical students using AI diagnostic assistants demonstrate reduced capability to work through differential diagnoses systematically.

We may be in the early stages of a reasoning crisis analogous to the literacy crisis identified when reading comprehension scores began declining. Just as literacy requires active engagement with text rather than passive consumption, reasoning ability requires active engagement with logical problems rather than passive acceptance of AI-generated solutions.

The Augmentation Paradox: When Help Hurts and When It Helps

Here’s where the story gets interesting, because the evidence isn’t uniformly negative. A comprehensive meta-analysis examining 51 studies from late 2022 to early 2025 found that properly implemented AI produced large positive impacts on learning performance (effect size of 0.867). A randomized controlled trial demonstrated that AI tutors produced double the learning gains compared to traditional active learning methods, with students spending less time on task and achieving significantly higher scores.

These represent substantial, statistically robust effects suggesting properly designed AI can dramatically enhance learning efficiency. But the moderating factors prove critical. Effects were most stable at 4-8 week durations. Problem-based learning showed the strongest effects, while traditional instructional models showed weaker impacts. Course type mattered enormously, with strongest effects in skills development and moderate effects in STEM fields.

The negative evidence is equally compelling. A study of 494 students found AI usage negatively related to academic performance (beta coefficient of -0.104), with frequent users showing poorer grades and reduced independent problem-solving capabilities. Multiple studies documented that AI significantly reduced creative writing abilities, original thinking, and depth of analysis.

The same technology. Opposite outcomes. Everything depends on design and implementation.

The creativity research reveals this paradox most clearly. A 2024 study of 500 participants writing short stories under three conditions found that 88% of participants with AI access chose to use it, and their stories were rated as more creative, better written, and more enjoyable. The largest benefits accrued to less creative writers, demonstrating a leveling effect.

But the critical finding: AI-enabled stories were more similar to each other than human-only stories. Individual creativity increased while collective novelty decreased—a social dilemma where individuals benefit but collective innovation narrows. AI may help individuals produce better work while simultaneously reducing the diversity of human creative output at the population level.

A major 2024 meta-analysis examining 106 experiments found that on average, human-AI systems performed worse than the best of human alone or AI alone (effect size of -0.23). The critical moderator was task type: decision tasks showed negative synergy with performance losses, while creation tasks showed positive synergy with performance gains.

The pattern suggests that AI works best when augmenting human capability rather than replacing human judgment. When the human alone outperformed the AI alone, combining them created synergy; when the AI alone outperformed the human, combining them degraded performance—likely because stronger performers are better at judging when to trust the AI and when to trust their own judgment.

The Age Paradox: Technology as Medicine and Poison

The most definitive comparative research challenges simplistic narratives of technology harm. A massive 2025 meta-analysis examining over 400,000 adults (mean age approximately 69) across 57 longitudinal studies averaging 6 years found technology use associated with 58% reduced risk of cognitive impairment and 26% reduced time-dependent rates of cognitive decline. Effects remained significant after controlling for demographics, socioeconomic status, health, and cognitive reserve.

The proposed mechanism suggests technology engagement provides cognitive stimulation, social connectivity, and opportunities for continued learning—supporting a “technological reserve” hypothesis rather than digital dementia.

Yet younger populations show opposite patterns. Research comparing heavy versus light media multitaskers found heavy multitaskers performed significantly worse on sustained attention tasks, showed poorer ability to filter irrelevant information, and demonstrated reduced cognitive control. Studies found that children using digital tools more than two hours daily had lower cognitive test scores compared to lighter users.

The strongest causal evidence comes from digital detox experiments. A preregistered randomized controlled trial in 2025 blocked mobile internet for 467 participants over two weeks. Results showed improvements in sustained attention equivalent to reversing 10 years of age-related cognitive decline, measured objectively via standardized tasks. Effects on anxiety and depression were larger than typical pharmaceutical effects and comparable to therapeutic intervention outcomes.

Critically, even partial compliance showed benefits, and 91% of participants improved on at least one outcome measure. The mechanism: blocking mobile internet increased time socializing in person, exercising, spending time in nature, and improved social connectedness and self-control.

The evidence clearly demonstrates that outcomes depend on age, usage pattern, engagement type, and implementation design. Moderate, purposeful technology use by older adults provides cognitive benefits. Heavy, passive consumption by younger individuals impairs development. AI tools designed to augment human capability enhance learning. AI tools designed to replace human effort erode capacity.

The Design Principles That Make the Difference

Understanding what separates enhancement from erosion suggests clear principles for responsible AI deployment.

Human-in-the-Loop vs. AI-in-the-Loop: The critical distinction is whether humans retain decision-making authority or become rubber stamps for algorithmic outputs. Successful implementations include approval points before critical steps, the ability to edit and correct AI mistakes, review of tool calls before execution, and explicit human validation of outputs—maintaining transparency and human agency throughout.

Preserve Cognitive Struggle: The most successful educational AI implementations preserve the cognitive effort fundamental to learning. They handle initial content delivery and personalized pacing while maintaining engagement for higher-order skills. Success requires structured training, explicit learning objectives, appropriate scaffolding that gradually reduces support as competence develops, and continuous monitoring of outcomes.

Creation Over Decision: AI collaboration shows positive synergy in creation tasks but negative synergy in decision tasks. Using AI to generate initial drafts, explore possibilities, or handle routine components while humans direct creative vision and make final judgments produces better outcomes than delegating decision-making to algorithms.

Augment, Don’t Replace: The original vision of intelligence augmentation emphasized providing new operations and representations that users internalize as cognitive primitives, expanding the range of thoughts humans can think. The point is not to outsource cognition but to change the operations and representations we use to think—to change the substrate of thought itself.

Scale to Psychology: Intentionally constrain systems to scales our psychology can handle. Social platforms that prioritize depth of connection over breadth. Notification systems that batch interruptions rather than create constant distraction. Content delivery that respects human attention spans rather than exploiting them.

Temporal Friction: Introduce deliberate friction at critical decision points. Make long-term consequences feel immediate. Require explicit consideration of future costs in present decisions. Design interfaces that slow down rather than accelerate beyond human biological timescales.

Practical Cognitive Hygiene for an AI Age

Individual practice matters as much as system design. Establishing routines analogous to dental hygiene or sleep hygiene can preserve cognitive capacity while leveraging AI capabilities.

Maintain Effortful Practice: Regularly engage in tasks that AI could handle but you choose to do yourself. Navigate without GPS occasionally. Write drafts before consulting AI. Work through problems manually before checking algorithmic solutions. Like physical fitness, cognitive capacity requires regular exercise and atrophies without use.

Strategic Offloading: Distinguish between beneficial offloading (reducing unnecessary friction while preserving cognitive engagement) and harmful offloading (bypassing effortful learning). Use AI for initial research and ideation but engage deeply with synthesis and critical evaluation. Let AI handle routine components while you focus on higher-order thinking.

Digital Sabbaticals: The evidence from detox experiments is compelling. Regular periods of complete digital disconnection—even brief ones—can reverse attention degradation and reduce anxiety. The benefits appear dose-dependent, with even partial reduction showing improvements.

Conscious Context-Switching: Protect sustained attention by batching interruptions, disabling notifications during deep work, and creating environments conducive to focus. The problem isn’t that we can’t concentrate; it’s that our environments prevent it.

Metacognitive Monitoring: Develop awareness of when you’re genuinely learning versus merely consuming. Notice the difference between AI-assisted work you deeply understand and AI-generated content you merely approve. Track which uses of AI expand your capability versus which create dependence.

Generational Boundaries: The age paradox suggests different approaches for different life stages. Younger people whose cognitive systems are still developing require more protection from replacement effects. Older adults may benefit from engagement that would prove harmful to developing brains. Context matters.

The Choice We’re Making Right Now

We stand at a genuine choice point. The same neuroplastic mechanisms that allow taxi drivers to grow their hippocampi also allow AI dependence to shrink critical thinking capacity. Whether AI becomes a tool for unprecedented human flourishing or an instrument of cognitive diminishment depends entirely on deliberate choices about design, deployment, regulation, and individual practice.

The science is remarkably clear. Properly designed AI augmentation can double learning outcomes. Digital detox can reverse a decade of attention decline. Technology use in older adults is associated with a 58% lower risk of cognitive impairment. Conversely, heavy AI dependence dramatically reduces critical thinking. Unguided AI use in education lowers academic performance. GPS dependence erodes spatial memory and the hippocampal systems that support it.

The outcomes diverge completely based on how we design and deploy these technologies. This isn’t speculation. It’s measured, replicated, documented across dozens of studies with hundreds of thousands of participants.

The question is whether we will act on this knowledge before a generation grows up having never experienced sustained attention, spatial navigation without digital assistance, writing without AI augmentation, or problem-solving without algorithmic help—never knowing the cognitive capacities they’ve lost because they never developed them in the first place.

Social media showed us what happens when we scale social interaction beyond what tribal psychology can handle. We got an epidemic of anxiety, depression, and political polarization because we couldn’t resist maximizing engagement through manufactured outrage. We could have designed platforms that fostered genuine connection rather than parasocial performance. We largely didn’t.

Fossil fuels showed us what happens when we short-circuit geological time scales, extracting in decades what took millions of years to accumulate. We got unprecedented industrial growth—and an uncontrolled experiment on planetary climate systems with our children’s futures as the stakes. We could have developed these resources more gradually, with greater consideration for long-term consequences. We didn’t.

The AI revolution offers something previous revolutions didn’t: advance warning. We understand the mechanism. We can measure the effects in real-time. We know exactly which design choices lead to enhancement versus erosion. We have working examples of augmentation that expands human capability rather than replacing it.

Astronauts don’t avoid space because of its physiological costs—they maintain their bodies deliberately while accessing capabilities that wouldn’t otherwise be possible. The cognitive equivalent is clear: we shouldn’t avoid AI because of its risks to mental function. We should maintain our minds deliberately while accessing capabilities that expand human potential beyond anything previously imaginable.

The great hijacking of our evolutionary systems need not be our final chapter. It could instead be the catalyst for a new kind of progress—conscious, directed, and wise. We can design technologies that work with human nature rather than exploit it. We can preserve cognitive capacities while leveraging AI capabilities. We can choose augmentation over replacement, enhancement over diminishment, wisdom over expedience.

Unlike our evolutionary heritage, this choice is ours to make. The science provides clear guidance. The question is whether we have the collective wisdom and institutional capacity to follow it before the window closes.

AI is hijacking our cognition. But unlike previous hijackings, we can see it happening. We understand how it works. And we know what to do about it.

The only question is whether we will.


