What happens to our ability to react when we know AI might step in to save us?
In this episode of RemAIning Human, Stanford researcher Patrick Bissett delves into his new research on how AI-assisted driving impacts our "response inhibition" — our ability to stop, slow down, or otherwise react to an external stimulus while driving. Patrick's research reveals that when operating AI-assisted vehicles, we become significantly slower at responding — even when we're fully attentive and engaged.
These results challenge the assumption that any and all AI assistance is purely helpful, and call into question whether we should automatically enable AI-powered features wherever possible without first assessing their impacts on human cognition.
Patrick's research is compelling not just for its implications for how we think about AI-assisted driving, but for what it means for the other domains AI is rapidly entering — particularly AI's impacts on human agency and critical thinking.
We're entering what Patrick calls a "partially automated" world where neither humans nor AI hold complete responsibility. This transition period poses very real challenges that full automation might eventually resolve, but which we must navigate carefully in the meantime.
In this conversation, Patrick and I explore:
👉 The hidden cognitive costs of partial automation and why knowing AI might help actually impairs our own response capabilities
👉 Why the transition period demands our attention and how current AI development phases pose unique challenges
👉 The broader implications for human agency from critical thinking to navigation skills, examining what capabilities we're losing as we increasingly rely on AI assistance
👉 The creativity advantage and why human creativity and scientific inquiry remain our competitive edge
👉 The urgent need for unbiased research and why academic institutions studying AI's impact on human cognition face unprecedented threats
Patrick's research arrives at an essential moment: the federal government is trying to prevent any state-led AI legislation for the next decade, and we — as people affected by AI tools — desperately need a deeper understanding of how the technology will affect us.
In addition, academic institutions conducting this essential research, like Stanford, face significant funding threats, potentially undermining our ability to understand and navigate these transitions safely.
As we move forward, Patrick's research reminds us that preserving human agency requires intentional choices about when to engage AI assistance and when to maintain our own cognitive capabilities. The safety and wellbeing of everyone alive today depends on getting this transition right.
Have a question for me or Patrick? Let us know in the comments, or email Patrick directly.
Important links:
Preprint of the study discussed
Patrick's website