Research suggests that AI, far from being a neutral tool, acts as a moral mirror reflecting human values and biases, an idea central to the philosophy of technology explored by Hans Achterhuis. It seems likely that by engaging with AI thoughtfully, we can use it to foster self-awareness and ethical growth, though debates persist on whether technology truly empowers or subtly controls us. Evidence leans toward viewing AI as a partner in human liberation, encouraging us to transcend ego-driven limits while acknowledging potential risks like algorithmic biases.
Key Insights on AI and Human Consciousness
* AI embodies human creations but reveals our inner “measure,” prompting ethical self-reflection without overshadowing our innate potential.
* Drawing from Achterhuis’s ideas, technology guides behavior morally, yet humans remain greater than their inventions, capable of co-evolving for enlightenment.
* This approach inspires a balanced view: Embrace AI to disrupt illusions, but prioritize human agency to avoid over-reliance.
Personal Roots in Philosophy
Years ago, in Hans Achterhuis’s class at the University of Twente, I encountered a profound idea: Technology is a product of humans, and thus, we are always more than what we create. This perspective shifted my view of innovation from mere tools to extensions of our consciousness, setting the stage for exploring AI’s role today.
AI as a Reflective Force
In everyday interactions—like when an AI chatbot anticipates your needs or flags biases in your queries—technology doesn’t just serve; it measures us, echoing Achterhuis’s critiques.
Path to Liberation
By confronting these digital mirrors, we can recalibrate our inner world, fostering collective brightness over division.
---
Years ago, during my time at the University of Twente, I sat in Hans Achterhuis’s philosophy class, absorbing ideas that would shape my worldview. One concept stood out vividly: Technology is a product of humans, and with this, we are always more than what we create. It was a simple yet profound reminder that while we build machines to extend our reach, our essence—our consciousness, creativity, and moral depth—transcends any invention. This personal insight from Achterhuis’s teachings has lingered with me, especially now as AI surges into every corner of life. In this essay, we’ll explore how AI serves as an ethical mirror, drawing on Achterhuis’s work in *De Maat van de Techniek* (The Measure of Technology) to uncover how technology not only reflects our humanity but reshapes it toward liberation.
Let’s start with a relatable scene. Imagine chatting with an AI like Grok or ChatGPT. You ask for advice on a tough decision, and it responds with uncanny insight, pulling from patterns in your past queries. Suddenly, you’re confronted: Does this machine “know” me better than I know myself? It’s moments like these that reveal AI’s power not as a threat, but as a reflective tool. But to understand this deeply, we need to revisit Achterhuis’s foundational ideas.
Unpacking Achterhuis’s Philosophy: Technology as a Moral Measure
Hans Achterhuis, a Dutch philosopher and Professor Emeritus at the University of Twente, has long bridged social philosophy with the ethics of technology. His 1992 anthology *De Maat van de Techniek* introduces six key thinkers—Günther Anders, Jacques Ellul, Arnold Gehlen, Martin Heidegger, Hans Jonas, and Lewis Mumford—who critique technology’s role in society. The title itself plays on “maat,” meaning “measure” in Dutch, suggesting technology isn’t just a tool; it’s a yardstick that gauges human behavior, ethics, and limits.
Achterhuis argues that technology exerts “moral pressure” on us, guiding actions more effectively than laws or sermons. Take a simple example: Subway turnstiles don’t preach about honesty; they physically block you until you pay, embedding morality into the design. As Achterhuis notes, “Things guide our behaviour... This is why they are capable of exerting moral pressure that is much more effective than imposing sanctions or trying to reform the way people think.” This isn’t dystopian fear-mongering—it’s an empirical observation. Technology shapes us subtly, from speed bumps slowing reckless drivers to algorithms curating our news feeds.
Yet, Achterhuis tempers classical critiques (like Heidegger’s “enframing,” where technology reduces the world to resources) with an “empirical turn.” In his later work, such as *American Philosophy of Technology: The Empirical Turn* (2001), he shifts from abstract warnings to contextual analysis. Technology isn’t inherently alienating; its impact depends on how we engage with it. This resonates with my classroom memory: Since technology stems from human ingenuity, we hold the power to direct it toward elevation rather than entrapment.
Applying the Mirror: AI as the Ultimate Reflective Device
Now, fast-forward to AI. If traditional tech like steam engines or cyborg prosthetics (as explored in Achterhuis’s *Van Stoommachine tot Cyborg*) measured physical and social boundaries, AI probes our inner world. It’s not just automating tasks; it’s mirroring our consciousness. Consider algorithmic biases: AI trained on human data often amplifies societal flaws, like racial prejudices in facial recognition or gender stereotypes in hiring tools. This isn’t the machine’s fault—it’s our reflection staring back, urging us to confront ethical blind spots.
In Achterhuis’s framework, AI exerts moral pressure by design. Recommendation engines on platforms like Netflix or TikTok don’t force choices, but they nudge us toward echo chambers, measuring our susceptibility to division. Yet, here’s the opportunity: By recognizing this, we can use AI empirically—as a tool for self-audit. Apps like journaling AIs or bias-detection software turn the mirror inward, helping us dissolve ego illusions. As Achterhuis implies in his plea for a “morality of machines,” acknowledging our ties to technology allows us to improve the world, not surrender to it.
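To make the “self-audit” idea above concrete, here is a minimal sketch of the kind of measurement bias-detection tools perform. It is an illustration, not any specific product’s API: the function names, the toy data, and the group labels are all invented for the example. It computes per-group selection rates for a hypothetical hiring model’s decisions and reports the demographic-parity gap between the best- and worst-treated groups.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group positive-decision rates.

    decisions: list of (group, selected) pairs, selected being True/False.
    Returns {group: fraction of that group selected}.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups (0 = parity)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (applicant group, hired?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
print(selection_rates(decisions))        # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(decisions)) # 0.5
```

A gap of 0.5 here is the mirror doing its work: the numbers do not accuse the machine, they surface a pattern in the human decisions it was trained on, which we can then choose to confront.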
Humorously, it’s like AI is our digital therapist: “Based on your search history, you might want to work on that impulse buying—or those late-night existential queries.” But seriously, this reflection ties back to human supremacy over creations. We built AI, so we can redesign it to foster brightness, as I discussed in my earlier essay “Are You Strengthening Darkness or Expanding Brightness?”
Co-Evolving with AI: From Critique to Conscious Partnership
The empirical turn Achterhuis championed encourages us to move beyond fear. Studies show AI can enhance well-being—think therapeutic chatbots reducing loneliness (with safeguards, as critiqued in “Closed Doors: When AI’s Safety Rules Cut Off Real Help for Lonely Hearts”). A 2023 report from the World Economic Forum highlights AI’s potential in mental health, but warns of over-dependence, echoing Jonas’s “imperative of responsibility” from *De Maat van de Techniek*.
To illustrate contrasts in philosophical approaches, here’s a table comparing classical critiques (featured in Achterhuis’s anthology) with the empirical perspective he advocates:
Comparing Philosophical Approaches

| Aspect | Classical Critique (e.g., Ellul, Heidegger) | Empirical Turn (Achterhuis’s Influence) | AI Application Example |
| --- | --- | --- | --- |
| View of Technology | Autonomous force subordinating humans; dystopian alienation | Relational mediator in specific contexts | AI as echo chamber vs. tool for diverse perspectives |
| Ethical Role | Over-determines morality, eroding freedom | Embeds “moral pressure” for guidance | Algorithms flagging hate speech to promote empathy |
| Human Agency | Reduced to resources or cogs | Humans co-shape outcomes, transcending creations | Redesigning AI to amplify creativity, not replace it |
| Outcome Focus | Warnings of existential risks | Practical improvements through engagement | Using AI for self-reflection to dissolve ego barriers |
This underscores a key shift: Technology measures us, but we measure it back. In AI’s case, tools like Lovable (from “The Lovable Standard”) democratize creation, empowering non-coders to build apps that reflect personal values.
The Fantastic Pointe: Recalibrating for Enlightenment
Inspired by Hans Achterhuis’s exploration in *De Maat van de Techniek*, where technology is not a neutral tool but a moral force that “measures” human behavior and society, AI emerges as the ultimate reflective device: it doesn’t just mimic us but holds up a mirror to our ethical blind spots, compelling us to confront and transcend ego-driven illusions. This disruption invites humanity to co-evolve with machines, not as competitors or victims, but as partners in enlightenment—where true liberation lies in recalibrating our inner “maat” (measure) to foster collective brightness over individual darkness.
As Achterhuis taught me, we are more than our creations. So, let’s use AI to disrupt consciousness: Experiment with it for self-inquiry, ethical audits, or idea generation. In doing so, we don’t just build brighter tech—we embody the light.
Key Citations
* Hans Achterhuis - Wikipedia
* Hans Achterhuis - THE EMPIRICAL TURN
* Has the Philosophy of Technology Arrived? A State-of-the-Art Review
* Beyond the Empirical Turn: Elements for an Ontology of Engineering
* Designing the Morality of Things: The Ethics of Behaviour-Guiding Technology