By Justin Harnish
with Justin Harnish & Nick Baguley
In Episode 7, Justin and Nick step directly into one of the most complex frontiers in emergent AI: machine ethics — what it means for advanced AI systems to behave ethically, understand values, support human flourishing, and possibly one day feel moral weight.
This episode builds on themes from the AI Goals Forecast (AI-2027), embodied cognition, consciousness, and the hard technical realities of encoding values into agentic systems.
🔍 Episode Summary
Ethics is no longer just a philosophical debate; it is now a design constraint for powerful AI systems capable of autonomous action. Justin and Nick unpack what that shift means in practice.
They trace ethics from Aristotle to AI-2027’s goal-based architectures, to Damasio’s embodied consciousness, to Sam Harris’ view of consciousness and the illusion of self, to the hard problem of whether a machine can experience moral stakes.
🧠 Major Topics Covered

1. What Do We Mean by Ethics?
Justin and Nick begin by grounding ethics in its philosophical roots:
Ethos → virtue → flourishing.
Ethics isn’t just rule-following — it’s about character, intention, and outcomes.
They connect this to the ways AI is already making decisions in vehicles, financial systems, healthcare, and human relationships.
2. AI Goals & Corrigibility
AI-2027 outlines a hierarchy of AI goal types, from written specifications to unintended proxies to reward hacking to self-preservation drives.
Nick explains why corrigibility — the ability for AI to accept shutdown or redirection — is foundational.
Anthropic’s Constitutional AI makes an appearance as a real-world example.
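The corrigibility idea discussed above can be caricatured in a few lines of code. This is a minimal, hypothetical sketch, not anything from AI-2027 or Anthropic: the `CorrigibleAgent` class and its `step` method are invented names, and real corrigibility is a property shaped during training, not a runtime if-statement.

```python
class CorrigibleAgent:
    """Toy agent that accepts shutdown or redirection from its operator."""

    def __init__(self, goal):
        self.goal = goal      # the written specification, top of the goal hierarchy
        self.halted = False

    def step(self, override_signal=None):
        # Corrigibility caricatured: the agent defers to operator overrides
        # rather than resisting them out of a self-preservation drive.
        if override_signal == "shutdown":
            self.halted = True
            return "halted"
        if override_signal is not None:
            self.goal = override_signal   # accept redirection to a new goal
        return f"pursuing: {self.goal}"


agent = CorrigibleAgent("summarize documents")
print(agent.step())                       # pursuing: summarize documents
print(agent.step("translate documents"))  # pursuing: translate documents
print(agent.step("shutdown"))             # halted
```

The point of the sketch is the asymmetry: the agent never gets to veto the override, which is exactly the property that reward hacking and self-preservation drives tend to erode.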
3. Goals vs. Values
Justin distinguishes between goals and values.
AI may follow rules without understanding values — similar to a child with chores but no moral context.
This raises the key question:
Can a system have values without consciousness?
4. Is Consciousness Required for Ethics?
A major thread of the episode:
Is a non-conscious “zombie” AI capable of morality?
5. Embodiment & Empathy
Justin and Nick explore whether AI needs a body, or at least a simulated body, to ground empathy and moral understanding.
This touches robotics, synthetic emotions, and the debate over “felt consciousness.”
Nick highlights the massive cultural gap in AI performance.
This matters for fairness, safety, and global ethics.
A surprising turn: AI’s ability to help humans improve moral clarity.
Justin draws from Sam Harris, Joseph Goldstein, and The Moral Landscape.