
In this episode, Justin and Nick dive into The Alignment Problem—one of the most pressing challenges in AI development. Can we ensure that AI systems align with human values and intentions? What happens when AI behavior diverges from what we expect or desire?
Drawing on real-world examples, academic research, and philosophical thought experiments, they explore the risks and opportunities AI presents. From misaligned AI causing unintended consequences to the broader existential question of intelligence in the universe, this conversation tackles the complexity of AI ethics, governance, and emergent behavior.
They also discuss historical perspectives on automation, regulatory concerns, and the possible future of AI—whether it leads to existential risk or a utopian technological renaissance.
Topics covered in this episode:
Understanding the AI Alignment Problem – Why AI alignment matters and its real-world implications.
Why Not Just ‘Pull the Plug’ on AI? – A philosophical and practical discussion.
Emergent AI & Unpredictability – How AI learns in ways we can’t always foresee.
Historical Parallels – Lessons from past industrial and technological revolutions.
The Great Filter & The Fermi Paradox – Could AI be part of humanity’s existential challenge?
The Ethics of AI Decision-Making – The real-world trolley problem and AI’s moral choices.
Can AI Ever Be Truly ‘Aligned’ with Humans? – Challenges of defining and enforcing values.
Industry & Regulation – How governments and businesses are handling AI risks.
What Happens When AI Becomes Conscious? – A preview of the next episode’s deep dive.
Books & resources mentioned:
The Alignment Problem – Brian Christian
Human Compatible – Stuart Russell
Superintelligence – Nick Bostrom
The Second Machine Age – Erik Brynjolfsson & Andrew McAfee
The End of Work – Jeremy Rifkin
The Demon in the Machine – Paul Davies
Anarchy, State, and Utopia – Robert Nozick
🔹 Nick’s Pick: Cursor – An AI-powered code editor that enhances development workflows.
🔹 Justin’s Pick: Leveraging Enterprise AI – Make use of company-approved AI tools for efficiency and insight.
In Part 2 of “The Alignment Problem”, we’ll explore:
🔹 Can an AI be truly conscious, and would that change the alignment problem?
🔹 What responsibilities would we have toward a sentient AI?
🔹 Could AI help us become better moral actors?
Visit the Emergent AI website – subscribe and stay tuned!
We want to hear your thoughts on our aligned future!
Justin’s Homepage - https://justinaharnish.com
Justin’s Substack - https://ordinaryilluminated.substack.com
Justin’s LinkedIn - https://www.linkedin.com/in/justinharnish/
Nick’s LinkedIn - https://www.linkedin.com/in/nickbaguley/
Like, Subscribe & Review on your favorite podcast platform!
Final Thought: Are we heading toward an AI utopia or existential risk? The answer may depend on how we approach alignment today.