


Dive into one of AI's most critical challenges - how to make machines that share our values and goals. Each episode explores different aspects of the alignment problem, from machine learning basics to cutting-edge solutions. Through expert analysis, understand how AI systems learn, why they sometimes fail to align with human values, and what we can learn from psychology about building safer AI. Whether examining algorithmic bias or exploring inverse reinforcement learning, discover why aligning AI with human values is crucial for our future.
By Future Center Ventures, Mark M. Whelan
22 ratings