
Episode 143
I spoke with Iason Gabriel about:
* Value alignment
* Technology and worldmaking
* How AI systems affect individuals and the social world
Iason is a philosopher and Senior Staff Research Scientist at Google DeepMind. His work focuses on the ethics of artificial intelligence, including questions about AI value alignment, distributive justice, language ethics and human rights.
You can find him on his website and Twitter/X.
Find me on Twitter (or LinkedIn if you want…) for updates, and reach me at [email protected] for feedback, ideas, and guest suggestions.
Outline
* (00:00) Intro
* (01:18) Iason’s intellectual development
* (04:28) Aligning language models with human values, democratic civility and agonism
* (08:20) Overlapping consensus, differing norms, procedures for identifying norms
* (13:27) Rawls’ theory of justice, the justificatory and stability problems
* (19:18) Aligning LLMs and cooperation, speech acts, justification and discourse norms, literacy
* (23:45) Actor Network Theory and alignment
* (27:25) Value alignment and Iason’s starting points
* (33:10) The Ethics of Advanced AI Assistants, AI’s impacts on social processes and users, personalization
* (37:50) AGI systems and social power
* (39:00) Displays of care and compassion, Machine Love (Joel Lehman)
* (41:30) Virtue ethics, morality and language, virtue in AI systems vs. MacIntyre’s conception in After Virtue
* (45:00) The Challenge of Value Alignment
* (45:25) Technologists as worldmakers
* (51:30) Technological determinism, collective action problems
* (55:25) Iason’s goals with his work
* (58:32) Outro
Links
Papers:
* AI, Values, and Alignment (2020)
* Aligning LMs with Human Values (2023)
* Toward a Theory of Justice for AI (2023)
* The Ethics of Advanced AI Assistants (2024)
* A matter of principle? AI alignment as the fair treatment of claims (2025)
By Daniel Bashir
