
How should we think about the technical problem of building smarter-than-human AI that does what we want? When and how should AI systems defer to us? Should they have their own goals, and how should those goals be managed? In this episode, Dylan Hadfield-Menell talks about his work on assistance games, a framework that formalizes these questions. The first couple of years of my PhD program included many long conversations with Dylan that helped shape how I view AI x-risk research, so it was great to have another one in the form of a recorded interview.
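For a flavor of what "deferring to us" means in this framework, here is a tiny numerical sketch (mine, not from the episode) of the core point of "The Off-Switch Game" paper linked below: a robot that is uncertain about the human's utility for its proposed action does at least as well by deferring to a rational human as by acting immediately or shutting itself off, and that advantage shrinks as the human's decisions get noisier. The Gaussian belief and the sigmoid noise model are purely illustrative assumptions, not anything from the paper's analysis.

import numpy as np

rng = np.random.default_rng(0)

# Robot's belief about the human's utility U_a for its proposed action:
# an illustrative Gaussian belief (slightly negative mean, high variance).
samples = rng.normal(loc=-0.1, scale=1.0, size=100_000)

# Option 1: act immediately -> expected utility E[U_a].
act_now = samples.mean()

# Option 2: switch itself off -> utility 0 by definition.
switch_off = 0.0

# Option 3: defer to a rational human, who allows the action iff U_a > 0
# -> expected utility E[max(U_a, 0)], which weakly dominates both options above.
defer_rational = np.maximum(samples, 0.0).mean()

# With a noisy human who approves with probability sigmoid(U_a / temperature),
# the value of deferring shrinks as the temperature (noise) grows.
temperature = 2.0
approve_prob = 1.0 / (1.0 + np.exp(-samples / temperature))
defer_noisy = (approve_prob * samples).mean()

print(f"act now:               {act_now:+.3f}")
print(f"switch off:            {switch_off:+.3f}")
print(f"defer, rational human: {defer_rational:+.3f}")
print(f"defer, noisy human:    {defer_noisy:+.3f}")

With these illustrative numbers, deferring to the rational human gives the highest expected utility, and the noisy-human variant shows how that incentive to keep the off switch available weakens as the robot trusts the human's choices less.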
Link to the transcript: axrp.net/episode/2021/06/08/episode-8-assistance-games-dylan-hadfield-menell.html
Link to the paper "Cooperative Inverse Reinforcement Learning": arxiv.org/abs/1606.03137
Link to the paper "The Off-Switch Game": arxiv.org/abs/1611.08219
Link to the paper "Inverse Reward Design": arxiv.org/abs/1711.02827
Dylan's Twitter account: twitter.com/dhadfieldmenell
Link to apply to the MIT EECS graduate program: gradapply.mit.edu/eecs/apply/login/?next=/eecs/
Other work mentioned in the discussion:
- The original paper on inverse optimal control: asmedigitalcollection.asme.org/fluidsengineering/article-abstract/86/1/51/392203/When-Is-a-Linear-Control-System-Optimal
- Justin Fu's research on, among other things, adversarial IRL: scholar.google.com/citations?user=T9To2C0AAAAJ&hl=en&oi=ao
- Preferences implicit in the state of the world: arxiv.org/abs/1902.04198
- What are you optimizing for? Aligning recommender systems with human values: participatoryml.github.io/papers/2020/42.pdf
- The Assistive Multi-Armed Bandit: arxiv.org/abs/1901.08654
- Soares et al. on Corrigibility: openreview.net/forum?id=H1bIT1buWH
- Should Robots be Obedient?: arxiv.org/abs/1705.09990
- Rodney Brooks on the Seven Deadly Sins of Predicting the Future of AI: rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/
- Products in category theory: en.wikipedia.org/wiki/Product_(category_theory)
- AXRP Episode 7 - Side Effects with Victoria Krakovna: axrp.net/episode/2021/05/14/episode-7-side-effects-victoria-krakovna.html
- Attainable Utility Preservation: arxiv.org/abs/1902.09725
- Penalizing side effects using stepwise relative reachability: arxiv.org/abs/1806.01186
- Simplifying Reward Design through Divide-and-Conquer: arxiv.org/abs/1806.02501
- Active Inverse Reward Design: arxiv.org/abs/1809.03060
- An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning: proceedings.mlr.press/v80/malik18a.html
- Incomplete Contracting and AI Alignment: arxiv.org/abs/1804.04268
- Multi-Principal Assistance Games: arxiv.org/abs/2007.09540
- Consequences of Misaligned AI: arxiv.org/abs/2102.03896