


Ryan is Co-Director of the ML Alignment Theory Scholars Program, a Board Member and Co-Founder of the London Initiative for Safe AI, and a Manifund Regrantor. Previously, he completed a PhD in Physics at the University of Queensland and ran UQ’s Effective Altruism student group for ~3 years. Ryan’s ethics are largely preference utilitarian and cosmopolitan; he is deeply concerned about near-term x-risk and safeguarding the long-term future.
By Aaron Bergman