
By Spencer Greenberg · 4.8 (132 ratings)
Read the full transcript here.
Is AI that's both superintelligent and aligned even possible? Does increased intelligence necessarily entail decreased controllability? What's the difference between "safe" and "under control"? There seems to be a fundamental tension between autonomy and control, so is it conceivable that we could create superintelligent AIs that are both autonomous enough to do things that matter and also controllable enough for us to manage them? Is general intelligence needed for anything that matters? What kinds of regulations on AI might help to ensure a safe future? Should we stop working towards superintelligent AI completely? How hard would it be to implement a global ban on superintelligent AI development? What might good epistemic infrastructure look like? What's the right way to think about entropy? What kinds of questions are prediction markets best suited to answer? How can we move from having good predictions to making good decisions? Are we living in a simulation? Is it a good idea to make AI models open-source?
Anthony Aguirre is the Executive Director of the Future of Life Institute, an NGO examining the implications of transformative technologies, particularly AI. He is also the Faggin Professor of the Physics of Information at UC Santa Cruz, where his research ranges from foundational physics to AI policy. Aguirre co-founded Metaculus, a platform leveraging collective intelligence to forecast science and technology developments, and the Foundational Questions Institute, which supports fundamental physics research. He did his PhD at Harvard University and postdoctoral work as a member of the Institute for Advanced Study in Princeton. Learn more about him at his website, anthony-aguirre.com; follow him on X / Twitter at @anthonynaguirre; or email him at [email protected].
