
Read the full transcript here.
Is AI that's both superintelligent and aligned even possible? Does increased intelligence necessarily entail decreased controllability? What's the difference between "safe" and "under control"? There seems to be a fundamental tension between autonomy and control, so is it conceivable that we could create superintelligent AIs that are both autonomous enough to do things that matter and also controllable enough for us to manage them? Is general intelligence needed for anything that matters? What kinds of regulations on AI might help to ensure a safe future? Should we stop working towards superintelligent AI completely? How hard would it be to implement a global ban on superintelligent AI development? What might good epistemic infrastructure look like? What's the right way to think about entropy? What kinds of questions are prediction markets best suited to answer? How can we move from having good predictions to making good decisions? Are we living in a simulation? Is it a good idea to make AI models open-source?
Anthony Aguirre is the Executive Director of the Future of Life Institute, an NGO examining the implications of transformative technologies, particularly AI. He is also the Faggin Professor of the Physics of Information at UC Santa Cruz, where his research spans foundational physics to AI policy. Aguirre co-founded Metaculus, a platform that leverages collective intelligence to forecast developments in science and technology, and the Foundational Questions Institute, which supports research in fundamental physics. He earned his PhD at Harvard University and did postdoctoral work as a member of the Institute for Advanced Study in Princeton. Learn more about him at his website, anthony-aguirre.com; follow him on X / Twitter at @anthonynaguirre; or email him at [email protected].
Further reading