This and all episodes at: https://aiandyou.net/ .
We're talking about catastrophic risks, a topic that can be depressing for people who haven't confronted these things before, so I have had to be careful in raising it with most audiences. Yet paradoxically, the more you look at those risks, the more that effect fades, and that's a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the Global Catastrophic Risk Institute in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He's authored papers on pandemics, nuclear winter, and, notably for our show, AI.
We talk about national bias in models, coherent extrapolated volition (and what that is), the risks inherent in a world of numerous different models, and using AI itself to solve some of these problems.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.