
This and all episodes at: https://aiandyou.net/ .
We're talking about catastrophic risks, something that can be depressing for people who haven’t confronted these things before, and so I have had to be careful in talking about those with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that’s a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the Global Catastrophic Risks Institute in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He’s authored papers on pandemics, nuclear winter, and notably for our show, AI.
We talk about national bias in models, coherent extrapolated volition – like, what is it – the risks inherent in a world of numerous different models, and using AI itself to solve some of these problems.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.