This and all episodes at: https://aiandyou.net/ .
We're talking about catastrophic risks, a subject that can be depressing for people who haven't confronted these things before, so I have had to be careful in discussing them with most audiences. Yet the paradox is that the more you look at those risks, the more that effect fades, and that's a good thing, because my guest today is someone who takes on the onerous task of thinking about, and doing something about, those risks every day. Seth Baum is the co-founder and Executive Director of the Global Catastrophic Risk Institute in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He has authored papers on pandemics, nuclear winter, and, notably for our show, AI.
We talk about national bias in models, coherent extrapolated volition (and what that is), the risks inherent in a world of numerous different models, and using AI itself to solve some of these problems.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
By aiandyou5