80,000 Hours Podcast

#28 - Owen Cotton-Barratt on why scientists should need insurance, PhD strategy & fast AI progress



A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and that of their colleagues will be in danger. But if an accident is capable of triggering a global pandemic – hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll?

In a new paper, Dr Owen Cotton-Barratt, a Research Fellow at Oxford University's Future of Humanity Institute, argues it's impossible to expect them to make the correct adjustments. If they have an accident that kills 5 people – they'll feel extremely bad. If they have an accident that kills 500 million people, they'll feel even worse – but there's no way for them to feel 100 million times worse. The brain simply doesn't work that way.

So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance.

Links to learn more, summary and full transcript.

An insurer would assess how much damage a particular project could cause, and with what likelihood. To proceed, the researcher would need to take out insurance against that predicted risk. In return, the insurer promises to pay out – potentially tens of billions of dollars – if things go really badly.

This would force researchers to think very carefully about the costs and benefits of their work – and incentivize the insurer to demand safety standards at a level that individual researchers can't be expected to impose on themselves.
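To make the pricing idea concrete, here is a minimal sketch of how such a premium might be set. The episode itself doesn't specify a pricing formula; this uses the standard actuarial notion of expected loss (probability of an accident times the damage it would cause), with a hypothetical loading factor for the insurer's costs, and entirely made-up numbers:

```python
# Hypothetical sketch of research liability insurance pricing.
# The actuarially fair premium is the expected payout: the probability
# of an accident times the damage it would cause. Real insurers add a
# "loading" on top for their own costs and risk aversion.
# All figures below are invented for illustration.

def premium(p_accident: float, damage: float, loading: float = 1.5) -> float:
    """Expected payout (p_accident * damage), scaled by a loading factor."""
    return p_accident * damage * loading

# A project with a one-in-a-million chance of causing $10 billion in damage:
cost = premium(p_accident=1e-6, damage=10e9)
print(f"${cost:,.0f}")  # → $15,000
```

The point of the scheme is visible in the arithmetic: even a tiny probability of a catastrophic payout produces a premium large enough to make the researcher, and the insurer, take the tail risk seriously.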

***Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.***

Owen is currently hiring for a selective, two-year research scholars programme at Oxford.

In this wide-ranging conversation Owen and I also discuss:

* Are academics wrong to value personal interest in a topic over its importance?
* What fraction of research has very large potential negative consequences?
* Why do we have such different reactions to situations where the risks are known and unknown?
* The downsides of waiting for tenure to do the work you think is most important.
* What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly?
* How should people balance the trade-offs between having a successful career and doing the most important work?
* Are there any blind alleys we’ve gone down when thinking about AI safety?
* Why did Owen give to an organisation whose research agenda he is skeptical of?


The 80,000 Hours Podcast is produced by Keiran Harris.


80,000 Hours Podcast, by Rob, Luisa, and the 80,000 Hours team.

Rated 4.7 (299 ratings).

