80,000 Hours Podcast

#68 - Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities


You’re given a box with a set of dice in it. If you roll an even number, a person's life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it?

A committed consequentialist might say, "Sure! Free money!" But most people will think it obvious that you should say no: you'd gain only a tiny benefit in exchange for taking on moral responsibility for whether other people live or die.

And yet, according to today’s return guest, philosophy professor Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others.

Links to learn more, summary and full transcript.
Job opportunities at the Global Priorities Institute.

To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children; since a conception’s timing is sensitive to both parents’ schedules on the day it happens, that works out to roughly one timing-sensitive conception per 7,500 person-days. So if you’ve impacted at least 7,500 days of other people’s schedules then, statistically speaking, you've probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you've changed the identity of a future person.
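
A quick back-of-envelope sketch of that arithmetic (a minimal illustration using the figures quoted above, assuming impacted person-days are independent and conceptions are spread evenly across days; the constants come from this description, not from the paper itself):

```python
# Sketch of the conception-timing estimate above (illustrative only).
# Assumptions: impacted person-days are independent, and conceptions
# are spread uniformly across the days of people's lives.

LIFE_DAYS = 30_000               # average life length, in days
CHILDREN_PER_PERSON = 2          # average number of children
PARENT_DAYS_PER_CONCEPTION = 2   # each conception is sensitive to both parents' days

# Timing-sensitive "conception days" per person-day of schedule you disturb:
rate = CHILDREN_PER_PERSON * PARENT_DAYS_PER_CONCEPTION / LIFE_DAYS  # = 1/7,500

impacted_days = 7_500
print(impacted_days * rate)  # -> 1.0 expected conception whose timing you shift
```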

That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further conception events, and so on. After 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies.

As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as everyone's identity shifts because of your action, many of the 'new' people will cause crashes that wouldn't have occurred in their absence, including crashes that prematurely kill people alive today.

Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise.

So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie, worth $10. Should you do it?
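
To see where 'hundreds' can come from, here's a toy expectation using the description's own figures (tens of thousands of schedules perturbed, a 1.3% chance of dying in a car crash); the exact head count is an assumption for illustration, not a number from the paper:

```python
# Toy expectation: how many car-crash deaths get reshuffled in time if
# your drive perturbs the outcomes of tens of thousands of people.
# people_perturbed is an illustrative assumption, not a sourced figure.

P_CRASH_DEATH = 0.013       # ~1.3% of people die in car crashes
people_perturbed = 20_000   # "thousands or tens of thousands" of schedules

reshuffled = P_CRASH_DEATH * people_perturbed
print(round(reshuffled))    # -> 260: hundreds die earlier than they would have,
                            # offset in expectation by as many dying later
```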

This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers.

Because most 'non-consequentialists' endorse an act/omission distinction… [post truncated due to the platform's character limit; the full explanation continues in the linked summary and transcript above.]

So what's the best way to avoid this strange conclusion? We discuss a few options, but the most promising might bring people a lot closer to full consequentialism than is immediately apparent. In this episode Will and I also cover:

• Are we, or are we not, living in the most influential time in history?
• The culture of the effective altruism community
• Will's new lower estimate of the risk of human extinction
• Why Will is now less focused on AI
• The differences between Americans and Brits
• Why feeling guilty about characteristics you were born with is crazy
• And plenty more.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:03)
  • The paralysis argument (00:15:42)
  • The case for strong longtermism (00:55:21)
  • Longtermism for risk-averse altruists (00:58:01)
  • Are we living in the most influential time in history? (01:14:37)
  • The risk of human extinction in the next hundred years (02:15:20)
  • Implications for the effective altruism community (02:50:03)
  • Culture of the effective altruism community (03:06:28)

Producer: Keiran Harris. 
Audio mastering: Ben Cordell. 
Transcriptions: Zakee Ulhaq.
