The Nonlinear Library

EA - Two tools for rethinking existential risk by Arepo



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two tools for rethinking existential risk, published by Arepo on April 5, 2024 on The Effective Altruism Forum.
Acknowledgements
I owe thanks to Siao Si Looi, Derek Shiller, Nuño Sempere, Rudolf Ordoyne, Arvo Muñoz Morán, Justis Mills, David Manheim, Christopher Lankhof, Mohammad Ismam Huda, Uli Alskelung Von Hornbol, John Halstead, Charlie Guthmann, Vasco Grilo, Nia Jane Gardner, Michael Dickens, David Denkenberger and Agustín Covarrubias for invaluable comments and discussion on this post, the code and/or the project as a whole. Any remaining mistakes, whether existentially terminal or merely catastrophic, are all mine.
Tl;dr
I've developed two calculators designed to help longtermists estimate the likelihood of humanity achieving a secure interstellar existence after 0 or more major catastrophes. These can be used to compare an a priori estimate with a revised estimate conditioned on counterfactual events.
I hope these calculators will allow better prioritisation among longtermists, and will finally give a common currency to longtermists, collapsologists and totalising consequentialists who favour non-longtermism. This shared currency will give these groups more scope for resolving disagreements and perhaps finding moral trades.
This post explains how to use the calculators, and how to interpret their results.
Introduction
I argued earlier in this sequence that the classic concept of 'existential risk' is much too reductive. In short, by classing an event as either an existential catastrophe or not, it forces categorical reasoning onto fundamentally scalar questions of probability/credence.
As longtermists, we are supposed to focus on achieving some kind of utopic future, in which morally valuable life would inhabit much of the Virgo supercluster for billions if not trillions of years.[1] So ultimately, rather than asking whether an event will destroy '(the vast majority of) humanity's long-term potential', we should ask various related but distinct questions:
Contraction/expansion-related: What effect does the event have on the expected size of future civilisation? In practice we usually simplify this to the question of whether or not distant future civilisation will exist:
Existential security-related: What is the probability[2] that human descendants (or whatever class of life we think has value) will eventually become interstellar? But this is still a combination of two questions, the latter of which longtermists have never, to my knowledge, considered probabilistically:[3]
What is the probability that the event kills all living humans?
What effect does the event otherwise have on the probability that we eventually reach an interstellar/existentially secure state,[4] given the possibility of multiple civilisational collapses and 'reboots'? (where the first reboot is the second civilisation)
Welfare-related: How well off (according to whatever axiology one thinks best) would such life be?
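The existential security-related decomposition above can be sketched in code. This is a hedged illustration, not the post's actual calculator: the function name and all numbers are assumptions introduced here for clarity.

```python
# Illustrative sketch of the two sub-questions above (hypothetical names
# and numbers, not the post's actual parameters or estimates):
#   1. Does the event kill all living humans? (p_kills_everyone)
#   2. If not, how likely is eventual existential security, given
#      possible collapses and reboots? (p_secure_if_survivors)

def p_interstellar(p_kills_everyone: float, p_secure_if_survivors: float) -> float:
    """P(descendants eventually become interstellar) after an event."""
    return (1.0 - p_kills_everyone) * p_secure_if_survivors

# Compare an a priori estimate with a revised estimate after a catastrophe
# (all figures are made up for illustration).
baseline = p_interstellar(0.0, 0.6)     # no event
after_event = p_interstellar(0.1, 0.5)  # event: 10% extinction risk, worse recovery odds
cost = baseline - after_event           # expected loss of long-term potential
```

The point of the decomposition is that an event can reduce our prospects substantially even when its probability of killing everyone outright is low, because it also degrades the second factor.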
Reboot 1, maybe
Image credit to Yuri Shwedoff
In the last two posts I described models for longtermists to think about both elements of the existential security-related question together.[5] These fell into two groups:
a simple model of civilisational states, which treats every civilisation as having equivalent prospects to its predecessors at an equivalent technological level,
a family of more comprehensive models of civilisational states that a) capture my intuitions about how our survival prospects might change across multiple possible civilisations, b) have parameters which tie to estimates in the existing existential-risk literature (for example, the per-year and per-century risk estimates described in Michael Aird's Database of Existential Risk Estimates, or similar), and c) allow enough precision to consider catastrophes that 'only' set us back arbitrarily small amounts of time.
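The first, simple model has a clean closed form. Because every rebooted civilisation is assumed to have the same prospects as its predecessor at an equivalent technological level, the process is memoryless, and the probability of eventual success depends only on the relative sizes of the per-civilisation success and extinction probabilities. The sketch below is my own illustration of that structure, with made-up numbers, not the post's calculator.

```python
# Hypothetical sketch of the 'simple model': each civilisation either
# becomes existentially secure (p_success), goes permanently extinct
# (p_extinct), or collapses and reboots into a successor with identical
# prospects (the remaining probability mass).

def p_eventual_security(p_success: float, p_extinct: float) -> float:
    """P(some civilisation in the chain eventually becomes secure).

    Memorylessness gives P = p_success + p_reboot * P, which solves to
    the ratio below: reboots drop out, and only the odds of success
    versus terminal extinction per civilisation matter.
    """
    assert 0.0 <= p_success and 0.0 <= p_extinct
    assert 0.0 < p_success + p_extinct <= 1.0
    return p_success / (p_success + p_extinct)

# Illustrative numbers only: 20% chance per civilisation of reaching
# security, 5% chance of terminal extinction, 75% chance of a reboot.
result = p_eventual_security(0.20, 0.05)  # 0.8
```

One consequence worth noticing: under this simple model, making reboots more likely at the expense of extinction raises long-run prospects, even though it may greatly lengthen the time to security, which is one motivation for the more comprehensive models that track setbacks in time.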
Since the...

The Nonlinear Library, by The Nonlinear Fund
