Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unpicking Extinction, published by ukc10014 on December 10, 2023 on LessWrong.
TL;DR
Human extinction is trending: there has been a lot of noise, mainly on X, about the apparent complacency amongst e/acc with respect to human extinction. Extinction also feels adjacent to another view (not particular to e/acc) that 'the next step in human evolution is {AI/AGI/ASI}'. Many have pushed back robustly against the former, while the latter doesn't seem very fleshed out. I thought it useful to, briefly, gather the various positions and summarise them, hopefully not too inaccurately, and perhaps pull out some points of convergence.
This is a starting point for my own research (on de-facto extinction via evolution). There is nothing particularly new in here: see, for instance, the substantial literature in the usual fora. Thomas Moynihan's X-risk (2020) documents the history of humanity's collective realisation of civilisational fragility, while Émile P. Torres' works (discussed below) set out a possible framework for an ethics of extinction.
My bottom line is: a) the degree of badness (or goodness) of human extinction seems less obvious or self-evident than one might assume, b) what we leave behind if and when we go extinct matters, c) the timing of when this happens is important, as is d) the manner in which the last human generations live (and die).
Relevant to the seeming e/acc take (i.e. being pretty relaxed about possible human extinction): it seems clear that our default position (subject to some caveats) should be to delay extinction, on the grounds that a) it is irreversible (by definition), and b) delaying it maximises our option value over the future. In any case, the e/acc view, which seems based on something (not very articulately argued) about entropy crossed with a taste for unfettered capitalism, is hard to take seriously and might even fail on its own terms.
Varieties of extinction
The Yudkowsky position
(My take on) Eliezer's view is that he fears a misaligned AI (not necessarily superintelligent), acting largely on its own (e.g. forming goals, planning, actually effecting things in the world), could eliminate humans and perhaps all life on Earth.
This would be bad, not just for the eliminated humans or their descendants, but also for the universe-at-large, in the sense that intelligently-created complexity (of the type that humans generate) is an intrinsic good that requires no further justification. The vast majority of AI designs that Eliezer foresees would, through various chains of events, result in a universe with far less of this intrinsic good.
He spells it out here in the current e/acc context, and clarifies that his view doesn't hinge on the preservation of biological humans (this was useful to know). He has written copiously and aphoristically on this topic, for instance Value is Fragile and the Fun Theory sequence.
The Bostrom variant
Nick Bostrom's views on human extinction seem to take a more-happy-lives-are-better starting point. My possibly mistaken impression is that, like Eliezer, he seems to value things like art, creativity, love, in the specific sense that a future where they didn't exist would be a much worse one from a cosmic or species-neutral perspective.
He describes an 'uninhabited society' that is technologically advanced and builds complex structures, but that 'nevertheless lacks any type of being that is conscious or whose welfare has moral significance' (Chapter 11, p. 173 of Superintelligence (2014)). To my knowledge, he doesn't unpick what precisely about the uninhabited society would actually be bad, and for whom (possibly this is a well-understood point or a non-question in philosophy, but I'm not sure that is the case, at least judging from Benatar and Torres (see below), this paper by James Lenman, or for that matter Schopenhauer).
A more tang...