Response to Tyler Cowen’s Existential risk, AI, and the inevitable turn in human history, published by Zvi on March 28, 2023 on LessWrong.
Predictions are hard, especially about the future. On this we can all agree.
Tyler Cowen offers a post worth reading in full in which he outlines his thinking about AI and what is likely to happen in the future. I see this as essentially the application of Stubborn Attachments and its radical agnosticism to the question of AI. I see the logic in applying this to short-term AI developments the same way I would apply it to almost all historic or current technological progress. But I would not apply it to AI that passes sufficient capabilities and intelligence thresholds, which I see as fundamentally different.
I also notice a presumption that things in most scenarios will work out, and that doom depends on particular ‘distant possibilities’ that often have many logical dependencies or require a lot of things to individually go as predicted. Whereas I would say that those possibilities are not so distant or unlikely, and more importantly that the result is robust: once the intelligence and optimization pressure that matters is no longer human, most of the outcomes are existentially bad by my values, and one can reject or ignore many or most of the detailed assumptions and still see this.
My approach is to respond in-line to Tyler’s post, followed by a conclusion section that summarizes the disagreements.
In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least. For my entire life, and a bit more, there have been two essential features of the basic landscape:
1. American hegemony over much of the world, and relative physical safety for Americans.
2. An absence of truly radical technological change.
I notice I am still confused about ‘truly radical technological change’ when, in my lifetime, we went from rotary landline phones, no internet and almost no computers to a world in which most of what I and most people I know do all day involves phones, the internet and computers. How much of human history involves faster technological change than the last 50 years?
When I look at AI, however, I strongly agree that what we have experienced is not going to prepare us for what is coming, even in the most slow and incremental plausible futures that don’t involve any takeoffs or existential risks. AI will be a very different order of magnitude of speed, even if we otherwise stand still.
Unless you are very old, old enough to have taken in some of WWII, or were drafted into Korea or Vietnam, probably those features describe your entire life as well.
In other words, virtually all of us have been living in a bubble “outside of history.”
Now, circa 2023, at least one of those assumptions is going to unravel, namely #2. AI represents a truly major, transformational technological advance. Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.
#1 might unravel soon as well, depending on how Ukraine and Taiwan fare. It is fair to say we don’t know; nonetheless, #1 is also under increasing strain.
The relative physical safety we enjoy, as I see it, mostly has nothing to do with American hegemony, and everything to do with other advances, and with the absurd trade-offs we have made in the name of physical safety, to the point of letting it ruin our ability to live life and our society’s ability to do things.
When there is an exception, as there recently was, we do not handle it well.
Have we already forgotten March of 2020? How many times in history has life undergone that rapid and huge a transformation? According to GPT-4, the answer is zero. It names the Black Death, the Industrial Revolution ...