The Nonlinear Library

LW - A Hypothetical Takeover Scenario Twitter Poll by Zvi


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Hypothetical Takeover Scenario Twitter Poll, published by Zvi on April 24, 2023 on LessWrong.
I ran an experimental poll sequence on Twitter a few weeks back, on the subject of what would happen if a hostile ASI (Artificial Superintelligence) tried to take over the world and kill all humans, using only ordinary, already-known technology.
The hope was that exploring this would make it clearer where people’s true objections and cruxes were. Surely, at least, one could establish that once the AI was fully an agent, fully uncontrolled, had a fully hostile goal, and was definitively smarter and more capable than us and could scale, we were obviously super dead? Or, if not, one could at least figure out which wrong intuitions were blocking that realization, and figure out what to do about that.
This post is an experiment as well. I wouldn’t say the results were a full success. I would say they were interesting enough that they’re worth writing up, if only as an illustration of how I am thinking about some of these questions and potential rhetorical strategies. If you’re not interested in that, it is safe to skip this one.
Oh No
Consider the following scenario:
1. There comes into existence an ASI (Artificial Super-Intelligence), defined here roughly as an AI system that is generically smarter than us, has all the cognitive capacities and skills any human has displayed on the internet, can operate orders of magnitude faster than us, can copy itself, and is on the internet with the ability to interact and gather resources.
2. That ASI decides to do something that would kill us – it wants a configuration of atoms in the universe that doesn’t involve any humans being alive. The AI does not love you, the AI does not hate you, you are composed of atoms it can use for something else and the AI’s goal is something else. Or maybe it does want everyone dead, who knows, that is also possible.
3. The ASI kills us.
Many people, for a variety of reasons, many of which are good, do not expect step 1 to happen any time soon, or do not expect step 2 to happen any time soon conditional on step 1 happening.
This post is not about how likely it is that steps 1 and 2 will happen. It is about:
Q: If steps 1 and 2 do happen – oh no! – how likely is it under various scenarios that step 3 happens, that we all die?
A remarkably common answer to this question is no, we will be fine.
I don’t understand this. To me, you have so obviously already lost once:
The ASI is loose on the internet.
The ASI has access to enough resources that it can use them to get more.
The ASI is much faster than you and is smarter than you.
The ASI wants you to lose.
It also seems obvious to me that, in such a scenario, such an AI would be able to kill all the people if it desired to do so.
Yet people demand to know exactly how it would do this, or they won’t believe it.
Level the Playing Field a Bit
By default, in such a scenario, you should expect to lose, lose fast and lose hard, due to the ASI that is smarter and more capable than you doing some combination of:
Making itself even smarter and more capable, repeatedly, until it is a God-like AI (RSI: recursive self-improvement).
Doing things that you didn’t know how to physically do at all, because it understands physics better than you (nanotech, synthetic biology, engineered viruses, new chip designs, or perhaps something you didn’t even conceive of or think was physically possible), that upend the whole game board.
Using super-advanced manipulation techniques, like finding signals that directly hijack human brains or motor functions, or new highly effective undetectable brainwashing techniques, or anything like that.
The general version of this: something you were not smart enough to figure out.
Presumably, we can all agree that if one of these does indeed...