Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eliezer Yudkowsky’s Letter in Time Magazine, published by Zvi on April 5, 2023 on LessWrong.
FLI put out an open letter calling for a six-month pause on training models more powerful than GPT-4, followed by additional precautionary steps.
Then Eliezer Yudkowsky put out a post in Time, which made it clear he did not think that letter went far enough. Eliezer instead suggests an international ban on large AI training runs to limit future capabilities advances. He lays out in stark terms our choice as he sees it: Either do what it takes to prevent such runs or face doom.
A lot of good discussion happened. A lot of people got exposed to the situation who would not otherwise have been exposed to it, all the way up to a question being asked at the White House press briefing. Also, due to a combination of the internet being the internet, the nature of the topic, and the way certain details were laid out, a lot of other discussion predictably went off the rails quickly.
If you have not yet read the post itself, I encourage you to read the whole thing, now, before proceeding. I will summarize my reading in the next section, then discuss reactions.
This post goes over:
What the Letter Actually Says. Check if your interpretation matches.
The Internet Mostly Sidesteps the Important Questions. Many did not take kindly.
What is a Call for Violence? Political power comes from the barrel of a gun.
Our Words Are Backed by Nuclear Weapons. Eliezer did not propose using nukes.
Answering Hypothetical Questions. If he doesn’t, he loses all his magic powers.
What Do I Think About Yudkowsky’s Model of AI Risk? I am less confident.
What Do I Think About Eliezer’s Proposal? Depends what you believe about risk.
What Do I Think About Eliezer’s Answers and Comms Strategies? Good question.
What the Letter Actually Says
I see this letter as a very clear, direct, well-written explanation of what Eliezer Yudkowsky actually believes will happen, which is that AI will literally kill everyone on Earth, and none of our children will get to grow up – unless action is taken to prevent it.
Eliezer also believes that the only known way that our children will grow up is if we get our collective acts together, and take actions that prevent sufficiently large and powerful AI training runs from happening.
Either you are willing to do what it takes to prevent that development, or you are not.
The only known way to do that is for governments to restrict and track GPUs and GPU clusters, including limits on GPU manufacturing and exports, since training requires large quantities of GPUs.
That requires an international agreement to restrict and track GPUs and GPU clusters, with no exceptions. Like any agreement, it would require doing what it takes to enforce it, including, if necessary, the use of force to physically prevent unacceptably large GPU clusters from existing.
We have to target training rather than deployment, because deployment does not offer any bottlenecks that we can target.
If we allow corporate AI model development and training to continue, Eliezer sees no chance there will be enough time to figure out how to have the resulting AIs not kill us. Solutions are possible, but finding them will take decades. The current cavalier willingness by corporations to gamble with all of our lives as quickly as possible would render efforts to find solutions that actually work all but impossible.
Without a solution, if we move forward, we all die.
How would we die? The example given of how this would happen is using recombinant DNA to bootstrap to post-biological molecular manufacturing. The details are not load-bearing.
These are draconian actions that come with a very high price. We would be sacrificing highly valuable technological capabilities, and risking deadly confrontations. These...