On the FLI Open Letter, published by Zvi on March 30, 2023 on LessWrong.
The Future of Life Institute (FLI) recently put out an open letter, calling on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
There was a great flurry of responses, across the spectrum. Many were for it. Many others were against it. Some said they signed, some said they decided not to. Some gave reasons, some did not. Some expressed concerns it would do harm, some said it would do nothing. There were some concerns about fake signatures, leading to a pause while that was addressed, which might have been related to the letter being released slightly earlier than intended.
Eliezer Yudkowsky put out quite the letter in Time magazine. In it, he says the FLI letter discussed in this post is a step in the right direction and he is glad people are signing it, but he will not sign because he does not think it goes far enough, that a 6-month pause is woefully insufficient, and he calls for... a lot more. I will address that letter more in a future post. I’m choosing to do this one first for speed premium. As much as the world is trying to stop us from saying it these days: one thing at a time.
The call is getting serious play. Here is Fox News, saying ‘Democrats and Republicans coalesce around calls to regulate AI development: “Congress has to engage.”’
As per the position he staked out a few days prior and that I respond to here, Tyler Cowen is very opposed to a pause, and wasted no time amplifying every voice available in the opposing camp, handily ensuring I did not miss any.
The structure of this post is:
I Wrote a Letter to the Postman: Reproduces the letter in full.
You Know Those are Different, Right?: Conflation of x-risk vs. safety.
The Six Month Pause: What it can and can’t do.
Engage Safety Protocols: What would be real protocols?
Burden of Proof: The letter’s threshold for approval seems hard to meet.
New Regulatory Authority: The call for one.
Overall Take: I am net happy about the letter.
Some People in Favor: A selection.
Some People in Opposition: Including their reasoning, and a compilation of the top arguments, some of which seem good and some of which seem not so good.
Conclusion: Summary and reminder about speed premium conditions.
I Wrote a Letter to the Postman
First off, let’s read the letter. It’s short, so what the hell, let’s quote the whole thing.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.
OpenAI’s recent stateme...