Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A plea for solutionism on AI safety, published by jasoncrawford on June 9, 2023 on LessWrong.
[Note for LW: This essay was written mostly for people who are inclined to downplay or dismiss AI risk, which means it has almost no intersection with this audience. Cross-posting it here anyway for feedback and in case you want to point anyone to it.]
Will AI kill us all?
This question has rapidly gone mainstream. A few months ago, it wasn’t seriously debated very far outside the rationalist community of LessWrong; now it’s reported in major media outlets including the NY Times, The Guardian, the Times of London, BBC, WIRED, Time, Fortune, U.S. News, and CNBC.
For years, the rationalists lamented that the world was neglecting the existential risk from AI, and despaired of ever convincing the mainstream of the danger. But it turns out, of course, that our culture is fully prepared to believe that technology can be dangerous. The reason AI fears didn’t go mainstream earlier wasn’t society’s optimism, but its pessimism: most people didn’t believe AI would actually work. Once there was a working demo that got sufficient publicity, it took virtually no extra convincing to get people to be worried about it.
As usual, the AI safety issue is splitting people into two camps. One is pessimistic, often to the point of fatalism or defeatism: emphasizing dangers, ignoring or downplaying benefits, calling for progress to slow or stop, and demanding regulation. The other is optimistic, often to the point of complacency: dismissing the risks, and downplaying the need for safety.
If you’re in favor of technology and progress, it is natural to react to fears of AI doom with worry, anger, or disgust. It smacks of techno-pessimism, and it could easily lead to draconian regulations that kill this technology or drastically slow it down, depriving us all of its potentially massive benefits. And so it is tempting to line up with the techno-optimists, and to focus primarily on arguing against the predictions of doom. If you feel that way, this essay is for you.
I am making a plea for solutionism on AI safety. The best path forward, both for humanity and for the political battle, is to acknowledge the risks, help to identify them, and come up with a plan to solve them. How do we develop safe AI? And how do we develop AI safely?
Let me explain why I think this makes sense even for those of us who strongly believe in progress, and secondarily why I think it’s needed in the current political environment.
Safety is a part of progress
Humanity inherited a dangerous world. We have never known safety: fire, flood, plague, famine, wind and storm, war and violence, and the like have always been with us. Mortality rates were high as far back as we can measure them. Not only was death common, it was sudden and unpredictable. A shipwreck, a bout of malaria, or a mining accident could kill you quickly, at any age.
Over the last few centuries, technology has helped make our lives more comfortable and safer. But it has also created new risks: boiler explosions, factory accidents, car and plane crashes, toxic chemicals, radiation.
When we think of the history of progress and the benefits it has brought, we should think not only of wealth measured in economic production. We should think also of the increase in health and safety.
Safety is an achievement. It is an accomplishment of progress—a triumph of reason, science, and institutions. Like the other accomplishments of progress, we should be proud of it—and we should be unsatisfied if we stall out at our current level. We should be restlessly striving for more. A world in which we continue to make progress should be not only a wealthier world, but a safer world.
We should (continue to) get more proactive about safety
Long ago, in a more laissez-faire world, technology c...