Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI: Practical Advice for the Worried, published by Zvi on March 1, 2023 on LessWrong.
Some people (although very far from all people) are worried that AI will wipe out all value in the universe.
Some people, including some of those same people, need practical advice.
A Word On Thinking For Yourself
There are good reasons to worry about AI. This includes good reasons to worry about AI wiping out all value in the universe, or AI killing everyone, or other similar very bad outcomes.
There are also good reasons that AGI, or otherwise transformational AI, might not come to pass for a long time.
As I say in the Q&A section later, I do not consider imminent transformational AI inevitable in our lifetimes: Some combination of ‘we run out of training data and ways to improve the systems, and AI systems max out at not that much more powerful than current ones’ and ‘turns out there are regulatory and other barriers that prevent AI from impacting that much of life or the economy that much’ could mean that things during our lifetimes turn out to be not that strange. These are definitely world types my model says you should consider plausible.
There is also the highly disputed question of how likely it is that if we did create an AGI reasonably soon, it would wipe out all value in the universe. There are what I consider very good arguments that this is what happens unless we solve extremely difficult problems to prevent it, and that we are unlikely to solve those problems in time. Thus I believe this is very likely, although there are some (such as Eliezer Yudkowsky) who consider it more likely still.
That does not mean you should adopt my position, or anyone else’s position, or mostly use social cognition from those around you, on such questions, no matter what those methods would tell you. If this is something that is going to impact your major life decisions, or keep you up at night, you need to develop your own understanding and model, and decide for yourself what you predict.
Reacting Properly To Such Information is Hard
People who do react by worrying about such AI outcomes are rarely reacting about right given their beliefs. Calibration is hard.
Many effectively suppress this info, cutting the new information about the future off from the rest of their brain. They live their lives as if such risks do not exist.
There are much worse options than this. It has its advantages. It leaves value on the table, both personally and for the world. In exchange, one avoids major negative outcomes that potentially include things like missing out on the important things in life, ruining one’s financial future and bouts of existential despair.
There is also the risk of doing ill-advised, counterproductive things in the name of helping with the problem.
Remember that the default outcome for those who go to work in AI in order to help is that they end up working primarily on capabilities, making the situation worse.
That does not mean that you should not make any attempt to improve our chances. It does mean that you should consider your actions carefully when doing so, and the possibility that you are fooling yourself. Remember that you are the easiest person to fool.
While some ignore the issue, others, in various ways, dramatically overreact.
I am going to step up here and dare to answer these questions, including those added on Twitter and some raised recently in personal conversations.
Before I begin, it must be said: NONE OF THIS IS INVESTMENT ADVICE.
Overview
There is some probability that humanity will create transformational AI soon, for various definitions of soon. You can and should decide what you think that probability is, and conditional on that happening, your probability of various outcomes.
Many of these outcomes, both good and bad, will radically alter the payoffs of various life decisions you might make now. Some...