Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The alignment stability problem, published by Seth Herd on March 26, 2023 on The AI Alignment Forum.
The community thinks a lot about how to align AGI. It thinks less about how to align AGI so that it stays aligned for the long term. In many hypothetical cases, these are one and the same thing. But for the type of AGI we're actually likely to get, I don't think they are.
Despite some optimism for aligning tool-like AGI, or at least static systems, it seems likely that we will create AGI that learns after it's deployed, and that has some amount of agency. If it does, its alignment will effectively shift, as addressed in the diamond maximizer thought experiment and elsewhere. And that's even if it doesn't deliberately change its preferences. People deliberately change their preferences sometimes, despite not having access to their own source code. So it would seem wise to think seriously and explicitly about the stability problem, even if it isn't needed for current-generation AGI research.
I've written a chapter on this, Goal changes in intelligent systems. There I laid out the problem, but I didn't really propose solutions. What follows is a summary of that article, followed by a brief discussion of the work I've been able to locate on this problem, and one direction we might go to pursue it.
Why we don't think much about alignment stability, and why we should.
Some types of AGI are self-stabilizing. A sufficiently intelligent agent will try to prevent its goals from changing, at least if it is consequentialist. That works nicely if its values are one coherent construct, such as diamond or human preferences. But humans have lots of preferences, so we may wind up with a system that must balance many goals. And if the system keeps learning after deployment, it seems likely to alter its understanding of what its goals mean. This is the thrust of the diamond maximizer problem.
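To make that concern concrete, here is a toy sketch of my own (not from the post, and with hypothetical feature names, distributions, and learning rate): a linear value model that keeps doing gradient updates after deployment, where the deployed reward signal happens to track a proxy feature rather than the intended one. The only point is that continued learning can quietly change which option the system prefers, with no deliberate goal change anywhere.

```python
# Toy sketch (illustrative only; all names and numbers are hypothetical):
# a linear value model v(x) = w . x that keeps training after deployment.
# In the deployed environment the reward signal is carried by a "proxy"
# feature, so the weights -- and hence the preferred option -- drift.

import numpy as np

rng = np.random.default_rng(0)

# Two candidate options, described by feature vectors.
options = {
    "intended": np.array([1.0, 0.0]),  # stands in for the originally aligned goal
    "proxy":    np.array([0.0, 1.0]),  # a correlated but unintended goal
}

def preferred(w):
    """Which option the current value model ranks highest."""
    return max(options, key=lambda name: w @ options[name])

# Weights as trained before deployment: the intended feature dominates.
w = np.array([1.0, 0.1])
print("before deployment:", preferred(w))   # -> intended

# Continued online learning: post-deployment, reward tracks the proxy feature
# and the intended feature no longer predicts it.
for _ in range(200):
    x = np.array([rng.normal(0.0, 0.5), rng.normal(1.0, 0.1)])
    r = x[1]                        # reward tracks the proxy feature
    grad = 2 * (w @ x - r) * x      # gradient of the squared error (w.x - r)^2
    w -= 0.05 * grad                # ordinary SGD step

print("after continued learning:", preferred(w))  # -> proxy
```

Nothing in this sketch is agentic or deceptive; the preference flips simply because online updates keep tracking whatever the deployed reward signal correlates with.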
One tricky thing about alignment work is that we're imagining different types of AGI when we talk about alignment schemes. Currently, people are thinking a lot about aligning deep networks. Current deep networks don't keep learning after they're deployed, and they're not very agentic. These are great properties for alignment, and they seem to be the source of some optimism.
Even if this type of network turns out to be really useful, and is all we need to make the world a vastly better place, I don't think we're going to stop there. Agents would seem to have capability advantages that metaphorically make tool AI want to become agentic AI. If that weren't enough, agents are cool. People are going to want to turn tool AI into agent AI just to experience the wonder of an alien intelligence with its own goals.
I think turning intelligent tools into agents is going to be relatively easy. But even if it's not easy, someone is going to manage it at some point. It's probably too difficult to prevent further experimentation, at least without a governing body, aided by AGI, that's able and willing to at minimum intercept and decrypt every communication for signs of AGI projects.
While the above logic is far from airtight, it would seem wise to think about stable alignment solutions, in advance of anyone creating AGI that continuously learns outside of close human control.
Similar concerns have been raised elsewhere, such as On how various plans miss the hard bits of the alignment challenge. Here I'm trying to crystallize and give a name to this specific hard part of the problem.
Approaches to alignment stability
Alex Turner addresses this in A shot at the diamond-alignment problem. In broad form, he proposes training the agent with RL to value diamonds, including having diamonds associated with reward across a variety of cognitive tasks. This is as good an answer as we've got. I don't have a...