I’m not 100% convinced of this, but I’m fairly convinced, and increasingly so over time. I’m hoping to start a vigorous but civilized debate. I invite you to attack my weak points and/or present counter-evidence.
My thesis is that intent alignment is basically happening, based on evidence from alignment research in the LLM era.
Introduction
The classic story about loss of control from AI is that optimization pressure on proxies will cause the AI to value things that humans don’t. (Relatedly, the AI might become a mesa-optimizer with an arbitrary goal).
But the reality that I observe is that the AIs are really nice and somewhat naive. They’re like the world's smartest 12-year-old (h/t Jenn). We apply more and more RL optimization pressure and keep getting smarter and smarter models; but they just get better at following developer intent (which most of the time, but not always, includes user intent).
Honesty and goodness in models
It's really difficult to get AIs to be dishonest or evil by prompting; you have to fine-tune them. The only scenarios of model dishonesty that we have make it fairly clear to the model that it should be lying. Consider the first example from Apollo's [...]
---
Outline:
(00:31) Introduction
(01:10) Honesty and goodness in models
(03:38) Mitigating dishonesty with probes
(04:16) Jailbreaks
(06:35) Reward hacking
(07:26) Sharp left turn? Capabilities still too low?
(08:32) Discussion. Alignment by default.
(09:49) What's next?
The original text contained 1 footnote which was omitted from this narration.
---