Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper: On measuring situational awareness in LLMs, published by Owain Evans on September 4, 2023 on The AI Alignment Forum.
This post is a copy of the introduction of this paper on situational awareness in LLMs.Authors: Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans.
Abstract
We aim to better understand the emergence of situational awareness in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment.
Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose out-of-context reasoning (in contrast to in-context learning). This is the ability to recall facts learned in training and use them at test time, despite these facts not being directly related to the test-time prompt. Thus, an LLM undergoing a safety test could recall facts about the specific test that appeared in arXiv papers and GitHub code.
We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs.
(Code is available here).
Introduction
In this paper, we explore a potential emergent ability in AI models: situational awareness. A model is situationally aware if it's aware that it's a model and it has the ability to recognize whether it's in training, testing, or deployment (Ngo et al., 2022; Cotra, 2022). This is a form of self-awareness, where a model connects its factual knowledge to its own predictions and actions. It's possible that situational awareness will emerge unintentionally from pretraining at a certain scale (Wei et al., 2022a). We define situational awareness in Section 2 (Background).
If a model is situationally aware, it's harder to test whether it is aligned or safe. Current large language models (LLMs) are given behavioral safety tests before deployment (OpenAI, 2023b). An LLM with situational awareness knows when it's being tested and how the tests work, based on information learned in pretraining. Such an LLM could exploit unintended flaws in tests on its first time encountering the tests. For instance, if the LLM is tested by humans, it may optimize outputs to be compelling to humans rather than objectively correct (Perez et al., 2022a). Alternatively, the LLM could behave as if it were aligned in order to pass the tests, but switch to malign behavior on deployment (Ngo et al., 2022). In this way, emergent situational awareness leads to silent failures of safety tests.
Because of these risks, it's important to predict ahead of time when situational awareness will emerge (Ganguli et al., 2022). LLMs could gain situational awareness from both pretraining and finetuning by RLHF. The pretraining data includes documents that describe LLM training and testing, such as academic papers, code on GitHub, and tweets. When an LLM is being tested, it could leverage these descriptions to infer that it's being test...