Owain Evans is an AI alignment researcher and research associate at the Center for Human-Compatible AI at UC Berkeley, and is now leading a new AI safety research group.
In this episode we discuss two of his recent papers, “Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs” and “Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data”, along with some questions from Twitter.
Below are some highlighted quotes from our conversation (available on YouTube, Spotify, and Apple Podcasts). For the full context of each quote, see the accompanying transcript.
Situational Awareness
Figure 1 from Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs
Definition
"What is situational awareness? The idea is the model's kind of self-awareness, that is its knowledge of its own identity, and then [...]
---
Outline:
(00:54) Situational Awareness
(01:09) Definition
(01:56) Motivation
(02:33) On Claude 3 Opus's Insightful Answers
(03:46) What Would Saturating The Situational Awareness Benchmark Imply For Safety And Governance
(04:41) Out-of-context reasoning
(04:55) Definition
(05:20) Experimental Setup
(05:51) Difference With In-Context Learning
(06:29) Safety implications
(07:00) The Results Were Surprising
(07:23) Alignment Research Advice
(07:27) Owain's Research Process
(07:59) Interplay Between Theory and Practice
(08:33) Research Style and Background
(09:08) On Research Rigor
(09:36) On Accelerating AI Capabilities
(10:07) On Balancing Safety Benefits with Potential Risks
(10:38) On the Reception of His Work
---