In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored, "An Approach to Technical AGI Safety and Security". We discuss the assumptions the approach makes, as well as the types of mitigations it outlines.
Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html
Topics we discuss, and timestamps:
0:00:37 DeepMind's Approach to Technical AGI Safety and Security
0:04:29 Current paradigm continuation
0:19:13 No human ceiling
0:21:22 Uncertain timelines
0:23:36 Approximate continuity and the potential for accelerating capability improvement
0:34:29 Misuse and misalignment
0:39:34 Societal readiness
0:43:58 Misuse mitigations
0:52:57 Misalignment mitigations
1:05:20 Samuel's thinking about technical AGI safety
1:14:02 Following Samuel's work
Samuel on Twitter/X: x.com/samuelalbanie
Research we discuss:
An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849
Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462
The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/
Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499
Episode art by Hamish Doodles: hamishdoodles.com
By Daniel Filan