In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called "An Approach to Technical AGI Safety and Security". It covers the assumptions made by the approach, as well as the types of mitigations it outlines.
Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html
Topics we discuss, and timestamps:
0:00:37 DeepMind's Approach to Technical AGI Safety and Security
0:04:29 Current paradigm continuation
0:19:13 No human ceiling
0:21:22 Uncertain timelines
0:23:36 Approximate continuity and the potential for accelerating capability improvement
0:34:29 Misuse and misalignment
0:39:34 Societal readiness
0:43:58 Misuse mitigations
0:52:57 Misalignment mitigations
1:05:20 Samuel's thinking about technical AGI safety
1:14:02 Following Samuel's work
Samuel on Twitter/X: x.com/samuelalbanie
Research we discuss:
An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849
Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462
The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/
Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499
Episode art by Hamish Doodles: hamishdoodles.com