
In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored, "An Approach to Technical AGI Safety and Security". We discuss the assumptions the approach makes, as well as the types of mitigations it outlines.
Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html
Topics we discuss, and timestamps:
0:00:37 DeepMind's Approach to Technical AGI Safety and Security
0:04:29 Current paradigm continuation
0:19:13 No human ceiling
0:21:22 Uncertain timelines
0:23:36 Approximate continuity and the potential for accelerating capability improvement
0:34:29 Misuse and misalignment
0:39:34 Societal readiness
0:43:58 Misuse mitigations
0:52:57 Misalignment mitigations
1:05:20 Samuel's thinking about technical AGI safety
1:14:02 Following Samuel's work
Samuel on Twitter/X: x.com/samuelalbanie
Research we discuss:
An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849
Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462
The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/
Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499
Episode art by Hamish Doodles: hamishdoodles.com