
In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called “An Approach to Technical AGI Safety and Security”. It covers the assumptions made by the approach, as well as the types of mitigations it outlines.
---
Outline:
(01:42) DeepMind's Approach to Technical AGI Safety and Security
(05:51) Current paradigm continuation
(20:30) No human ceiling
(22:38) Uncertain timelines
(24:47) Approximate continuity and the potential for accelerating capability improvement
(34:48) Misuse and misalignment
(39:26) Societal readiness
(43:25) Misuse mitigations
(52:00) Misalignment mitigations
(01:05:16) Samuel's thinking about technical AGI safety
(01:14:31) Following Samuel's work
---
---
Narrated by TYPE III AUDIO.
By LessWrong