
This post reflects my personal opinion and not necessarily that of other members of Apollo Research or any of the people acknowledged below. Thanks to Jarrah Bloomfield, Lucius Bushnaq, Marius Hobbhahn, Axel Højmark, and Stefan Heimersheim for comments/discussions.
I find that people in the AI/AI safety community have not considered many of the important implications that security at AI companies has for catastrophic risks.
In this post, I’ve laid out some of these implications:
AI companies are a long way from state-proof security
I’m of course not the first one to make this claim (e.g. see Aschenbrenner). But it bears repeating.
Last year [...]
---
Outline:
(00:56) AI companies are a long way from state-proof security
(05:36) Implementing state-proof security will slow down safety (and capabilities) research a lot
(08:36) Sabotage is sufficient for catastrophe
(11:24) What will happen if timelines are short?
(14:34) Security level matters, even if you're not robust to top cyber operations
(15:21) Advice to frontier AI companies
The original text contained 1 footnote which was omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.