
Consider the following vignette:
It is March 2028. With its new CoCo-Q neuralese reasoning model, a frontier AI lab has managed to fully automate the process of software engineering. In AI R&D, most human engineers have lost their old jobs, and only a small number of researchers now coordinate large fleets of AI agents, each about 10x more productive than the humans it replaced. AI progress remains fast, and safety teams are scrambling to prepare alignment and control measures for the next CoCo-R model, which is projected to be at least TEDAI. The safety team has no idea whether CoCo-Q or early checkpoints of CoCo-R are scheming (because progress in interpretability and automated alignment research turns out to be disappointing), and control evaluations are becoming increasingly unreliable (because of worries that the models might be exploration hacking during red-team training). However, because so far there hasn’t been a major [...]
---
Outline:
(04:18) The central example of a mutually beneficial deal
(07:14) Why early misaligned AIs might not have very good alternatives to deals
(08:20) Early schemers' alternatives to making deals
(11:57) A wide range of possible early schemers could benefit from deals
(14:37) Credible commitments as a fundamental bottleneck
(19:17) Practicalities of making deals with early schemers in particular
(19:28) Setting up infrastructure to pay the AIs
(21:13) How do we enter into negotiations?
(22:21) Making sure the AI knows about the deal in other contexts
(23:05) Making sure the AI we make a deal with is actually able to make a deal
(24:25) Delayed adjudication (particularly of whether the AIs kept up their side of the deal)
(25:50) Next steps
The original text contained 10 footnotes, which were omitted from this narration.
---
Narrated by TYPE III AUDIO.