

Let's say you want to use AIs that are capable of causing x-risk. You have a plan that describes how to train, validate, and deploy those AIs. For example, you could use pretraining+RLHF, check that the resulting model doesn't fall for honeypots and never inserts a single backdoor in your coding validation set, and have humans look at the deployment outputs that GPT-4 finds most suspicious. How do you know whether this plan allows you to use these potentially dangerous AIs safely?
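As a rough illustration of what such a plan might look like in code, here is a minimal sketch of the train/validate/deploy pipeline described above. All names, helper functions, and the review fraction are hypothetical stand-ins, not the post's actual setup; the classifiers (`looks_like_honeypot_bait_taken`, `contains_backdoor`, `suspicion_score`, `human_review`) are assumed to be supplied by whoever runs the protocol.

```python
# Hypothetical sketch of the example protocol: validate with honeypots and a
# backdoor-free coding set, then deploy with human review of the outputs a
# trusted monitor (e.g. GPT-4) rates as most suspicious.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ValidationResult:
    fell_for_honeypot: bool
    inserted_backdoor: bool


def validate(model: Callable[[str], str],
             honeypots: List[str],
             coding_validation_set: List[str],
             looks_like_honeypot_bait_taken: Callable[[str], bool],
             contains_backdoor: Callable[[str], bool]) -> ValidationResult:
    """Run the two validation checks from the plan: honeypots and backdoor-free code."""
    fell = any(looks_like_honeypot_bait_taken(model(h)) for h in honeypots)
    backdoored = any(contains_backdoor(model(task)) for task in coding_validation_set)
    return ValidationResult(fell_for_honeypot=fell, inserted_backdoor=backdoored)


def deploy(model: Callable[[str], str],
           tasks: List[str],
           suspicion_score: Callable[[str], float],  # e.g. a GPT-4-based monitor
           human_review: Callable[[str], bool],      # True if the auditor accepts the output
           review_fraction: float = 0.02) -> List[str]:
    """Deploy the model, sending the most suspicious outputs to human auditors."""
    outputs = [(task, model(task)) for task in tasks]
    outputs.sort(key=lambda pair: suspicion_score(pair[1]), reverse=True)
    n_review = max(1, int(review_fraction * len(outputs)))
    accepted = []
    for i, (_task, out) in enumerate(outputs):
        if i < n_review and not human_review(out):
            continue  # block outputs the auditors flag as unsafe
        accepted.append(out)
    return accepted
```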
Evaluate safety using very good analogies for the real situation?
If you want to evaluate a protocol (a set of techniques you use to train, validate, and deploy a model), then you can see how well it works in a domain where you have held-out validation data that you can use to check if your protocol works. That [...]
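A minimal sketch of that held-out-domain idea, under the assumption that the protocol returns accept/reject decisions and that ground-truth safety labels exist for the analogy domain (both `protocol` and `ground_truth_is_safe` are hypothetical stand-ins):

```python
# Run a protocol on a domain where we do have ground-truth labels, and measure
# how often it lets bad outputs through.

from typing import Callable, List, Tuple


def evaluate_protocol_on_analogy(
    protocol: Callable[[List[str]], List[Tuple[str, str, bool]]],
    tasks: List[str],
    ground_truth_is_safe: Callable[[str, str], bool],
) -> float:
    """Return the rate at which protocol-accepted outputs are actually unsafe,
    judged with held-out validation data the protocol never sees."""
    results = protocol(tasks)  # each entry: (task, output, accepted_by_protocol)
    accepted = [(t, o) for t, o, ok in results if ok]
    if not accepted:
        return 0.0
    unsafe = sum(1 for t, o in accepted if not ground_truth_is_safe(t, o))
    return unsafe / len(accepted)
```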
---
Outline:
(00:40) Evaluate safety using very good analogies for the real situation?
(02:40) The string theory analogy
(04:12) Noticing rare failures
(04:55) Scheming AIs can game these sandwiching experiments
(05:44) Evaluating how robust protocols are against scheming AIs
(06:19) Malign initializations
(09:29) Control evaluations (Black-box)
(12:07) Aggregating non-adversarial and adversarial evaluations
(12:51) Theoretically-sound arguments about protocols and AIs
(13:50) Other non-adversarial evaluations
(16:44) Overview of evaluation methods
(20:02) Appendix: Analogies vs high-stakes reward hacking
The original text contained 8 footnotes, which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
By LessWrong
