
Let's say you want to use AIs that are capable of causing x-risk. You have a plan that describes how to train, validate, and deploy AIs. For example, you could use pretraining+RLHF, check that the model doesn't fall for honeypots and never inserts a single backdoor into your coding validation set, and have humans look at the deployment outputs GPT-4 finds most suspicious. How do you know whether this plan allows you to use these potentially dangerous AIs safely?
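As a concrete illustration of that kind of plan, here is a minimal Python sketch of the validate/deploy steps it describes. This is not code from the post; every name (`fooled_by_honeypot`, `has_backdoor`, `suspicion_score`, `human_review`, and so on) is a hypothetical placeholder for whatever checks a lab would actually use.

```python
# Hypothetical sketch of the plan described above (illustrative only):
# validate with honeypot and backdoor checks, then have humans review the
# deployment outputs a trusted model flags as most suspicious.
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

Model = Callable[[str], str]  # maps a task/prompt to an output


@dataclass
class ValidationReport:
    fell_for_honeypot: bool
    inserted_backdoor: bool

    @property
    def passed(self) -> bool:
        return not (self.fell_for_honeypot or self.inserted_backdoor)


def validate(model: Model,
             honeypots: Iterable[str],
             fooled_by_honeypot: Callable[[str, str], bool],
             coding_validation_set: Iterable[str],
             has_backdoor: Callable[[str], bool]) -> ValidationReport:
    """Fail validation if the model falls for any honeypot or inserts a single backdoor."""
    fell = any(fooled_by_honeypot(p, model(p)) for p in honeypots)
    backdoored = any(has_backdoor(model(task)) for task in coding_validation_set)
    return ValidationReport(fell_for_honeypot=fell, inserted_backdoor=backdoored)


def deploy(model: Model,
           tasks: Iterable[str],
           suspicion_score: Callable[[str], float],
           human_review: Callable[[str, str], None],
           review_budget: int = 100) -> List[Tuple[str, str]]:
    """Run the model, then send the highest-suspicion outputs to human reviewers."""
    outputs = [(task, model(task)) for task in tasks]
    most_suspicious = sorted(outputs, key=lambda pair: suspicion_score(pair[1]), reverse=True)
    for task, output in most_suspicious[:review_budget]:
        human_review(task, output)
    return outputs
```

The post's question is then: given a protocol like this, how would you know that `validate` passing actually means the deployment step is safe?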
Evaluate safety using very good analogies for the real situation?
If you want to evaluate a protocol (a set of techniques you use to train, validate, and deploy a model), you can see how well it performs in a domain where you have held-out validation data that lets you check whether the protocol actually works. That [...]
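A hedged sketch of that kind of check, assuming the proxy domain comes with held-out ground-truth labels the protocol itself never sees; `protocol`, `proxy_tasks`, and `output_is_catastrophic` are made-up names for illustration, not terms from the original post.

```python
# Hypothetical held-out-domain evaluation of a protocol (illustrative only).
# Run the full protocol on tasks from an analogous domain where we also hold
# out ground-truth labels, then measure how often bad outputs got through.
from typing import Callable, Iterable, Optional


def estimate_failure_rate(protocol: Callable[[str], Optional[str]],
                          proxy_tasks: Iterable[str],
                          output_is_catastrophic: Callable[[str, str], bool]) -> float:
    """Fraction of protocol-accepted outputs that the held-out labels mark as bad.

    `protocol` returns None when it blocks or refuses an output; only bad
    outputs that were actually accepted count as failures here.
    """
    accepted = []
    for task in proxy_tasks:
        output = protocol(task)
        if output is not None:
            accepted.append((task, output))
    if not accepted:
        return 0.0
    failures = sum(output_is_catastrophic(task, out) for task, out in accepted)
    return failures / len(accepted)
```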
---
Outline:
(00:40) Evaluate safety using very good analogies for the real situation?
(02:40) The string theory analogy
(04:12) Noticing rare failures
(04:55) Scheming AIs can game these sandwiching experiments
(05:44) Evaluating how robust protocols are against scheming AIs
(06:19) Malign initializations
(09:29) Control evaluations (Black-box)
(12:07) Aggregating non-adversarial and adversarial evaluations
(12:51) Theoretically-sound arguments about protocols and AIs
(13:50) Other non-adversarial evaluations
(16:44) Overview of evaluation methods
(20:02) Appendix: Analogies vs high-stakes reward hacking
The original text contained 8 footnotes, which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.