
Epistemic status: raising concerns, rather than stating confident conclusions.
I’m worried that a lot of work on AI safety evals matches the pattern of “Something must be done. This is something. Therefore this must be done.” Or, to put it another way: I judge eval ideas on 4 criteria, and I often see proposals which fail all 4. The criteria:
1. Possible to measure with scientific rigor.
Some things can be easily studied in a lab; others are entangled with a lot of real-world complexity. If you predict the latter (e.g. a model's economic or scientific impact) based on model-level evals, your results will often be BS.
(This is why I dislike the term “transformative AI”, by the way. Whether an AI has transformative effects on society will depend hugely on what the society is like, how the AI is deployed, etc. And that's a [...]
---
By LessWrong. Narrated by TYPE III AUDIO.
