
Many people who are concerned about AI x-risk work at AI labs, in the hope of doing directly useful work, boosting a relatively responsible lab, or causing their lab to be safer on the margin.
Labs do lots of stuff that affects AI safety one way or another. Following all of this would be hard even in the best case; in practice, labs are incentivized to be misleading in both their public and internal comms, making it even harder to follow what's happening. And so people end up misinformed about what's happening, which often leads them to make suboptimal choices.
In my AI Lab Watch work, I pay attention to what AI labs do and what they should do. So I'm in a good position to inform interested but busy people.
So I'm announcing an experimental service where I provide the following:
---
First published:
Source:
Narrated by TYPE III AUDIO.
By LessWrong
