

Associated with AI Lab Watch, I sent questionnaires to some labs a week ago (except I failed to reach Microsoft). I didn't really get any replies (one person replied in their personal capacity; this was very limited and they didn't answer any questions). Here are most of those questions, with slight edits since I shared them with the labs + similar questions condensed into the last two sections.
Lots of my questions are normal "I didn't find public info on this safety practice and I think you should explain" questions. Some are more like "it's pretty uncool that I can't find the answer to this" — like: breaking commitments, breaking not-quite-commitments and not explaining, having ambiguity around commitments, and taking credit for stuff[1] when it's very unclear that you should get credit are pretty uncool.
Anthropic[2]
Internal governance stuff (I'm personally particularly interested in these questions—I think Anthropic has [...]
---
Outline:
(00:52) Anthropic
(05:35) OpenAI
(09:28) DeepMind
(11:59) Microsoft
(13:31) Meta
(13:52) OpenAI-Microsoft relationship
(14:12) Everyone (or, like, at least three of them) [less important]
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
By LessWrong
