
Associated with AI Lab Watch, I sent questionnaires to some labs a week ago (except I failed to reach Microsoft). I didn't really get any replies (one person replied in their personal capacity; this was very limited and they didn't answer any questions). Here are most of those questions, with slight edits since I shared them with the labs + similar questions condensed into the last two sections.
Lots of my questions are normal "I didn't find public info on this safety practice and I think you should explain" questions. Some are more like "it's pretty uncool that I can't find the answer to this" — like: breaking commitments, breaking not-quite-commitments and not explaining, having ambiguity around commitments, and taking credit for stuff[1] when it's very unclear that you should get credit are pretty uncool.
Anthropic[2]
Internal governance stuff (I'm personally particularly interested in these questions—I think Anthropic has [...]
---
Outline:
(00:52) Anthropic
(05:35) OpenAI
(09:28) DeepMind
(11:59) Microsoft
(13:31) Meta
(13:52) OpenAI-Microsoft relationship
(14:12) Everyone (or, like, at least three of them) [less important]
The original text contained 3 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.