


Two weeks ago, xAI finally published its Risk Management Framework and first model card. Unfortunately, the RMF does very little to reduce risk and suggests that xAI isn't thinking seriously about catastrophic risks. (The model card and strategy for preventing misuse are disappointing but much less important, because they're mostly relevant to only a fraction of misuse risks.)
On misalignment, "Our risk acceptance criteria for system deployment is maintaining a dishonesty rate of less than 1 out of 2 on MASK. We plan to add additional thresholds tied to other benchmarks." MASK has almost nothing to do with catastrophic misalignment risk, and upfront benchmarking is not a good approach to misalignment risk. On security, "xAI has implemented appropriate information security standards sufficient to prevent its critical model information from being stolen by a motivated non-state actor." This is not credible, xAI doesn't justify it, and xAI doesn't mention future security [...]
---
Outline:
(01:24) Misalignment
(04:33) Security
(05:17) Misc/context
(06:02) Conclusion
(07:01) Appendix: Misuse (via API) [less important]
(08:07) Mitigations
(10:41) Evals
The original text contained 12 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.
