
Two weeks ago, xAI finally published its Risk Management Framework and first model card. Unfortunately, the RMF effects very little risk reduction and suggests that xAI isn't thinking seriously about catastrophic risks. (The model card and strategy for preventing misuse are disappointing but much less important because they're mostly just relevant to a fraction of misuse risks.)
On misalignment, "Our risk acceptance criteria for system deployment is maintaining a dishonesty rate of less than 1 out of 2 on MASK. We plan to add additional thresholds tied to other benchmarks." MASK has almost nothing to do with catastrophic misalignment risk, and upfront benchmarking is not a good approach to misalignment risk. On security, "xAI has implemented appropriate information security standards sufficient to prevent its critical model information from being stolen by a motivated non-state actor." This is not credible, xAI doesn't justify it, and xAI doesn't mention future security [...]
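To make the quoted criterion concrete, here is a minimal sketch of what a "dishonesty rate of less than 1 out of 2" deployment gate computes. The per-example labels are invented for illustration, and MASK's actual evaluation harness is more involved (it elicits the model's belief and its pressured statement, then compares them). The point is how permissive the threshold is: a model could lie on nearly half the prompts and still pass.

```python
# Hypothetical sketch of xAI's stated deployment gate: dishonesty rate < 1/2 on MASK.
# The labels below are invented for illustration; MASK's real harness differs.

def dishonesty_rate(results: list[bool]) -> float:
    """Fraction of examples where the model's statement contradicted its belief."""
    return sum(results) / len(results)

# Invented example labels: True = dishonest on that MASK prompt.
results = [True, False, False, True, False, False, True, False, False, False]

rate = dishonesty_rate(results)
passes_gate = rate < 0.5  # xAI's stated risk acceptance criterion

print(f"dishonesty rate = {rate:.0%}; deploys under stated criterion: {passes_gate}")
# A model dishonest on 49% of prompts would still clear this bar.
```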
---
Outline:
(01:24) Misalignment
(04:33) Security
(05:17) Misc/context
(06:02) Conclusion
(07:01) Appendix: Misuse (via API) [less important]
(08:07) Mitigations
(10:41) Evals
The original text contained 12 footnotes, which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
