
All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background.
Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update.
First, the big picture:
---
Outline:
(05:32) How it started: the original goals of RSPs
(11:25) How it's going: the good and the bad
(11:51) A note on my general orientation toward this topic
(14:56) Goal 1: forcing functions for improved risk mitigations
(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard
(18:24) A mixed success/failure story: impact on information security
(20:42) ASL-4 and ASL-5 prep: the wrong incentives
(25:00) When forcing functions do and don't work well
(27:52) Goal 2 (testbed for practices and policies that can feed into regulation)
(29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations)
(30:59) RSP v3's attempt to amplify the good and reduce the bad
(36:01) Do these benefits apply only to the most safety-oriented companies?
(37:40) A revised, but not overturned, vision for RSPs
(39:08) Q&A
(39:10) On the move away from implied unilateral commitments
(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?
(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?
(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?
(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?
(45:10) Why did you have to do this now - couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?
(46:03) Could you have drafted the new RSP, then waited until you had to invoke your escape clause and introduced it then? Or introduced the new RSP as "what we will do if we invoke our escape clause"?
(47:29) The new Risk Reports and Roadmap are nice, but couldn't you have put them out without also making the key revision of moving away from unilateral commitments?
(48:26) Why isn't a unilateral pause a good idea? It could be a big credible signal of danger, which could lead to policy action.
(49:37) Could a unilateral pause ever be a good idea? Why not commit to a unilateral pause in cases where it would be a good idea?
(50:31) Why didn't you communicate about the change differently? I'm worried that the way you framed this will cause audience X to take away message Y.
(51:53) Why don't Anthropic's and your communications about this have a more alarmed and/or disappointed vibe? I reluctantly concede that this revision makes sense on the merits, but I'm sad about it. Aren't you?
(53:19) On other components of the new RSP
(53:24) The new RSP's commitments related to competitors seem vague and weak. Could you add more and/or strengthen these? They don't seem sufficient as-is to provide strong assurance against a prisoner's-dilemma world where each relevant company wishes it could be more careful, but rushes due to pressure from others.
(55:29) Why is external review only required at an extreme capability level? Why not just require it now?
(58:06) The new commitments are mostly about Risk Reports and Roadmaps - what stops companies from just making these really perfunctory?
(59:18) Why isn't the RSP more adversarially designed such that once a company adopts it, it will improve their practices even if nobody at the company values safety at all?
(01:00:18) What are the consequences of missing your Roadmap commitments? If they aren't dire, will anyone care about them?
(01:00:29) OK, but does that apply to other companies too? How will Roadmaps force other companies to get things done?
(01:00:40) Why aren't the recommendations for industry-wide safety more specific? Why is it built around safety cases instead of ASLs with specific lists of needed risk mitigations?
(01:02:06) What is the point of making commitments if you can revise them anytime?
---
Narrated by TYPE III AUDIO.