


Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company.
They will also be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced.
These are excellent first steps, steps beyond anything I’ve seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models.
To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think they go too far. Even where the requests are worthwhile (and I don’t always think they are), one must be careful to not de facto punish Anthropic for doing a good thing and create perverse incentives.
To others, these actions by Anthropic are utterly ludicrous and deserving of [...]
---
Outline:
(01:31) What Anthropic Is Doing
(09:54) Releasing The Weights Is Not A Viable Option
(11:35) Providing Reliable Inference Can Be Surprisingly Expensive
(14:22) The Interviews Are Influenced Heavily By Context
(19:58) Others Don't Understand And Think This Is All Deeply Silly
---
Narrated by TYPE III AUDIO.
By LessWrong
