Transcript of Sam Altman's interview touching on AI safety, published by Andy McKenzie on January 20, 2023 on LessWrong.
Sam Altman, CEO of OpenAI, was interviewed by Connie Loizos last week and the video was posted two days ago. Here are some AI safety-relevant parts of the discussion, with light editing by me for clarity, based on this automated transcript:
[starting in part two of the interview, which is where the discussion about AI safety is]
Connie: So moving on to AI, which is obviously where you've spent the bulk of your time since I saw you when we sat here three years ago. You were telling us what was coming, and we all thought you were being sort of hyperbolic, and you were dead serious. Why do you think ChatGPT and DALL-E surprised people so much?
Sam: I genuinely don't know. I've reflected on it a lot. We had the model for ChatGPT in the API for, I don't know, 10 months or something before we made ChatGPT. And I sort of thought someone was going to just build it, and that enough people had played around with it. But it definitely matters if you make a really good user experience on top of something. One thing that I very deeply believed was that the way people wanted to interact with these models was via dialogue. We kept telling people this, we kept trying to get people to build it, and people wouldn't quite do it. So we finally said, all right, we're just going to do it. But yeah, I think the pieces were there for a while.
One of the reasons I think DALL-E surprised people is that if you had asked five or seven years ago, the kind of ironclad wisdom on AI was that it comes first for physical labor: truck driving, working in the factory. Then the sort of less demanding cognitive labor, then the really demanding cognitive labor like computer programming, and then very last of all, or maybe never, because maybe it's some deep human special sauce, was creativity. And of course, we can look now and say it really looks like it's going to go in exactly the opposite direction. But I think that is not super intuitive, and so I can see why DALL-E surprised people. But I genuinely felt somewhat confused about why ChatGPT did.
One of the things we really believe is that the most responsible way to put this out in society is very gradually, and to get people, institutions, and policymakers familiar with it: thinking about the implications, feeling the technology, and getting a sense for what it can and can't do very early, rather than dropping a super powerful AGI on the world all at once. And so we put GPT-3 out almost three years ago, and then we put it into an API about two and a half years ago. The incremental update from that to ChatGPT, I felt, should have been predictable, and I want to do more introspection on why I was sort of miscalibrated on that.
Connie: So, you know, when you were here you talked about releasing things in a responsible way. What gave you the confidence to release what you have released already? I mean, do you think we're ready for it? Are there enough guardrails in place?
Sam: We do have an internal process where we try to break things and study impacts. We use external auditors, we have external red teamers, we work with other labs, and we have safety organizations look at stuff.
There are societal changes that ChatGPT is going to cause or is causing. There's a big one going on now about the impact of this on education, academic integrity, all of that. But I think it's good to start these conversations now, while the stakes are still relatively low, rather than just putting out what the whole industry will have in a few years with no time for society to update; that would be bad. Covid did show us, for better or for worse, that society can update to massive changes faster than I would have thought in many ways.
But I still think given the magnitude of the economic impact we expect here more gr...