Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)?, published by gwern on July 3, 2023 on LessWrong.
A podcast interview (posted 2023-06-29) with noted AI researcher Douglas Hofstadter discusses his career and current views on AI.
Hofstadter has previously energetically criticized GPT-2/3 models (and deep learning and compute-heavy GOFAI). These criticisms were widely circulated & cited, and apparently many people found Hofstadter a convincing & trustworthy authority when he was negative on deep learning capabilities & prospects, and so I found his comments in this most recent discussion of considerable interest (via Edward Kmett).
Below I excerpt from the second half where he discusses DL progress & AI risk:
Q: ...Which ideas from GEB are most relevant today?
Hofstadter: ...In my book, I Am a Strange Loop, I tried to set forth what it is that really makes a self or a soul. I like to use the word "soul", not in the religious sense, but as a synonym for "I", a human "I", capital letter "I." So, what is it that makes a human being able to validly say "I"? What justifies the use of that word? When can a computer say "I" and we feel that there is a genuine "I" behind the scenes?
"soul", not in the religious sense, but as a synonym for "I", a human "I", capital letter "I." So, what is it that makes a human being able to validly say "I"? What justifies the use of that word? When can a computer say "I" and we feel that there is a genuine "I" behind the scenes?
I don't mean like when you call up the drugstore and the chatbot, or whatever you want to call it, on the phone says, "Tell me what you want. I know you want to talk to a human being, but first, in a few words, tell me what you want. I can understand full sentences." And then you say something and it says, "Do you want to refill a prescription?" And then when I say yes, it says, "Gotcha", meaning "I got you." So it acts as if there is an "I" there, but I don't have any sense whatsoever that there is an "I" there. It doesn't feel like an "I" to me, it feels like a very mechanical process.
But in the case of more advanced things like ChatGPT-3 or GPT-4, it feels like there is something more there that merits the word "I." The question is, when will we feel that those things actually deserve to be thought of as being full-fledged, or at least partly fledged, "I"s?
I personally worry that this is happening right now. But it's not only happening right now. It's not just that certain things that are coming about are similar to human consciousness or human selves. They are also very different, and in one way, it is extremely frightening to me. They are extraordinarily much more knowledgeable and they are extraordinarily much faster. So that if I were to take an hour in doing something, the ChatGPT-4 might take one second, maybe not even a second, to do exactly the same thing.
And that suggests that these entities, whatever you want to think of them, are going to be very soon, right now they still make so many mistakes that we can't call them more intelligent than us, but very soon they're going to be, they may very well be more intelligent than us and far more intelligent than us. And at that point, we will be receding into the background in some sense. We will have handed the baton over to our successors, for better or for worse.
And I can understand that if this were to happen over a long period of time, like hundreds of years, that might be okay. But it's happening over a period of a few years. It's like a tidal wave that is washing over us at unprecedented and unimagined speeds. And to me, it's quite terrifying because it suggests that everything that I used to believe was the case is being overturned.
Q: What are some things specifically that terrify you? What are some issues that you're really...
D. Hofstadter: When I started out studying cognitive science and thinking about the mind and computation, you know, this was many years ago, around 1960, and I knew how computers worked and I knew how extraordinarily rigid they were. You made the slightest typing error and it comp...