Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcript: Yudkowsky on Bankless follow-up Q&A, published by vonk on February 28, 2023 on LessWrong.
This follow-up Q&A took place shortly after the podcast was released. It covers the hosts' reactions to the episode; clears up some questions about AI takeover pathways & alignment difficulties (like "why can't we just ask AIs to help solve alignment?"); OpenAI/Silicon Valley & what these companies should be doing instead; Eliezer's take on doomerism; and what a surviving distant future would look like. Let me know if you can clear up some of the [??] places (here is the original transcript alongside audio).

michaelwong.eth: Good afternoon. Good morning, wherever you are. Got to be one of those, I bet. It's another Monday, minting Monday with Bankless. So I hope that you guys got to listen to the episode this morning about AI. And I have a hard time pronouncing this gentleman's name, but I think it's Eliezer. So I got Lucas on the call. I got Ryan on the call. I got David on the call. What's up everybody?
David Hoffman: Yo, yo, how are you feeling? Alright?
Ryan Sean Adams: Hey, still live. How you feeling, David?
David: [laughs] Pretty good. Pretty good. Just, you know, every day trying to push existential dread to the background.
Ryan: Yeah, me too. Especially since last Monday when we recorded this thing. Mike, Lucas, how're you guys doing?
0x_Lucas: Doing pretty good. Also kind of going through my own mini existential crisis right now and just trying to survive. One day at a time.
michaelwong.eth: I'm living large. I didn't know that the Mr. Roboto part of that song was so late in the song. So thanks, everybody, for sticking with me through that. But it's kind of relevant today. A little bit relevant. So we're gonna jump into that in just a moment.
Ryan: Guys, can we get into logistics first? So what are we doing here today, Lucas and Mike?
0x_Lucas: Yeah, absolutely. So we are on our Monday mint number six of the year. So for those familiar, we mint our flagship Bankless podcast every Monday as a limited edition collectible on Sound protocol. So you can go ahead and mint these at collectibles.bankless.com, and as part of it we like to host these little live Twitter spaces just so everyone has a live [??]. Ryan and David'd love to kind of do a debrief on the episode. And hopefully we have Eliezer joining us, and I've also probably butchered his name, but yeah, hopefully he is able to join us in the next few minutes here. But overall, just wanted to debrief on the episode, talk about the mint. And yeah, get you guys's thoughts.
Ryan: Well, I'm definitely gonna be minting this one. That's for sure. And I hope the AI doesn't kill me for it in the future. This is a pretty unique episode, in my mind, David. This is one that caught me by the most surprise, I think, of anything we've recorded. In that, we had an agenda, and then it took a different direction. It was a very interesting direction to pursue, but one I wasn't quite ready for. I went and spent the weekend... actually, I listened to this episode again. I actually enjoyed hearing it more than I think I enjoyed actually recording it, for whatever reason – some of the things sunk in a little bit better – but how did you receive it on the other side of this, David?
David: Yeah, so the AI alignment problem was like a rabbit hole that I remember going down in my, like, I think like college days. And so this is always like something that I had in the back of my head. And I think that's like why you and I have reacted differently to it. And I know you went down the rabbit hole too. But it's like it was just something that I thought kind of everyone knew about, and we just all understood that it was like, futile. It was like a thought experiment that was futile to really like reason about, because there is no solu...