The Nonlinear Library

LW - Bostrom Goes Unheard by Zvi



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bostrom Goes Unheard, published by Zvi on November 13, 2023 on LessWrong.
[Editor's Note: This post is split off from AI #38 and only on LessWrong because I want to avoid overloading my general readers with this sort of thing at this time, and also I think it is potentially important we have a link available. I plan to link to it from there with a short summary.]
Nick Bostrom was interviewed on a wide variety of questions on UnHerd, primarily on existential risk and AI; I found it thoughtful throughout. In it, he spent the first 80% of the time talking about existential risk. Then in the last 20% he expressed the concern that it was unlikely but possible we would overshoot our concerns about AI and never build AGI at all, which would be a tragedy.
How did those who would dismiss AI risk and build AGI as fast as possible react?
About how you would expect. This is from a Marginal Revolution links post.
Tyler Cowen: Nick Bostrom no longer the Antichrist.
The next link in that post was to the GPT-infused version of Rohit Krishnan's book about AI, entitled Creating God (should I read it?).
What exactly changed? Tyler links to an extended tweet from Jordan Chase-Young, mostly a transcript from the video, with a short introduction.
Jordan Chase-Young: FINALLY: AI x-risker Nick Bostrom regrets focusing on AI risk, now worries that our fearful herd mentality will drive us to crush AI and destroy our future potential. (from an UnHerd podcast today).
In other words, Nick Bostrom previously focused on the fact that AI might kill everyone, thought that was bad actually, and attempted to prevent it. But now the claim is that Bostrom regrets this - he repented.
The context is that Peter Thiel, who warns that those warning about existential risk have gone crazy, has previously on multiple occasions referred seemingly without irony to Nick Bostrom as the Antichrist. So perhaps now Peter and others who agree will revise their views? And indeed, there was much 'one of us' talk.
Frequently those who warn of existential risk from AI are told they are saying something religious, are part of a cult, or are pattern matching to the Christian apocalypse, usually as justification for dismissing our concerns without argument.
The recent exception on the other side that proves the rule was Byrne Hobart, author of the excellent blog The Diff, who, unlike most of those concerned about existential risk, is explicitly religious and gave a talk about this at a religious conference. Then Dr. Jonathan Askonas, who also gave a talk, notes he is an optimist skeptical of AI existential risk, yet draws the parallels and talks about 'the rationality of the Antichrist's agenda.'
Note who actually uses such language, and both the symmetries and asymmetries.
Was Jordan's statement a fair description of what was said by Bostrom?
Mu. Both yes and no would be misleading answers.
His statement is constructed so as to imply something stronger than is present. I would not go so far as to call it 'lying,' but I understand why so many responses labeled it that. I would instead call the description highly misleading, especially in light of the rest of the podcast and sensible outside context. But yes, under the rules of Bounded Distrust, this is a legal move one can make, based on the text quoted. You are allowed to be this level of misleading. And I thank him for providing the extended transcript.
Similarly, and reacting to Jordan, here is Louis Anslow saying Bostrom has 'broken ranks,' and otherwise doing his best to provide a maximally sensationalist reading (scare words in bold red!) while staying within the Bounded Distrust rules. Who are the fearmongers, again?
Jordan Chase-Young then quotes at length from the interview, bold is his everywhere.
To avoid any confusion, and because it was a thoughtful discussion worth ...