
Consider this largely a follow-up to Friday's post about a statement aimed at creating common knowledge around it being unwise to build superintelligence any time soon.
Mainly, there was a great question asked, so I took a few-hour shot at writing out my answer. I then close with a few other follow-ups on issues related to the statement.
A Great Question To Disentangle
There are some potentially crossed wires here, but the intent is great.
Scott Alexander: I think removing a 10% chance of humanity going permanently extinct is worth another 25-50 years of having to deal with the normal human problems the normal way.
Sriram Krishnan: Scott what are verifiable empirical things (model capabilities / incidents / etc) that would make you shift that probability up or down over next 18 months?
I went through three steps interpreting [...]
---
Outline:
(00:30) A Great Question To Disentangle
(02:20) Scott Alexander Gives a Fast Answer
(04:53) Question 1: What events would most shift your p(doom | ASI) in the next 18 months?
(13:01) Question 1a: What would get this risk down to acceptable levels?
(14:48) Question 2: What would shift the amount that stopping us from creating superintelligence for a potentially extended period would reduce p(doom)?
(17:01) Question 3: What would shift your timelines to ASI (or to sufficiently advanced AI, or 'to crazy')?
(20:01) Bonus Question 1: Why Do We Keep Having To Point Out That Building Superintelligence At The First Possible Moment Is Not A Good Idea?
(22:44) Bonus Question 2: What Would a Treaty On Prevention of Artificial Superintelligence Look Like?
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.