The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.
Topics addressed in this episode include:
*) Reasons why superintelligence might never be created
*) Timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars , and travelling to Mars without knowing what we'll breathe when we'll get there
*) A startling illustration of the dramatic power of exponential growth
*) A concern for short-term risk is by no means a reason to pay less attention to longer-term risks
*) Why the "Cease" option is looking more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Going beyond the statement of "Life, liberty, and the pursuit of happiness" to a starting point for aligning AI with human values?
*) A cliff-hanger ending
The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration