(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2022 Retraice, Inc.)
Re54: Implications and Endgames
Retraice^1
WAAIT Part 5: There are constraints on what we can believe and what we should do.
Air date: Friday, 18th Nov. 2022, 11:00 PM Eastern/US.
The big questions we've asked
o What about AI though?
o What is AI again?
o How does AI mix with other things?
o How much do I matter, individually?
o As a species: What are we doing? Where are we headed?
o As players:^2 How to win?
o What can't we see?
o What about cumulative progress?
o What about superpowers and secrecy?
Some answers are clearer than others
* AI is 3D: space, time, value.
* AI is hard to define.
* AI is already mixed in, and will only become more so.
* We could be headed for great or terrible things.
* Ignorant, incapable individuals will not affect where we're going. Think yap, yap, yap vs. typa typa typa.
* Players in the game will focus on strategies and tactics.
* We're already living inside a world shaped by `strategic intelligence' (espionage).
* Secrets have been kept.
* Secret progress would be cumulative.
* AI is a new superpower in the mix.
Belief and action implications
Belief implications:^3
* Records and claims are evidence, but...
* Hard evidence is better: things that are true at all times, places and values.^4 Think math, logic, technology, artifacts, and to some extent empirical science.
Action implications:
* Players: Cause good, prevent bad.^5
* Individuals: If you have a choice, choose your work wisely. Outside forces in the form of coalitions will, to greater and lesser extents, affect the game. But there are many positions in the world that will not be part of any such force.
* Humanity: Do or die,^6 and prevent a singleton:^7 We won't be humanity if we're cells in a singleton, beholden to a larger entity. Alternatively, if a singleton is necessary or unavoidable, aim for a happy, healthy one?
Endgames: skeptical optimism; player orientation
* AI is trying to be great. The propaganda video series recently put out by DeepMind, called `AI by me', is, even through a true skeptic's eyes, inspiring and hopeful stuff.
* We can blow it.
* Players win games.
* It's going to be hard.
References
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition, 2016. ISBN: 978-0198739838. Searches: https://www.amazon.com/s?k=978-0198739838 https://www.google.com/search?q=isbn+978-0198739838 https://lccn.loc.gov/2015956648
Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette. ISBN: 978-0316484916. Searches: https://www.amazon.com/s?k=978-0316484916 https://www.google.com/search?q=isbn+978-0316484916 https://lccn.loc.gov/2019956459
Rees, M. (2003). Our Final Hour: A Scientist's Warning. Basic Books. ISBN: 0465068634. Searches: https://www.amazon.com/s?k=0465068634 https://www.google.com/search?q=isbn+0465068634 https://lccn.loc.gov/2004556001
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. ISBN: 978-0525558613. Searches: https://www.amazon.com/s?k=978-0525558613 https://www.google.com/search?q=isbn+978-0525558613 https://lccn.loc.gov/2019029688
Footnotes
^1 https://www.retraice.com/retraice
^2 Players: the people in the DeepMind and OpenAI buildings in the rooms with the whiteboards. They're the players.
^3 Belief is a matter of degree, and it's not obvious that we can choose our beliefs, if we have control over them at all.
^4 The problem of deciding what's `true' of human values is a huge topic. See: Russell (2019) chpt. 9; Bostrom (2014) chpt. 12. The main point is that the people who happen to be working at the frontiers of the most important technologies are competent at that work, but likely not at representing humanity collectively, especially morally and ethically.
^5 This is something like the Hippocratic Oath.
^6 See Ord (2020); Rees (2003). Side note: `Environmentalism' should account for the fact that computer control, AI, and also things like the solar system are part of the human environment, and therefore should be taken very seriously as threats to our future. But this is really a matter of definition.
^7 "[A] world order in which there is at the global level a single decision-making agency"^8
By Retraice, Inc.