Notes by Retraice

Re30-NOTES.pdf



(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2022 Retraice, Inc.)

Re30: AI Progress and Surrender

Retraice^1

Technology might become an unacceptable solution (to anything).

Air date: Wednesday, 26th Oct. 2022, 11:45 PM Eastern/US.

Threat models entail building AI?

The threat models (micro, partial, individual, local, global)^2 all want us to know more and go faster. On each level, we can easily see that, eventually, we're doomed, unless we know more and go faster.

Knowing and going are limited in humans, but not in machines.

This suggests I.J. Good's premise that:

"The survival of man depends on the early construction of an ultraintelligent machine."^3

(And this is to say nothing of the human subgroup competition, i.e. `The survival of my group depends on the earlier construction of an ultraintelligent machine.' This is race dynamics.^4)
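As a numerical cartoon of that race dynamic, here is a minimal sketch in Python. Every functional form in it is an assumption chosen for illustration, not a model taken from Good (1965) or Bostrom (2014): n teams each pick a safety level s in [0, 1); development speed is 1 - s; a team wins the race with probability proportional to its speed; and the winner's machine stays under control with probability equal to its safety. Each team maximizes P(win) * s, so losing teams don't internalize the global risk at all.

import numpy as np

def best_response(s_opp: float, n: int, grid: np.ndarray) -> float:
    # Safety level that maximizes expected payoff, P(win) * s, when the
    # other n-1 teams all play safety s_opp.
    rival_speed = (n - 1) * (1.0 - s_opp)
    payoff = grid * (1.0 - grid) / ((1.0 - grid) + rival_speed)
    return float(grid[np.argmax(payoff)])

def symmetric_equilibrium(n: int, iters: int = 200) -> float:
    # Iterate best responses to a symmetric fixed point.
    grid = np.linspace(0.0, 0.999, 2000)  # cap below 1 to avoid 0/0
    s = 0.5
    for _ in range(iters):
        s = best_response(s, n, grid)
    return s

for n in [1, 2, 3, 5, 10, 100]:
    print(f"{n:>3} team(s) -> equilibrium safety ~ {symmetric_equilibrium(n):.3f}")

A lone team picks (essentially) full safety; two teams settle at about 2/3; in general the toy equilibrium is n/(2n-1), falling toward 1/2 as competitors are added. The sketch shows only one thing: `earlier construction' pressure eats directly into `docile enough to keep it under control'.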

Building AI entails surrender of control?

But he also says that, once the first ultraintelligent machine is built,

"there would then unquestionably be an `intelligence explosion,' and the intelligence of man would be left far behind.... Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

This is not human control. This is computer control.^5

Threat models entail surrender of control?

If so, it's unacceptable.

The decision is between stopping technology^6 and surrendering our freedom and fate to it.^7

It's easy to accept that, eventually, humanity will end. All species, like all good (and bad) things, must come to an end. But what if it's set to happen soon? We don't usually think about it that way.^8

(And why are we ok with it happening later, to others living at that time? What about those others? And what about the gajillions of human lives that might be possible^9 if we play our cards right now? This gets right to the point of the `good model', RTFM, specifically the R(ight) part of it.^10)

Do we accept this? If not, what actions are required of us?

Do we care about control, or something less than control?

Are we ok with living in a world not dominated by human control? A zoo? The Matrix?

How can we choose without seeing the alternative?

What is being done?

Secrets are real.^11 The whole problem of artificial superintelligence was laid out by 2014,^12 in a discussion started by von Neumann (to Ulam,^13 1950s) or I.J. Good (1962-63), and, like global warming, it was understood by many people long before the public discussion of it.^14

But the world isn't filled with authors. It's filled with primates. Primates do ponder, but they do other things too.

As outsiders, we (and you?) cannot speak to what insiders are doing. But it's a safe bet they are doing.


References

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition, 2016. ISBN: 978-0198739838. Searches: https://www.amazon.com/s?k=978-0198739838 https://www.google.com/search?q=isbn+978-0198739838 https://lccn.loc.gov/2015956648

Butler, S. (1863). Darwin among the machines. The Press (Canterbury, New Zealand). Reprinted in Butler et al. (1923).

Butler, S., Jones, H., & Bartholomew, A. (1923). The Shrewsbury Edition of the Works of Samuel Butler Vol. 1. J. Cape. No ISBN. https://books.google.com/books?id=B-LQAAAAMAAJ Retrieved 27th Oct. 2020.

Franta, B. (2021). Early oil industry disinformation on global warming. Environmental Politics, 30(4), 663-668. 5 Jan. 2021. https://www.tandfonline.com/doi/full/10.1080/09644016.2020.1863703 Retrieved 27th Oct. 2022.

Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. Advances in Computers, 6, 31-88. https://exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869 Retrieved 27th Oct. 2020.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin. ISBN: 978-0143037880. Searches: https://www.amazon.com/s?k=978-0143037880 https://www.google.com/search?q=isbn+978-0143037880 https://lccn.loc.gov/2004061231

Retraice (2020/09/07). Re1: Three Kinds of Intelligence. retraice.com. https://www.retraice.com/segments/re1 Retrieved 22nd Sep. 2020.

Retraice (2020/10/28). Re8: Strange Machines. retraice.com. https://www.retraice.com/segments/re8 Retrieved 29th Oct. 2020.

Retraice (2022/10/19). Re22: Computer Control. retraice.com. https://www.retraice.com/segments/re22 Retrieved 19th Oct. 2022.

Retraice (2022/10/23). Re27: Now That's a World Model - WM4. retraice.com. https://www.retraice.com/segments/re27 Retrieved 24th Oct. 2022.

Retraice (2022/10/24). Re28: What's Good? RTFM. retraice.com. https://www.retraice.com/segments/re28 Retrieved 25th Oct. 2022.

Retraice (2022/10/25). Re29: The News and World Model 4. retraice.com. https://www.retraice.com/segments/re29 Retrieved 26th Oct. 2022.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. ISBN: 978-0525558613. Searches: https://www.amazon.com/s?k=978-0525558613 https://www.google.com/search?q=isbn+978-0525558613 https://lccn.loc.gov/2019029688

Ulam, S. (1958). John von Neumann 1903-1957. Bull. Amer. Math. Soc., 64, 1-49. https://doi.org/10.1090/S0002-9904-1958-10189-5 Retrieved 29th Oct. 2020.

Yudkowsky, E. (2013). Intelligence explosion microeconomics. Machine Intelligence Research Institute. Technical report 2013-1. https://intelligence.org/files/IEM.pdf Retrieved ca. 9th Dec. 2018.

Yudkowsky, E. (2017). There's no fire alarm for artificial general intelligence. Machine Intelligence Research Institute. 13th Oct. 2017. https://intelligence.org/2017/10/13/fire-alarm/ Retrieved 9th Dec. 2018.

Footnotes

^1 https://www.retraice.com/retraice

^2 Retraice (2022/10/23).

^3 Good (1965) p. 31.

^4 Bostrom (2014) pp. 98-99.

^5 Retraice (2022/10/19).

^6 The Ted Kaczynski / Unabomber approach is similar to the Samuel Butler (1863) approach.

^7 This is roughly the Kurzweil (2005) approach.

^8 The r/collapse subreddit people are a notable exception.

^9 Bostrom (2014) p. 123.

^10 Retraice (2022/10/24).

^11 Retraice (2022/10/25), `The first rule of secrecy is: Nothing on paper.' See also Retraice (2020/09/07).

^12 Bostrom (2014); see also Yudkowsky (2013), Yudkowsky (2017), Russell (2019), among many others.

^13 Ulam (1958); see also Retraice (2020/10/28).

^14 Franta (2021). See also: `Frontline' Review: Why the Climate Changed but We Didn't, New York Times, April 18, 2022.
