Notes by Retraice

Re32-NOTES.pdf



(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2022 Retraice, Inc.)

Re32: AI News

Retraice^1

What our new machine overlords are up to.

Air date: Thursday, 27th Oct. 2022, 11:00 PM Eastern/US.

Let's recap what we said about I.J. Good yesterday, then explain what a model is, then read the AI news and see what happens.

I.J. Good, one more time

Re30^2 follow-up: Is it a paradox, an irony, or a set of false hypotheses?^3

* (p) We have to build AGI to survive.
* (q) We have to cede control if we build AGI.
* (r) We have to cede control to survive.

The problem is that this is not a good `survival': either we end up without control (in zoos?), or we end up dead soon after `surviving' thanks to the AGI.

And this problem is different from de Garis's artilect war, which is based on the `ought'-based disagreement between cosmists and terrans:

"I believe that the 21st century will be dominated by the question as to whether humanity should or should not build artilects, i.e. machines of godlike intelligence, trillions of trillions of times above the human level. I see humanity splitting into two major political groups, which in time will become increasingly bitterly opposed, as the artilect issue becomes more real and less science fiction like."^4 [emphasis added]

Compare this to Good's `is' argument:

"The survival of man [does] depend on ... construction...", "[T]here would ... be an `intelligence explosion'...."^5

And, of course, many people will not accept that AGIs are possible or, if they are possible, that they will be bad.^6

What's a model?

We've talked about world models and threat models, so what's our definition? We're not using it in a technical sense.^7

* A model is a simplified representation of some part of the world. We make a model because it improves our chances of correctly predicting (guessing about) the future.
* What are we looking for? Information, "a distinction [in what we can sense] that makes a difference" in the world beyond our senses.^8

Stuart Russell offered this explanation of information in conversation with Sam Harris:

"I think everyone understands that out there is a world, the real world, and we don't know everything about the real world. So it could be one way or it could be another. In fact it could be--there's a gazillion different ways the world could be. You know, all the cars that are out there parked could be parked in different places and I wouldn't even know it. So there are many, many ways the world could be and information is just something that tells you a little bit more about what the world is, which way is the real world out of all the possibilities that it could be. And as you get more and more information about the world--typically we get it through our eyes and ears and increasingly we're getting it through the internet--then that information helps to narrow down the ways that the real world could be. And Shannon, who was an electrical engineer at MIT, figured out a way to actually quantify the amount of information. So if you think about a coin flip, if I can tell you which way that coin is going to come out, heads or tails, then that's one bit of information. And so that gives you the answer for a binary choice between two things. And so from information theory we have wireless communication, we have the internet, we have all the things that allow computers to talk to each other through physical mediums. So information theory has been in some sense the complement or the handmaiden of computation, allowing the whole information revolution to happen."^9

* "Intelligence exploits redundancy [in information] to make predictions more certain." We want to "improve the reliability of predictions by exploiting the redundancy [compressibility?] of sensory messages--in other words, ... guess right."^10
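Russell's coin-flip example, and Barlow's point about redundancy, can be checked numerically. A minimal sketch (our own illustration, not from the segment): Shannon's measure assigns -log2(p) bits to an event of probability p, so a fair coin flip yields exactly one bit, and a highly redundant message compresses far below its raw size.

```python
import math
import zlib

def information_bits(p):
    """Bits of information gained by learning that an event of probability p occurred."""
    return -math.log2(p)

# A fair coin flip: two equally likely outcomes -> exactly 1 bit,
# as in Russell's example.
print(information_bits(0.5))  # 1.0

# Narrowing down "ways the world could be": learning which of 8 equally
# likely states holds (e.g. which of 8 spots a car is parked in) is 3 bits.
print(information_bits(1 / 8))  # 3.0

# Redundancy: a repetitive message compresses to a fraction of its raw
# length, which is the compressibility Barlow's remark points at.
redundant = b"heads " * 1000
compressed = zlib.compress(redundant)
print(len(redundant), len(compressed))  # compressed is much shorter
```

The same quantity generalizes to the entropy of a whole distribution, but the single-event form above is enough to reproduce the "one bit per coin flip" claim.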

So let's say a model is "a simplified representation of some part of the world, for a certain purpose, usually an intelligent purpose."
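The working definition above can be made concrete with a toy sketch (all names hypothetical, not from the segment): a model that keeps only one number about a coin, the count of observed heads, is a drastically simplified representation of part of the world, yet it improves the chances of guessing the next flip correctly.

```python
# A toy "model" in the sense defined above: a simplified representation
# (two counters) of some part of the world (a possibly biased coin),
# kept for a purpose (predicting the next flip).

class CoinModel:
    def __init__(self):
        self.heads = 0
        self.flips = 0

    def observe(self, outcome):
        """Update the representation from one observed flip ('H' or 'T')."""
        self.flips += 1
        if outcome == "H":
            self.heads += 1

    def predict(self):
        """Guess the next outcome: the side seen more often so far."""
        if self.flips == 0:
            return "H"  # no information yet; an arbitrary default
        return "H" if self.heads * 2 >= self.flips else "T"

model = CoinModel()
for outcome in "HHTHHHTH":
    model.observe(outcome)
print(model.predict())  # "H": 6 of the 8 observed flips were heads
```

Everything else about the coin (its weight, color, history) is discarded; the model keeps only what serves the predictive purpose.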

AI news

We'll be using a weekly AI newsletter that appears to be generated by an iOS developer and his email newsletter curation software. Not much information is available about how the newsletter is curated, but it and its articles all serve a lot of ads. There are many such newsletters.

AI Weekly by Essentials, #300

* Artificial Intelligence: the coming tsunami, aecmag.com: something to do with `BIM workflows'. What is BIM?

"Building Information Modeling (BIM) is the holistic process of creating and managing information for a built asset. Based on an intelligent model and enabled by a cloud platform, BIM integrates structured, multi-disciplinary data to produce a digital representation of an asset across its lifecycle, from planning and design to construction and operations."^11

AI Weekly by Essentials, #299

* New European Political Party Is Led by an Artificial Intelligence, futurism.com: The party's "head honcho, Leader Lars, is actually an AI chatbot, and all of its policies are AI-derived." Consider: Two big aspects of modern AI are predictions^12 and decisions.^13

* New, transparent AI tool may help detect blood poisoning, arstechnica.com: "The algorithm scans electronic records and may reduce sepsis deaths." Consider: This is `why' we have to build AGI to survive: in this case, we're talking about an individual's survival, or the partial survival of that individual's capacities.^14 But when does building AI become more loss than gain? When do we stop, or how do we steer, on the path from `we need this to fix X' to `what we built is now in control of everything'? At what point do tools become creatures? We at Retraice do not have a strong opinion either way at this point; both arguments seem compelling. It's crucial to remember that `catastrophic risk' does not apply only to global catastrophic risk^15: losing five minutes of time is catastrophic to the `micro' part of a life; losing a limb is catastrophic to the `partial' part of a life; dying is catastrophic to the individual and local (i.e., family, friends, colleagues) parts of life.

* 6 Reactions to the White House's AI Bill of Rights, ieee.org: "It's not what you might think--it doesn't give artificial-intelligence systems the right to free speech (thank goodness) or to carry arms (double thank goodness), nor does it bestow any other rights upon AI entities. Instead, it's a nonbinding framework for the rights that we old-fashioned human beings should have in relationship to AI systems." This seems unlikely to become anything, given the B&R (blue and red) politics, strategic intelligence, and game-theory competition that would affect the passing of such legislation. Politicians, the companies who lobby them, and political parties would all have to be on the same page. The thing that's driving AI is the demand for what it can do now, and will be able to do soon, and the money to be made by satisfying that demand. That is a very powerful force moving against any opposing forces for legislation. And legislation is never global, only country- or bloc-specific.

* Inside effective altruism, where the far future counts a lot more than the present, technologyreview.com: "The giving philosophy, which has adopted a focus on the long term, is a conservative project, consolidating decision-making among a small set of technocrats." This is relevant to our `good model', RTFM^16, and Bostrom's `cosmic endowment'.^17

__

References

Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press. ISBN: 978-1633695672. Searches: https://www.amazon.com/s?k=978-1633695672 https://www.google.com/search?q=isbn+978-1633695672 https://lccn.loc.gov/2017049211

Barlow, H. B. (2004). Guessing and intelligence. (pp. 382-384). In Gregory (2004).

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition, 2016. ISBN: 978-0198739838. Searches: https://www.amazon.com/s?k=978-0198739838 https://www.google.com/search?q=isbn+978-0198739838 https://lccn.loc.gov/2015956648

Bostrom, N., & Cirkovic, M. M. (Eds.) (2008). Global Catastrophic Risks. Oxford University Press. ISBN: 978-0199606504. Searches: https://www.amazon.com/s?k=978-0199606504 https://www.google.com/search?q=isbn+978-0199606504 https://lccn.loc.gov/2008006539

de Garis, H. (2005). The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications. ISBN: 0882801546. Searches: https://www.amazon.com/s?k=0882801546 https://www.google.com/search?q=isbn+0882801546

Floridi, L. (2010). Information : A Very Short Introduction. Oxford. ISBN: 978-0199551378. Searches: https://www.amazon.com/s?k=9780199551378 https://www.google.com/search?q=isbn+9780199551378 https://lccn.loc.gov/2009941599

Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. Advances in Computers, 6, 31-88. https://exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869 Retrieved 27th Oct. 2020.

Gregory, R. L. (Ed.) (2004). The Oxford Companion to the Mind. Oxford University Press, 2nd ed. ISBN: 0198662246. Searches: https://www.amazon.com/s?k=0198662246 https://www.google.com/search?q=isbn+0198662246 https://lccn.loc.gov/2004275127

Ha, D., & Schmidhuber, J. (2018). World models. arxiv.org. [Submitted on 27 Mar 2018 (v1), last revised 9 May 2018 (this version, v4)] https://doi.org/10.48550/arXiv.1803.10122 Retrieved 19th Oct. 2022.

Retraice (2022/10/23). Re27: Now That's a World Model - WM4. retraice.com. https://www.retraice.com/segments/re27 Retrieved 24th Oct. 2022.

Retraice (2022/10/24). Re28: What's Good? RTFM. retraice.com. https://www.retraice.com/segments/re28 Retrieved 25th Oct. 2022.

Retraice (2022/10/26). Re30: AI Progress and Surrender. retraice.com. https://www.retraice.com/segments/re30 Retrieved 27th Oct. 2022.

Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498

Schneier, B. (2000). Secrets and Lies: Digital Security in a Networked World. Wiley. ISBN: 0471453803. Searches: https://www.amazon.com/s?k=0471453803 https://www.google.com/search?q=isbn+0471453803 https://lccn.loc.gov/00042252

Weston, A. (2000). A Rulebook for Arguments. Hackett, 3rd ed. ISBN: 0872205525. Also available at: https://archive.org/details/rulebookforargum00west_3 Searches: https://www.amazon.com/s?k=0872205525 https://www.google.com/search?q=isbn+0872205525 https://lccn.loc.gov/00058121

Footnotes

^1 https://www.retraice.com/retraice

^2 Retraice (2022/10/26).

^3 Spoiler: On a future segment, we'll describe the problems with this thinking as (a) reasoning from too little evidence, and (b) failure to consider alternatives, `the two great fallacies'. See Weston (2000) pp. 71-72.

^4 de Garis (2005) p. 11.

^5 Good (1965) pp. 31-33.

^6 This gets to Weston's two great fallacies.

^7 We got `threat model' from Schneier (2000) chpt. 19, but it's not defined technically there. `World model' is used in AI sometimes (Ha & Schmidhuber (2018), Russell & Norvig (2020) p. 848), but that's not how we're using it.

^8 Donald MacKay quoted in Floridi (2010) p. 23, who also notes that Gregory Bateson's "difference which makes a difference" formulation is better known, though less accurate.

^9 Waking Up With Sam Harris #53 The Dawn of Artificial Intelligence. Nov. 23, 2016, from 0:07:06.

^10 Barlow (2004) pp. 383-384.

^11 Autodesk, a design and engineering software company.

^12 Agrawal et al. (2018).

^13 Russell & Norvig (2020) chpts. 16, 17, 18.

^14 See Retraice (2022/10/23) on micro, partial and individual threat modeling.

^15 Bostrom & Cirkovic (2008).

^16 Retraice (2022/10/24).

^17 Bostrom (2014) pp. 122-123.
