Notes by Retraice
By Retraice, Inc.
(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)
Re116: When Does the Bad Thing Happen? (Technological Danger, Part 4)
retraice.com
Agreements about reality in technological progress. Basic questions; a chain reaction of philosophy; deciding what is and isn't in the world; agreeing with others in order to achieve sharing; other concerns compete with sharing and prevent agreement; the need for agreement increasing.
Air date: Saturday, 14th Jan. 2023, 10:00 PM Eastern/US.
The chain reaction of questions
We were bold enough to predict a decrease in freedom (without defining it);^1 we were bold enough to define technological progress (with a definition this time).^2 But in predicting and assessing `bad things' (i.e. technological danger), we should also be able to say when the bad things might or might not happen, and did or didn't happen. Can we? When does anything start and stop? How do we draw the lines in chronology? In causality? There is a chain reaction of questions and subjects:
* Time: When did it start? With the act, or the person, or the species?
* Space: Where did it start?
* Matter: What is it?
* Causality: What caused it?
* Free will: Do we cause anything, really?
Ontology and treaties for sharing
Ontology is the branch of philosophy that deals with `being', `existence', `reality', the categories of such things, etc. I.e., it's about `what is', or `What is there?', or `the stuff' of the world. From AIMA4e (emphasis added):
"We should say up front that the enterprise of general ontological engineering has so far had only limited success. None of the top AI applications (as listed in Chapter 1) make use of a general ontology--they all use special-purpose knowledge engineering and machine learning. Social/political considerations can make it difficult for competing parties to agree on an ontology. As Tom Gruber (2004) says, `Every ontology is a treaty--a social agreement--among people with some common motive in sharing.' When competing concerns outweigh the motivation for sharing, there can be no common ontology. The smaller the number of stakeholders, the easier it is to create an ontology, and thus it is harder to create a general-purpose ontology than a limited-purpose one, such as the Open Biomedical Ontology."^3
Prediction: the need for precise ontologies is going to increase.
Ontology is not a solved problem--neither in philosophy nor in artificial intelligence. Yet we can't sit around and wait. The computer control game is on. We have to act, and act effectively. Further, our need for precise ontologies--that is, for the making of treaties--is going to increase, because we're going to be dealing with technologies that have more and more precise ontologies. So, consider:
* More stakeholders make treaties less likely;
* The problems that we can solve without AI (and its ontologies and our own ontologies) are decreasing;
* Precise ontology enables knowledge representation (outside of machine learning), and therefore AI, and therefore the effective building of technologies and taking of actions, and therefore work to be done;
* Treaties can make winners and losers in the computer control game;
* Competing concerns can outweigh the motive for sharing, and therefore treaties, and therefore winning.
_
References
Retraice (2023/01/11). Re113: Uncertainty, Fear and Consent (Technological Danger, Part 1). retraice.com. https://www.retraice.com/segments/re113 Retrieved 12th Jan. 2023.
Retraice (2023/01/13). Re115: Technological Progress, Defined (Technological Danger, Part 3). retraice.com. https://www.retraice.com/segments/re115 Retrieved 14th Jan. 2023.
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498
Footnotes
^1 Retraice (2023/01/11)
^2 Retraice (2023/01/13)
^3 Russell & Norvig (2020) p. 316. And Gruber's Every Ontology Is a Treaty (2004): https://tomgruber.org/writing/sigsemis-2004
Re115: Technological Progress, Defined (Technological Danger, Part 3)
retraice.com
How we would decide, given predictions, whether to risk continued technological advance. Danger, decisions, advancing and progress; control over the environment and `we'; complex, inconsistent and conflicting human preferences; `coherent extrapolated volition' (CEV); divergence, winners and losers; the lesser value of humans who disagree; better and worse problems; predicting progress and observing progress; learning from predicting progress.
Air date: Friday, 13th Jan. 2023, 10:00 PM Eastern/US.
Progress, `we' and winners
If the question is about `danger', the answer has to be a decision about whether to proceed (advance). But how to think about progress?
Let `advance' mean moving forward, whether or not it's good for humanity. Let `progress' mean moving forward in a way that's good for humanity, by some definition of good.^1
Progress can't simply be control over the environment, because whose control? (Who is `we'?) We can't all control equally, benefit equally, or prefer the same things. This corresponds to the Russell & Norvig (2020) chpt. 27 problems of the complexity and inconsistency of human preferences,^2 and the Bostrom (2014) chpt. 13 problem of "locking in forever the prejudices and preconceptions of the present generation" (p. 256).
A possible solution is Yudkowsky's (2004) `coherent extrapolated volition'.^3 If humanity's collective `volition' doesn't converge, this might entail that there has to be a `winner' group in the game of humans vs. humans.
This implies the (arguably obvious) conclusion that we humans value other humans more or less depending on the beliefs and desires they hold.
Better and worse problems can be empirical
Choose between A and B:
o carcinogenic bug spray, or malaria;
o lead in the water sometimes (Flint, MI), or fetching pails;
o an unhappy day job, or no home utilities (or no home).
Which do you prefer? This is empirical, in that we can ask people. We can't ask people in the past or the future; but we can always ask people in the present to choose between two alternative problems.
Technological progress
First, we need a definition of progress in order to make decisions. Second, we need an answer to the common retort that `technology creates more problems than it solves'. `More' doesn't matter; what matters is whether the new problems, together, are `better' than the old problems, together.
We need to define two timeframes of `progress' because we're going to use the definition to make decisions: one timeframe to classify a technology before the decision to build it, and one timeframe to classify it after it has been built and has had observable effects. It's the difference between expected progress and observed progress. Actual, observed progress can only be determined retrospectively.
Predicted progress:
A technology seems like progress if: the predicted problems it will create are better to have than the predicted problems it will solve, according to the humans alive at the time of prediction.^4
Actual progress:
A technology is progress if: given an interval of time, the problems it created were better to have than the problems it solved, according to the humans alive during the interval.
(The time element is crucial: a technology will be, by definition, progress if up to a moment in history it never caused worse problems than it solved; but once it does cause such problems, it ceases to be progress, by definition.)
Prediction progress (learning):
`Actual progress', if tracked and absorbed, could be used to improve future `predicted progress'.
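The two definitions above amount to the same comparison applied in two timeframes. As a toy sketch (the scoring scheme is hypothetical, not from the notes): represent each set of problems as badness scores assigned by the humans of the relevant timeframe, and call a technology progress when the problems it creates are better to have (less bad, in total) than the problems it solves.

```python
# Toy model of `predicted progress' vs. `actual progress' (hypothetical).
# A "problem" maps its name to a badness score assigned by the humans
# of the relevant timeframe; a lower total badness is "better to have".

def better_to_have(created, solved):
    """True if the problems created are better to have (less bad, in
    total) than the problems solved, per the given badness scores."""
    return sum(created.values()) < sum(solved.values())

def predicted_progress(predicted_created, predicted_solved):
    # Classified BEFORE building, using predicted problems.
    return better_to_have(predicted_created, predicted_solved)

def actual_progress(observed_created, observed_solved):
    # Classified retrospectively, over an observed interval.
    return better_to_have(observed_created, observed_solved)

# Example: bug spray solves malaria (very bad) but creates a
# carcinogen risk (bad, but judged less bad by those asked).
solved = {"malaria": 9}
created = {"carcinogen risk": 4}
print(actual_progress(created, solved))  # True: judged progress
```

Note how the time element in the definition falls out of the model: re-score the problems over a different interval (or with different respondents) and the same technology can stop being `progress'.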
_
References
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition, 2016. ISBN: 978-0198739838. Searches: https://www.amazon.com/s?k=978-0198739838 https://www.google.com/search?q=isbn+978-0198739838 https://lccn.loc.gov/2015956648
Retraice (2022/10/24). Re28: What's Good? RTFM. retraice.com. https://www.retraice.com/segments/re28 Retrieved 25th Oct. 2022.
Retraice (2023/01/09). Re111: AI and the Gorilla Problem. retraice.com. https://www.retraice.com/segments/re111 Retrieved 10th Jan. 2023.
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498
Yudkowsky, E. (2004). Coherent extrapolated volition. Machine Intelligence Research Institute. 2004. https://intelligence.org/files/CEV.pdf Retrieved 13th Jan. 2023.
Footnotes
^1 Retraice (2022/10/24).
^2 Cf. Russell & Norvig (2020) p. 34 and Re111 (Retraice (2023/01/09)).
^3 See also Bostrom (2014) p. 259 ff.
^4 The demonstrated preferences of those humans? The CEV of them? This is hard.
Re114: Visions of Loss (Technological Danger, Part 2)
retraice.com
Human loss of freedom by deference to authority, dependency on machines, and delegation of defense. Wiener: freedom of thought and opinion, and communication, as vital; Russell: diet, injections and injunctions in the future; Horesh: technological behavior modification in the present; terrorist Kaczynski: if AI succeeds, we'll have machine control or elite control, but no freedom; Bostrom: wearable surveillance devices and power in the hands of a very few as solution.
Air date: Thursday, 12th Jan. 2023, 10:00 PM Eastern/US.
All bold emphasis added.
Mathematician Wiener
Is this what's at stake, in the struggle for freedom of thought and communication?
Wiener (1954), p. 217:^1
"I have said before that man's future on earth will not be long unless man rises to the full level of his inborn powers. For us, to be less than a man is to be less than alive. Those who are not fully alive do not live long even in their world of shadows. I have said, moreover, that for man to be alive is for him to participate in a world-wide scheme of communication. It is to have the liberty to test new opinions and to find which of them point somewhere, and which of them simply confuse us. It is to have the variability to fit into the world in more places than one, the variability which may lead us to have soldiers when we need soldiers, but which also leads us to have saints when we need saints. It is precisely this variability and this communicative integrity of man which I find to be violated and crippled by the present tendency to huddle together according to a comprehensive prearranged plan, which is handed to us from above. We must cease to kiss the whip that lashes us...."
p. 226: "There is something in personal holiness which is akin to an act of choice, and the word heresy is nothing but the Greek word for choice. Thus your Bishop, however much he may respect a dead Saint, can never feel too friendly toward a living one.
"This brings up a very interesting remark which Professor John von Neumann has made to me. He has said that in modern science the era of the primitive church is passing, and that the era of the Bishop is upon us. Indeed, the heads of great laboratories are very much like Bishops, with their association with the powerful in all walks of life, and the dangers they incur of the carnal sins of pride and of lust for power. On the other hand, the independent scientist who is worth the slightest consideration as a scientist, has a consecration which comes entirely from within himself: a vocation which demands the possibility of supreme self-sacrifice...."
p. 228: "I have indicated that freedom of opinion at the present time is being crushed between the two rigidities of the Church and the Communist Party. In the United States we are in the process [1950] of developing a new rigidity which combines the methods of both while partaking of the emotional fervor of neither. Our Conservatives of all shades of opinion have somehow got together to make American capitalism and the fifth freedom [economic freedom^2] of the businessman supreme throughout all the world...."
p. 229: "It is this triple attack on our liberties which we must resist, if communication is to have the scope that it properly deserves as the central phenomenon of society, and if the human individual is to reach and to maintain his full stature. It is again the American worship of know-how as opposed to know-what that hampers us."
Mathematician and philosopher Russell
Will this happen?
Russell (1952), pp. 65-66:^3
"It is to be expected that advances in physiology and psychology will give governments much more control over individual mentality than they now have even in totalitarian countries. Fichte laid it down that education should aim at destroying free-will, so that, after pupils have left school, they shall be incapable, throughout the rest of their lives, of thinking or acting otherwise than as their schoolmasters would have wished. But in his day this was an unattainable ideal: what he regarded as the best system in existence produced Karl Marx. In [the] future such failures are not likely to occur where there is dictatorship. Diet, injections, and injunctions will combine, from a very early age, to produce the sort of character and the sort of beliefs that the authorities consider desirable, and any serious criticism of the powers that be will become psychologically impossible. Even if all are miserable, all will believe themselves happy, because the government will tell them that they are so."
Kaczynski says similar things throughout his `manifesto'.
Philosopher Horesh
Is this really happening already?
Horesh (2020), p. 158:^4
"Meanwhile, a previously unimaginable level of thought control is fast being made accessible for every middle-income autocracy that chooses to use it. Visit the wrong website and your social credit score declines, look up the wrong book and it drops further, mention the wrong phrases on social media and it sinks so low that alarms go off in the camera rooms when your face flashes on the screen. The opportunities this presents for behavioral modification are simply astonishing, as the exploration of every forbidden idea or acquaintance can be made part of a social credit score, whose every drop causes another shock in the hearts of the lowly ranked.... Yet, whether or not China goes so far, they have developed the tools needed to implement a security regime more totalitarian than even that of the East German Stasi, at a fraction of the effort and far lower cost, for any autocrat who chooses to go that far. Russians and Turks, Poles and Hungarians, could soon find themselves entering a vise from which they never escape. For once such a security regime is implemented, resistance can be shut down in ways not previously imagined, while independent thinking is gradually snuffed out."
Mathematician and terrorist Kaczynski
Are these the only possible conclusions of industrial society?
(Try to forget that Kaczynski killed three people and ruined many more lives. His vision of the future is quoted by many because it is nuanced and sharply observed; it is worth salvaging from the wreckage of his life.)
Kaczynski & Skrbina (2010), pp. 93-94:^5
"172. First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
"173. If the machines are permitted to make all their own decisions we can't make any conjecture as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
"174. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite--just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consist of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes `treatment' to cure his `problem.' Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or to make them `sublimate' their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they most certainly will not be free. They will have been reduced to the status of domestic animals."
Philosopher Bostrom
So far we have heard about losing power and freedom to machines or their controllers. Now we hear about what preventing (or trying to prevent) such losses might look like.
To secure ourselves against civilization-ending new technologies, would we accept the following? Would it work?
Bostrom (2019), pp. 465-466:
"For a picture of what a really intensive level of surveillance could look like, consider the following vignette:
"High-tech Panopticon
"Everybody is fitted with a `freedom tag'--a sequent to the more limited wearable surveillance devices familiar today, such as the ankle tag used in several countries as a prison alternative, the bodycams worn by many police forces, the pocket trackers and wristbands that some parents use to keep track of their children, and, of course, the ubiquitous cell phone (which has been characterized as `a personal tracking device that can also be used to make calls'). The freedom tag is a slightly more advanced appliance, worn around the neck and bedecked with multidirectional cameras and microphones. Encrypted video and audio is continuously uploaded from the device to the cloud and machine-interpreted in real time. AI algorithms classify the activities of the wearer, his hand movements, nearby objects, and other situational cues. If suspicious activity is detected, the feed is relayed to one of several patriot monitoring stations. These are vast office complexes, staffed 24/7. There, a freedom officer reviews the video feed on several screens and listens to the audio in headphones. The freedom officer then determines an appropriate action, such as contacting the tag-wearer via an audiolink to ask for explanations or to request a better view. The freedom officer can also dispatch an inspector, a police rapid response unit, or a drone to investigate further. In the small fraction of cases where the wearer refuses to desist from the proscribed activity after repeated warnings, an arrest may be made or other suitable penalties imposed. Citizens are not permitted to remove the freedom tag, except while they are in environments that have been outfitted with adequate external sensors (which however includes most indoor environments and motor vehicles). 
The system offers fairly sophisticated privacy protections, such as automated blurring of intimate body parts, and it provides the option to redact identity-revealing data such as faces and name tags and release it only when the information is needed for an investigation. Both AI-enabled mechanisms and human oversight closely monitor all the actions of the freedom officers to prevent abuse."
_
References
Bostrom, N. (2019). The vulnerable world hypothesis. Global Policy, 10(4), 455-476. Nov. 2019. Citations are from Bostrom's website copy: https://nickbostrom.com/papers/vulnerable.pdf Retrieved 24th Mar. 2020.
Brockman, J. (Ed.) (2019). Possible Minds: Twenty-Five Ways of Looking at AI. Penguin. ISBN: 978-0525557999. Searches: https://www.amazon.com/s?k=978-0525557999 https://www.google.com/search?q=isbn+978-0525557999 https://lccn.loc.gov/2018032888
Horesh, T. (2020). The Fascism this Time: and the Global Future of Democracy. Cosmopolis Press, Kindle ed. ISBN: 0578732939. Searches: https://www.amazon.com/s?k=0578732939 https://www.google.com/search?q=isbn+0578732939
Kaczynski, T. J., & Skrbina, D. (2010). Technological Slavery: The Collected Writings of Theodore J. Kaczynski. Feral House. No ISBN. https://archive.org/details/TechnologicalSlaveryTheCollectedWritingsOfTheodoreJ.KaczynskiA.k.a.TheUnabomber/page/n91/mode/2up Retrieved 11 Jan. 2023.
Kurzweil, R. (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Books. ISBN: 0140282025. Searches: https://www.amazon.com/s?k=0140282025 https://www.google.com/search?q=isbn+0140282025 https://lccn.loc.gov/98038804
Retraice (2022/11/13). Re49: China is Not F-ing Around. retraice.com. https://www.retraice.com/segments/re49 Retrieved 15th Nov. 2022.
Russell, B. (1952). The Impact Of Science On Society. George Allen and Unwin Ltd. No ISBN. https://archive.org/details/impactofscienceo0000unse_t0h6 Retrieved 15th Nov. 2022. Searches: https://www.amazon.com/s?k=The+Impact+Of+Science+On+Society+Bertrand+Russell https://www.google.com/search?q=The+Impact+Of+Science+On+Society+Bertrand+Russell https://lccn.loc.gov/52014878
Wiener, N. (1954). The Human Use Of Human Beings: Cybernetics and Society. Da Capo, 2nd ed. ISBN: 978-0306803208. This 1954 ed. is missing `The Voices of Rigidity' chapter of the original 1950 ed. See 1st ed.: https://archive.org/details/humanuseofhumanb00wien/page/n11/mode/2up. See also Brockman (2019) p. xviii. Searches for the 2nd ed.: https://www.amazon.com/s?k=9780306803208 https://www.google.com/search?q=isbn+9780306803208 https://lccn.loc.gov/87037102
Footnotes
^1 The following are excerpts from the 1950 edition, within the later-removed chapter Voices of Rigidity. See References for a hyperlink.
^2 https://en.wikipedia.org/wiki/Fifth_Freedom
^3 Previously quoted, in part, in Re49 (Retraice (2022/11/13)).
^4 Previously quoted in Re49 (Retraice (2022/11/13)).
^5 Also quoted in Kurzweil (1999) pp. 179-180.
Re113: Uncertainty, Fear and Consent (Technological Danger, Part 1)
retraice.com
Beliefs, and the feelings they cause, determine what chances we take; but possibilities don't care about our beliefs. A prediction about safety, security and freedom; decisions about two problems of life and the problem of death; uncertainty, history, genes and survival machines; technology to control the environment of technology; beliefs and feelings; taking chances; prerequisites for action; imagining possibilities; beliefs that do or don't lead to consent; policing, governance and motivations.
Air date: Wednesday, 11th Jan. 2023, 10:00 PM Eastern/US.
Prediction: freedom is going to decrease
The freedom-security-safety tradeoff will continue to shift toward safety and security.
Over the next ten years, 2023-2032, you'll continue to be asked, told, and nudged into giving up freedom in exchange for safety (which is about unintentional danger), in addition to security (which is about intentional danger).^1
(Side note: We have no particular leaning, one way or another, about whether this will be a good or bad thing overall. Frame it one way, and we yearn for freedom; frame it another way, and we crave protection from doom.)
For more on this, consider:
o Wiener (1954);
o Russell (1952);
o Dyson (1997), Dyson (2020);
o Butler (1863);
o Kurzweil (1999);
o Kaczynski & Skrbina (2010);
o Bostrom (2011), Bostrom (2019).
Decisions: two problems of life and the problem of death
First introduced in Re27 (Retraice (2022/10/23)) and integrated in Re31 (Retraice (2022/10/27)).
Two problems of life:
1. To change the world?
2. To change oneself (that part of the world)?
Problem of death:
1. Dead things rarely become alive, whereas alive things regularly become dead. What to do?
Uncertainty
We just don't know much about the future, but we talk and write within the confines of our memories and instincts.
We know the Earth-5k well via written history, and our bodies `know', via genes, the Earth-2bya, about the time that replication and biology started. But the parts of our bodies that know it (genes, mechanisms shared with other animals) are what would reliably survive, not us. Most of our genes could survive in other survival machines, because we share so much DNA with other creatures.^2
But there is hope in controlling the environment to protect ourselves (vital technology), though we also like to enjoy ourselves (other technology). There is also irony in it, to the extent that technology itself is the force from which we may need to be protected.
Beliefs and feelings
* a cure, hope;
* no cure, fear;
* a spaceship, excitement;
* home is the same, longing;
* home is not the same, sadness;
* she loves me, happiness;
* she hates me, misery;
* she picks her nose, disgust.
Chances
Even getting out of bed--or not--is somewhat risky: undoubtedly some human somewhere has died by getting out of bed and falling; but people in hospitals have to get out of bed to avoid skin and motor problems.
We do or don't get out of bed based on instincts and beliefs.
Side note: von Mises' three prerequisites for human action:^3
1. Uneasiness (with the present);
2. An image (of a desirable future);
3. The belief (expectation) that action has the power to yield the image.
(Side note: technology in the form of AI is becoming more necessary to achieve desirable futures, because enough humans have been picking low-hanging fruit for enough time that most of the fruit is now high-hanging, where we can't reach without AI.)
Possibilities
* radically good future because of technology (cure for everything);
* radically bad future because of technology (synthetic plague);
* radically good future because of humans (doctors invent cure);
* radically bad future because of humans (doctors invent synthetic plague).
The important point is to remember the Venn diagram: there is a large space of possibilities, within which a small dot is what any individual human can imagine.
If you believe x, do you consent to y?
* no one has privacy, privacy invasion;
* entity e is not malicious, open interaction with entity e;
* VWH (the vulnerable world hypothesis), global police state.
"VWH: If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition."^4
The "semi-anarchic default condition":
1. limited capacity for preventive policing;
2. limited capacity for global governance;
3. diverse motivations: "There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level) - in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (`the apocalyptic residual') who would act in ways that destroy civilization even at high cost to themselves."^5
_
References
Bostrom, N. (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, 10, 44-79. Citations are from Bostrom's website copy: https://www.nickbostrom.com/information-hazards.pdf Retrieved 9th Sep. 2020.
Bostrom, N. (2019). The Vulnerable World Hypothesis. Global Policy, 10(4), 455-476. Nov. 2019. Citations are from Bostrom's website copy: https://nickbostrom.com/papers/vulnerable.pdf Retrieved 24th Mar. 2020.
Brockman, J. (Ed.) (2019). Possible Minds: Twenty-Five Ways of Looking at AI. Penguin. ISBN: 978-0525557999. Searches: https://www.amazon.com/s?k=978-0525557999 https://www.google.com/search?q=isbn+978-0525557999 https://lccn.loc.gov/2018032888
Butler, S. (1863). Darwin among the machines. The Press (Canterbury, New Zealand). Reprinted in Butler et al. (1923).
Butler, S., Jones, H., & Bartholomew, A. (1923). The Shrewsbury Edition of the Works of Samuel Butler Vol. 1. J. Cape. No ISBN. https://books.google.com/books?id=B-LQAAAAMAAJ Retrieved 27th Oct. 2020.
Dawkins, R. (2016). The Selfish Gene. Oxford, 40th anniv. ed. ISBN: 978-0198788607. Searches: https://www.amazon.com/s?k=9780198788607 https://www.google.com/search?q=isbn+9780198788607 https://lccn.loc.gov/2016933210
Dyson, G. (2020). Analogia: The Emergence of Technology Beyond Programmable Control. Farrar, Straus and Giroux. ISBN: 978-0374104863. Searches: https://www.amazon.com/s?k=9780374104863 https://www.google.com/search?q=isbn+9780374104863 https://catalog.loc.gov/vwebv/search?searchArg=9780374104863
Dyson, G. B. (1997). Darwin Among The Machines: The Evolution Of Global Intelligence. Basic Books. ISBN: 978-0465031627. Searches: https://www.amazon.com/s?k=978-0465031627 https://www.google.com/search?q=isbn+978-0465031627 https://lccn.loc.gov/2012943208
Kaczynski, T. J., & Skrbina, D. (2010). Technological Slavery: The Collected Writings of Theodore J. Kaczynski. Feral House. No ISBN. https://archive.org/details/TechnologicalSlaveryTheCollectedWritingsOfTheodoreJ.KaczynskiA.k.a.TheUnabomber/page/n91/mode/2up Retrieved 11 Jan. 2023.
Koch, C. G. (2007). The Science of Success. Wiley. ISBN: 978-0470139882. Searches: https://www.amazon.com/s?k=9780470139882 https://www.google.com/search?q=isbn+9780470139882 https://lccn.loc.gov/2007295977
Kurzweil, R. (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Books. ISBN: 0140282025. Searches: https://www.amazon.com/s?k=0140282025 https://www.google.com/search?q=isbn+0140282025 https://lccn.loc.gov/98038804
Retraice (2022/10/23). Re27: Now That's a World Model - WM4. retraice.com. https://www.retraice.com/segments/re27 Retrieved 24th Oct. 2022.
Retraice (2022/10/27). Re31: What's Happening That Matters - WM5. retraice.com. https://www.retraice.com/segments/re31 Retrieved 28th Oct. 2022.
Retraice (2022/11/27). Re63: Seventeen Reasons to Learn AI. retraice.com. https://www.retraice.com/segments/re63 Retrieved Nov. 2022.
Russell, B. (1952). The Impact Of Science On Society. George Allen and Unwin Ltd. No ISBN. https://archive.org/details/impactofscienceo0000unse_t0h6 Retrieved 15th Nov. 2022. Searches: https://www.amazon.com/s?k=The+Impact+Of+Science+On+Society+Bertrand+Russell https://www.google.com/search?q=The+Impact+Of+Science+On+Society+Bertrand+Russell https://lccn.loc.gov/52014878
Schneier, B. (2003). Beyond Fear: Thinking Sensibly About Security in an Uncertain World. Copernicus Books. ISBN: 0387026207. Searches: https://www.amazon.com/s?k=0387026207 https://www.google.com/search?q=isbn+0387026207 https://lccn.loc.gov/2003051488 Similar edition available at: https://archive.org/details/beyondfearthinki00schn_0
von Mises, L. (1949). Human Action: A Treatise on Economics. Ludwig von Mises Institute, 2010 reprint ed. ISBN: 978-1610161459. Searches: https://www.amazon.com/s?k=9781610161459 https://www.google.com/search?q=isbn+9781610161459 https://lccn.loc.gov/50002445
Wiener, N. (1954). The Human Use Of Human Beings: Cybernetics and Society. Da Capo, 2nd ed. ISBN: 978-0306803208. This 1954 ed. is missing `The Voices of Rigidity' chapter of the original 1950 ed. See 1st ed.: https://archive.org/details/humanuseofhumanb00wien/page/n11/mode/2up. See also Brockman (2019) p. xviii. Searches for the 2nd ed.: https://www.amazon.com/s?k=9780306803208 https://www.google.com/search?q=isbn+9780306803208 https://lccn.loc.gov/87037102
Footnotes
^1 Schneier (2003) pp. 12, 52.
^2 On creatures as gene (replicator) `survival machines', see Dawkins (2016) pp. 24-25, 30.
^3 von Mises (1949) pp. 13-14. See also Koch (2007) p. 144. See also Retraice (2022/11/27).
^4 Bostrom (2019) p. 457.
^5 Bostrom (2019) pp. 457-458.
Re112: The Attention Hazard and The Attention (Distraction) Economy
retraice.com
Drawing attention to dangerous information can increase risk, but the attention economy tends to draw attention toward amusement. Information hazards; formats include data, idea, attention, template, `signaling' and `evocation'; increasing the number of information locations; adversaries, agents, search, heuristics; the dilemma of attention; suppressing secrets; the Streisand effect; the attention economy as elite `solution'; Liu's `wall facers'.
Air date: Tuesday, 10th Jan. 2023, 10:00 PM Eastern/US.
Attention hazard of information
Bostrom (2011): "Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm."^1
Attention is one format (or `mode') of information transfer:^2
"Attention hazard: The mere drawing of attention to some particularly potent or relevant ideas or data increases risk, even when these ideas or data are already `known'."^3
This increase occurs because `attention' physically increases the number of locations where the hazardous data or idea is instantiated.
Adversaries and agents
"Because there are countless avenues for doing harm, an adversary faces a vast search task in finding out which avenue is most likely to achieve his goals. Drawing the adversary's attention to a subset of especially potent avenues can greatly facilitate the search. For example, if we focus our concern and our discourse on the challenge of defending against viral attacks, this may signal to an adversary that viral weapons--as distinct from, say, conventional explosives or chemical weapons--constitute an especially promising domain in which to search for destructive applications. The better we manage to focus our defensive deliberations on our greatest vulnerabilities, the more useful our conclusions may be to a potential adversary."^4
Consider the parallels in Russell & Norvig (2020): * `adversarial search and games' (chpt. 5); * `intelligent agents' (chpt. 2); * `solving problems by searching' (chpt. 3); * drawing attention can facilitate search: heuristics (sections 3.5, 3.6).
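Bostrom's point that drawing attention "can greatly facilitate the search" has a direct analogue in AIMA's heuristics. A toy comparison (our sketch, not from the episode or the book) of uninformed vs. heuristic-guided search on an open grid, where the Manhattan-distance heuristic plays the role of `attention' focused on the goal:

```python
from collections import deque
import heapq

def bfs_expansions(start, goal, passable):
    """Uninformed search: expand nodes in breadth-first order,
    counting how many nodes get expanded before the goal is found."""
    frontier, seen, expanded = deque([start]), {start}, 0
    while frontier:
        node = frontier.popleft()
        expanded += 1
        if node == goal:
            return expanded
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in passable and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return expanded

def greedy_expansions(start, goal, passable):
    """Informed (greedy best-first) search: always expand the node that
    looks closest to the goal by the Manhattan-distance heuristic."""
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    frontier, seen, expanded = [(h(start), start)], {start}, 0
    while frontier:
        _, node = heapq.heappop(frontier)
        expanded += 1
        if node == goal:
            return expanded
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in passable and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt))
    return expanded

grid = {(x, y) for x in range(20) for y in range(20)}  # open 20x20 grid
blind = bfs_expansions((0, 0), (19, 19), grid)    # expands nearly all 400 cells
guided = greedy_expansions((0, 0), (19, 19), grid)  # marches straight to the goal
```

The heuristic-guided searcher reaches the goal after expanding roughly a tenth of the nodes the blind one does; `attention' to a promising subset of avenues is exactly this kind of pruning, for good or ill.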
The dilemma: We focus on risk, and also lead adversary-agents to our vulnerabilities.
Cf. the `vulnerable world hypothesis'^5 on the policy implications of unrestrained technological innovation given the unknown risk of self-destructing innovators.
"Still, one likes to believe that, on balance, investigations into existential risks and most other risk areas will tend to reduce rather than increase the risks of their subject matter."^6
Secrets and suppression
"Clumsy attempts to suppress discussion often backfire. An adversary who discovers an attempt to conceal an idea may infer that the idea could be of great value. Secrets have a special allure."^7
https://en.wikipedia.org/wiki/Streisand_effect: "[T]he way attempts to hide, remove, or censor information can lead to the unintended consequence of increasing awareness of that information."
The attention (distraction) economy
Might the attention economy, one day or even already, be a `solution' (an elite solution) to the attention hazard? Would it work against AI? Or buy us time? What about `wall facers'?^8
Cf. Re30, Retraice (2022/10/26), on things being done.
References
Bostrom, N. (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, 10, 44-79. Citations are from Bostrom's website copy: https://www.nickbostrom.com/information-hazards.pdf Retrieved 9th Sep. 2020.
Bostrom, N. (2019). The Vulnerable World Hypothesis. Global Policy, 10(4), 455-476. Nov. 2019. Citations are from Bostrom's website copy: https://nickbostrom.com/papers/vulnerable.pdf Retrieved 24th Mar. 2020.
Liu, C. (2016). The Dark Forest. Tor Books. ISBN: 978-0765386694. Searches: https://www.amazon.com/s?k=9780765386694 https://www.google.com/search?q=isbn+9780765386694 https://lccn.loc.gov/2015016174
Retraice (2022/10/26). Re30: AI Progress and Surrender. retraice.com. https://www.retraice.com/segments/re30 Retrieved 27th Oct. 2022.
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498
Footnotes
^1 Bostrom (2011) p. 2.
^2 The others he distinguishes are: data, idea, template, `signaling' and `evocation'.
^3 Bostrom (2011) p. 3.
^4 Bostrom (2011) p. 3.
^5 Bostrom (2019).
^6 Bostrom (2011) p. 4.
^7 Bostrom (2011) p. 3.
^8 Liu (2016).
Re111: AI and the Gorilla Problem
retraice.com
Russell and Norvig say it's natural to worry that AI will destroy us, and that the solution is good design that preserves our control. Our unlucky evolutionary siblings, the gorillas; humans the next gorillas; giving up the benefits of AI; the standard model and the human compatible model; design implications of human compatibility; the difficulty of human preferences.
Air date: Monday, 9th Jan. 2023, 10:00 PM Eastern/US.
The gorilla problem
Added to Re109 notes after live:
"the gorilla problem: about seven million years ago, a now-extinct primate evolved, with one branch leading to gorillas and one to humans. Today, the gorillas are not too happy about the human branch; they have essentially no control over their future. If this is the result of success in creating superhuman AI--that humans cede control over their future--then perhaps we should stop work on AI, and, as a corollary, give up the benefits it might bring. This is the essence of Turing's warning: it is not obvious that we can control machines that are more intelligent than us."^1
We might add that there are worse fates than death and zoos.
Most of the book, they say, reflects the majority of work done in AI to date--within `the standard model', i.e. AI systems are `good' when they do what they're told, which is a problem because `telling' preferences is easy to get wrong. (p. 4)
Solution: uncertainty in the purpose (the `human compatible' model^2), which has design implications (p. 34): * chpt. 16: a machine's incentive to allow shut-off follows from uncertainty about the human objective; * chpt. 18: assistance games are the mathematics of humans and machines working together; * chpt. 22: inverse reinforcement learning is how machines can learn about human preferences by observation of their choices; * chpt. 27: problem 1 of N, our choices depend on preferences that are hard to invert; problem 2 of N, preferences vary by individual and over time.
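The chpt. 16 claim, that uncertainty about the human objective gives the machine an incentive to allow shut-off, can be illustrated with a simplified numeric sketch (our toy model, not Russell & Norvig's formal off-switch game): a machine uncertain about the human's utility U for its proposed action does at least as well, in expectation, by deferring to a human who knows U and can switch it off.

```python
import random

random.seed(0)

# The machine's belief about the human's true utility U for its proposed
# action: uncertain, and the action could turn out good or bad.
samples = [random.gauss(0.5, 2.0) for _ in range(100_000)]

# Acting immediately yields U. Deferring (leaving the off-switch usable)
# yields max(U, 0): the human, who knows U, switches the machine off
# exactly when U < 0.
act = sum(samples) / len(samples)
defer = sum(max(u, 0) for u in samples) / len(samples)
```

Since E[max(U, 0)] >= E[U] for any belief, allowing shut-off never hurts the machine in expectation under this (idealized, rational-human) model, which is the intuition behind the incentive claim.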
The human problem
But how do we ensure that AI engineers don't use the dangerous standard model? And if AI becomes easier and easier to use, as technology tends to do, how do we ensure that no one uses the standard model? How do we ensure that no one does any particular thing?
The `human compatible' model indicates that the `artificial flight' version of AI (p. 2), which is what we want, is possible. It does not indicate that it is probable. And even to make it probable would still not make the standard model improbable. Nuclear power plants don't make nuclear weapons' use less probable. This is the more general problem taken up by Bostrom (2011) and Bostrom (2019).
References
Bostrom, N. (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, 10, 44-79. Citations are from Bostrom's website copy: https://www.nickbostrom.com/information-hazards.pdf Retrieved 9th Sep. 2020.
Bostrom, N. (2019). The Vulnerable World Hypothesis. Global Policy, 10(4), 455-476. Nov. 2019. Citations are from Bostrom's website copy: https://nickbostrom.com/papers/vulnerable.pdf Retrieved 24th Mar. 2020.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. ISBN: 978-0525558613. Searches: https://www.amazon.com/s?k=978-0525558613 https://www.google.com/search?q=isbn+978-0525558613 https://lccn.loc.gov/2019029688
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498
Footnotes
^1 Russell & Norvig (2020) p. 33.
^2 Russell (2019).
Re110: TikTok for Addicting the World's Kids
retraice.com
Tristan Harris's analysis of China's TikTok vs. the exported version. Tristan Harris on TikTok; spinach TikTok for Chinese kids, opium for everyone else; the Opium Wars and the `Century of Humiliation'; TikTok content and time limits for Chinese kids; Netflix on the attention economy vs. sleep; Russia and China trying to radicalize U.S. veterans via social media; war and civil war.
Air date: Sunday, 8th Jan. 2023, 10:00 PM Eastern/US.
This is a follow-up to Re109, Retraice (2023/01/07), where we described TikTok as a tool for Chinese spying. It's worse than that.
Tristan Harris is co-founder of the Center for Humane Technology, worked as a design ethicist at Google, and studied computer science at Stanford.^1
Tristan Harris on 60 Minutes, 2022:
"It's almost like [Chinese company Bytedance] recognize[s] that technology [is] influencing kids' development, and [so] they make their domestic version a spinach TikTok, while they ship the opium version to the rest of the world."^2
Cf. Re48, Retraice (2022/11/12), on the Opium Wars and the `century of humiliation [of China]', a Chinese term.
TikTok in China, if you're under 14 years old:^3 * science experiments * museum exhibits * patriotism * educational content * limited to 40min per day * mandatory 5 sec delay now and then * opening and closing hours
Harris on Joe Rogan, 2021
"It's like Xi saw The Social Dilemma [and so enacted changes to protect only China's kids]."
On the attention economy more broadly: "Even Netflix said their biggest competitor is sleep, because they're all competing for attention."^4
In the same episode, Harris says Russia and China try to radicalize U.S. veterans' groups on social media, to increase the likelihood of such tactically trained people joining or starting a civil war. Cf. Re17, Retraice (2022/03/07), on both war with China and U.S. civil war.
References
Retraice (2022/03/07). Re17: Hypotheses to Eleven. retraice.com. https://www.retraice.com/segments/re17 Retrieved 17th Mar. 2022.
Retraice (2022/11/12). Re48: From Drugs to Mao to Money. retraice.com. https://www.retraice.com/segments/re48 Retrieved 14th Nov. 2022.
Retraice (2023/01/07). Re109: TikTok (app), Tik-Tok (novel), and Low-Power Mode (Day 7, AIMA4e Chpt. 7). retraice.com. https://www.retraice.com/segments/re109 Retrieved 8th Jan. 2023.
Footnotes
^1 https://en.wikipedia.org/wiki/Tristan_Harris.
^2 TikTok in China versus the United States -- 60 Minutes, Nov. 8, 2022. Available on YouTube: https://www.youtube.com/watch?v=0j0xzuh-6rY
^3 Some of these items and the following quotes are from Tristan Harris on Joe Rogan #1736, 2021. Clip available at: What China's Crackdown on Algorithm's Means for the US, Nov. 18, 2021.
^4 https://youtu.be/im4O2sW3FiY?t=210
Re109: TikTok (app), Tik-Tok (novel), and Low-Power Mode (Day 7, AIMA4e Chpt. 7)
retraice.com
An observation of AI in action (TikTok), a decision (Low-Power Mode), and a coincidence (Tik-Tok). TikTok as addictive spying tool; Tik-Tok, the novel; changes in technology vs. lack of changes in human wants and needs; creeping totalitarianism, illiberty, war, climate change, Artilect War, superintelligence; the gorilla problem; making a living, making a difference; AIMA4e, Retraice, audience; low-power mode.
Air date: Saturday, 7th Jan. 2023, 10:00 PM Eastern/US.
Prediction: default doom
Consider TikTok (the app), built on AI, ultimately controlled by the Chinese Communist Party,^1 on which millions of Americans have been made addicted to pure amusement, and Tik-Tok (the novel), yet another warning about the bleakness of a robot's would-be life, and the robot's power to respond.
It seems the ever-increasing power of technology is not being tracked by any obvious change in human desires.^2 If so, it's reasonable to be pessimistic and expect that worse forms of previous bad things will happen because stronger technology makes them possible:^3 * Creeping totalitarianism, illiberty: See, for example: Strittmatter (2018); Andersen (2020). * Normal war: Add, for example, `slaughterbots'^4 to the otherwise familiar current methods of war. * Climate change: The generalized doom scenario is that we can't adapt quickly enough to the changes we're causing, by use of technologies, in the environment (changes that go beyond just average temperatures)--see H6 of the hypotheses in Re17, Retraice (2022/03/07). * Artilect War: A `gigadeath' conflict between two human groups who anticipate AI surpassing human abilities. One group is in favor (cosmists), the other opposed (terrans). de Garis (2005). * Superintelligence: Bostrom (2014). I.e. super-human AI with its own purposes, causing what Russell & Norvig (2020) call "the gorilla problem: about seven million years ago, a now-extinct primate evolved, with one branch leading to gorillas and one to humans. Today, the gorillas are not too happy about the human branch; they have essentially no control over their future. If this is the result of success in creating superhuman AI--that humans cede control over their future--then perhaps we should stop work on AI, and, as a corollary, give up the benefits it might bring. This is the essence of Turing's warning: it is not obvious that we can control machines that are more intelligent than us."^5 We might add that there are worse fates than death and zoos.
Preferences: competing goals
* making a living; * making a difference--to us, working to decrease the likelihood of the above `doom' scenarios.^6
Retraice was meant to make a living and a difference. It's doing neither, and only has hope of doing one (difference).
Two things are obvious at this point:
1. Continuing with Russell & Norvig (2020), investing even more time daily, is more likely to make a difference and a living.
2. If Retraice has an audience out there, we have no way of finding it--and it's much smaller than we thought it would be.
It also seems clear that completely stopping Retraice is wrong, because we like doing it. And it still has a chance of making a difference, given enough time and luck.
Decision: low-power mode
The new Retraice plan: * Time on AIMA4e: more; * Time on podcast: less (something like changing from daily `podcast' to short daily `transmission'); * Money on podcast: less (the equivalent of keeping one light bulb on, the bare minimum in costs and expenses).
References
Andersen, R. (2020). The panopticon is already here. The Atlantic. Sep. 2020. https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/ Retrieved 8th Nov. 2022.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition, 2016. ISBN: 978-0198739838. Searches: https://www.amazon.com/s?k=978-0198739838 https://www.google.com/search?q=isbn+978-0198739838 https://lccn.loc.gov/2015956648
Bostrom, N., & Cirkovic, M. M. (Eds.) (2008). Global Catastrophic Risks. Oxford University Press. ISBN: 978-0199606504. Searches: https://www.amazon.com/s?k=978-0199606504 https://www.google.com/search?q=isbn+978-0199606504 https://lccn.loc.gov/2008006539
de Garis, H. (2005). The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications. ISBN: 0882801546. Searches: https://www.amazon.com/s?k=0882801546 https://www.google.com/search?q=isbn+0882801546
Durant, W., & Durant, A. (1968). The Lessons of History. Simon and Schuster. No ISBN. Searches: https://www.amazon.com/s?k=lessons+of+history+durant https://www.google.com/search?q=lessons+of+history+durant https://lccn.loc.gov/68019949
Retraice (2022/03/07). Re17: Hypotheses to Eleven. retraice.com. https://www.retraice.com/segments/re17 Retrieved 17th Mar. 2022.
Retraice (2022/12/31). Re102: AI For What. retraice.com. https://www.retraice.com/segments/re102 Retrieved 1st Jan. 2023.
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498
Simler, K., & Hanson, R. (2018). The Elephant in the Brain: Hidden Motives in Everyday Life. Oxford University Press. ISBN: 9780190495992. Searches: https://www.amazon.com/s?k=9780190495992 https://www.google.com/search?q=isbn+9780190495992 https://lccn.loc.gov/2017004296
Stephens-Davidowitz, S. (2018). Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Dey Street Books. ISBN: 978-0062390868. Searches: https://www.amazon.com/s?k=9780062390868 https://www.google.com/search?q=isbn+9780062390868 https://lccn.loc.gov/2017297094
Strittmatter, K. (2018). We Have Been Harmonized: Life in China's Surveillance State. Custom House, revised, updated ed. ISBN: 978-0063027305. Published in Germany, 2018. This paperback edition 2021. Searches: https://www.amazon.com/s?k=9780063027305 https://www.google.com/search?q=isbn+9780063027305 https://lccn.loc.gov/2020288922
Footnotes
^1 "TikTok's Chinese parent company, ByteDance, is required by Chinese law to make the app's data available to the Chinese Communist Party (CCP). From the FBI Director to FCC Commissioners to cybersecurity experts, everyone has made clear the risk of TikTok being used to spy on Americans." Rubio, Gallagher Introduce Bipartisan Legislation to Ban TikTok, Dec. 13th, 2022.
^2 Durant & Durant (1968) p. 95: "Since we have admitted no substantial change in man's nature during historic times, all technological advances will have to be written off as merely new means of achieving old ends--the acquisition of goods, the pursuit of one sex by the other (or by the same), the overcoming of competition, the fighting of wars. One of the discouraging discoveries of our disillusioning century is that science is neutral: it will kill for us as readily as it will heal, and will destroy for us more readily than it can build." Cf. Simler & Hanson (2018), Stephens-Davidowitz (2018).
^3 One or more of these might be "a Great Filter--an evolutionary step that is extremely improbable--somewhere on the line between Earth-like planet and colonizing-in-detectable-ways civilization." Bostrom & Cirkovic (2008) pp. 131-132, citing Hanson (1999), which is probably the same as this (1998): The Great Filter - Are We Almost Past It? Robin Hanson, Sep. 15, 1998.
^4 "The video was released onto YouTube by the Future of Life Institute and Stuart Russell [co-author of Russell & Norvig (2020)]." --https://en.wikipedia.org/wiki/Slaughterbots. The video: https://www.youtube.com/watch?v=9CO6M2HsoIA.
^5 Russell & Norvig (2020) p. 33.
^6 We use this abbreviation of our mission statement: "FindTFtMtFBttPaMtCKaMbTests (find the fundamentals that make the future better than the past and make them common knowledge as measured by tests)." Cf. Retraice (2022/12/31).
Re108: Contributors and Controllers (Day 6, AIMA4e Chpt. 6)
retraice.com
A subdivision within the players of the computer control game. CSPs and factored vs. atomic representations; war in Re107; Re69's citation date; the many applications of CSPs, and contributors; contributors and controllers in computer control; nation states, sub-states, companies; politicians, spooks, military, police; shareholders, directors, executives; engineers, professors; hackers.
Air date: Friday, 6th Jan. 2023, 10:00 PM Eastern/US.
Notes on CSPs, war in Re107, and Re69's citation
Russell & Norvig (2020) chpt. 6 is about constraint satisfaction problems, which use factored representations of states in problems instead of atomic representations or structured representations.
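The difference a factored representation makes can be shown in a few lines (our sketch, assuming Python; the Australia map-coloring problem is the book's standard chpt. 6 example, but this code is ours): each state is an assignment of values to variables, and constraints are checked per-variable rather than on opaque atomic states.

```python
# Variables: regions of Australia; domains: three colors;
# constraints: adjacent regions must differ in color.
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def backtrack(assignment):
    """Depth-first search over partial assignments (factored states),
    pruning any value that violates a constraint with a neighbor."""
    if len(assignment) == len(NEIGHBORS):
        return assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            result = backtrack({**assignment, var: color})
            if result:
                return result
    return None  # no consistent value: backtrack

solution = backtrack({})
```

Because the representation is factored, the solver can reject a single bad variable-value pair without enumerating whole atomic states, which is what makes CSP techniques so widely applicable.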
During yesterday's livestream for Re107 (Retraice (2023/01/05)), I failed to mention the Bobby Fischer quote, "chess is war", and subsequent commentary.^1 It's in the Re107 notes.
Also, until yesterday, the citation of Re69 (Retraice (2022/12/03)) incorrectly had Nov. instead of Dec. as the month. It's fixed. You were wondering about that.
Two categories of players in computer control
The many applications of CSPs mentioned in chpt. 6, and the many people cited in the bibliography section who contributed the ideas and systems and work that made the applications (indeed, our modern world) possible, lead to an idea:
[Figure: A tentative Venn diagram of the major types of groups and individuals in computer control. Notice that most groups can be all-one-category or some-in-both; but hackers (black-, white- and grey-hat) can be all-in-either as well as some-in-both. An example of `sub-state' is California, which can exert a lot of influence by the size of its population.]
References
Retraice (2022/12/03). Re69: TABLE-DRIVEN-AGENT Part 5 (ECMP and AIMA4e p. 48). retraice.com. https://www.retraice.com/segments/re69 Retrieved 4th Dec. 2022.
Retraice (2023/01/05). Re107: Three Kinds of AI (Day 5, AIMA4e Chpt. 5). retraice.com. https://www.retraice.com/segments/re107 Retrieved 6th Jan. 2023.
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498
Footnotes
^1 Russell & Norvig (2020) p. 168.
Re107: Three Kinds of AI (Day 5, AIMA4e Chpt. 5)
retraice.com
War, peace and commerce AI in our multi-agent world. Considering multi-agent environments as economies, adversarial games, or merely nondeterministic; commerce, peace, war; Bostrom's `capacity building' and `strategic analysis'; adversary arguments; the computer control game; solving for mobilization and war.
Air date: Thursday, 5th Jan. 2023, 10:00 PM Eastern/US.
At least three stances toward other agents
Russell & Norvig (2020) p. 148:
There are at least three stances we can take towards multi-agent environments. The first stance, appropriate when there are a very large number of agents, is to consider them in the aggregate as an economy, allowing us to do things like predict that increasing demand will cause prices to rise, without having to predict the action of any individual agent.
Second, we could consider adversarial agents as just a part of the environment--a part that makes the environment nondeterministic. But if we model the adversaries in the same way that, say, rain sometimes falls and sometimes doesn't, we miss the idea that our adversaries are actively trying to defeat us, whereas the rain supposedly has no such intention.
The third stance is to explicitly model the adversarial agents with the techniques of adversarial game-tree search. That is what this chapter covers.^1
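The game-tree search the chapter covers can be sketched in a few lines (our sketch, assuming Python; the example tree values follow the book's standard two-ply minimax illustration, but this code is ours):

```python
def minimax(node, maximizing):
    """Exhaustive adversarial game-tree search. Leaves are numeric
    utilities for MAX; internal nodes are lists of child subtrees.
    MAX picks the child of highest value; MIN, modeling an adversary
    actively trying to defeat us, picks the lowest."""
    if isinstance(node, (int, float)):  # terminal state
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny two-ply tree: MAX moves, then MIN, then terminal utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = minimax(tree, True)  # MAX can guarantee 3 against best play
```

This is exactly the third stance: the adversary is modeled explicitly (MIN minimizes our utility) rather than treated as nondeterministic weather.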
War, peace and commerce AI
Russell & Norvig (2020) p. 168:
Bobby Fischer declared that `chess is war,' but chess lacks at least one major characteristic of real wars, namely, partial observability. In the `fog of war,' the whereabouts of enemy units is often unknown until revealed by direct contact. As a result, warfare includes the use of scouts and spies to gather information and the use of concealment and bluff to confuse the enemy.
* Commerce AI: markets and prices, supply and demand; * Peace AI: Bostrom's^2 capacity building (as long as we model the relevant environment as having one agent, and we aren't doing AOSE^3); * War AI: search changes at the point of an `adversary argument'^4; now we're into (the results of) Bostrom's `strategic analysis'.^5
Questions and answers
Laymen's questions that arose (mostly) before Retraice's `technical turn' at Retraice (2022/11/20) are starting to get technical answers: * Why does the CC (computer control) game feel like a (game-theoretic) game and war? (Retraice (2022/11/19)) See Russell & Norvig (2020) chpt. 4, p. 136 on `adversary arguments'; chpt. 5, p. 168 on `chess is war'. * What do players do? (Retraice (2022/11/16); Retraice (2022/11/18)) Solve. (Retraice (2023/01/03)) * What will war-AI and general mobilization look like? (Retraice (2022/12/03)) Solving, and deploying solutions to, hard game-problem-environments^6 (partially observable, multi-agent, nondeterministic, sequential, dynamic, continuous, unknown). (Russell & Norvig (2020) p. 47; cf. Retraice (2023/01/02).)
References
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition, 2016. ISBN: 978-0198739838. Searches: https://www.amazon.com/s?k=978-0198739838 https://www.google.com/search?q=isbn+978-0198739838 https://lccn.loc.gov/2015956648
Retraice (2022/11/16). Re52: Big Questions About AI. retraice.com. https://www.retraice.com/segments/re52 Retrieved 17th Nov. 2022.
Retraice (2022/11/18). Re54: Implications and Endgames. retraice.com. https://www.retraice.com/segments/re54 Retrieved 19th Nov. 2022.
Retraice (2022/11/19). Re55: The Computer Control Game. retraice.com. https://www.retraice.com/segments/re55 Retrieved 20th Nov. 2022.
Retraice (2022/11/20). Re56: A Valuable Brick: `Artificial Intelligence: A Modern Approach' 4th ed. retraice.com. https://www.retraice.com/segments/re56 Retrieved 21st Nov. 2022.
Retraice (2022/12/03). Re69: TABLE-DRIVEN-AGENT Part 5 (ECMP and AIMA4e p. 48). retraice.com. https://www.retraice.com/segments/re69 Retrieved 4th Dec. 2022.
Retraice (2023/01/02). Re104: Agent Functions, Agent Programs, Task Environments (Day 2, AIMA4e Chpt. 2). retraice.com. https://www.retraice.com/segments/re104 Retrieved 3rd Jan. 2023.
Retraice (2023/01/03). Re105: Solve or Be Solved (Day 3, AIMA4e Chpt. 3). retraice.com. https://www.retraice.com/segments/re105 Retrieved 4th Jan. 2023.
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498
Footnotes
^1 Cf. p. 168 on `war', p. 136 on `adversary arguments', p. 169 on `evasive moves'.
^2 Bostrom (2014) pp. 317 ff.
^3 Retraice (2022/12/03).
^4 Russell & Norvig (2020) p. 136.
^5 Bostrom (2014) pp. 317-317.
^6 Russell & Norvig (2020) p. 42.