
For proper formatting (bold, italics, etc.) and graphics (where applicable) see the PDF version. Copyright: 2020 Retraice, Inc.
Re8: Strange Machines
A survey of the idea that technology is creatures.
Air date: Wednesday, 28th Oct. 2020, 3:30 PM Pacific/US.
1 The Problem

We should call them something else
If machines or artifacts have goals, we should call them creatures, or something else.
High-altitude fruit
This is just a sampling of the available ideas.
We focus on the books and papers because there's plenty of low-hanging fruit already available in electronic media—movies, documentaries, radio, TV, etc. It's hard to add homework to the already-heavy load of electronic production, so these sources don't tend to make the cut.
Simon—the rules are the same
We can look at living systems the way we look at artifacts—as interfaces between inner- and outer-environments.1
Grey Walter's tortoises
Along with biologically engineered organisms, some robots2 might be examples of life forms that did not, strictly speaking, evolve organically.3
Butler—war to the death
Our interests are inseparable from machines, but they're on the verge of reproduction, so we should destroy them.4
Dyson—they're not imaginary
Smart computer programs that seem like creatures are very real.5
Wolfram's simple programs
Simple cellular automata programs produce wild complexity.6
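Wolfram's claim is easy to reproduce. The sketch below (our illustration, not from the episode) runs Rule 30, an elementary cellular automaton in which each cell's next state depends only on its three-cell neighborhood, yet the output from a single live cell looks irregular:

```python
# Elementary cellular automaton, as in Wolfram (2002).
# Rule 30 from a single live cell produces a famously chaotic pattern.

def step(cells, rule=30):
    """Apply an elementary CA rule to one row of 0/1 cells (fixed 0 boundary)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # Encode the three-cell neighborhood as a number 0..7,
        # then read the corresponding bit of the rule number.
        neighborhood = padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2]
        out.append((rule >> neighborhood) & 1)
    return out

def run(width=31, steps=15, rule=30):
    """Evolve from a single live center cell; return all rows."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Printing the rows with '#' for live cells shows the irregular Rule 30 triangle; swapping in rule=90 yields an orderly Sierpinski pattern instead, which is the contrast Wolfram dwells on.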
Yudkowsky on fire alarms
Fire alarms make it socially acceptable to react to a situation that might be dangerous but not obvious. There will be no equivalent mechanism for dangerous AI.7
I. J. Good—take science fiction seriously
If machines can do what humans do, they can improve machines, which is themselves, which means boom.8
'unquestionably'
If the argument focuses on whether it is certain that an intelligence explosion will happen, we're probably missing the point.9
Yudkowsky—smartish stuff
We shouldn't argue too much about whether something is truly intelligent; if it accomplishes dramatic things, our definitions don't matter.10
See Legg and Hutter for more technical work on the definition(s) of intelligence.11
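As a taste of that technical work (our gloss, using their notation): Legg and Hutter's 'universal intelligence' of an agent averages its expected reward over all computable environments, weighting simpler environments more heavily:

```latex
% Legg & Hutter (2007b): universal intelligence of an agent \pi
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ (the length of its shortest program), and V_μ^π is the agent's expected total reward in μ.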
S. Russell and Norvig—operating on their own
AI and robotics are unlike other dangerous technologies because of autonomy; we need to engineer safety, not hope for it.12
Two meanings of 'the singularity'
John von Neumann gives the 'unpredictable' definition, as recollected and paraphrased by Stanislaw Ulam:
"Quite aware that the criteria of value in mathematical work are, to some extent, purely aesthetic, [von Neumann] once expressed an apprehension that the values put on abstract scientific achievement in our present civilization might diminish: 'The interests of humanity may change, the present curiosities in science may cease, and entirely different things may occupy the human mind in the future.' One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."13
Ray Kurzweil gives the upgrade-or-bust definition, in dialogue form.
MOLLY 2004: … But seriously, I'm having trouble keeping up with all of this stuff flying at me as it is. How am I going to keep up with the pace of the Singularity? …
GEORGE 2048: … You'll be able to grasp what I'm up to if that's what you really want.
MOLLY 2004: What, by becoming…
RAY: Enhanced?
MOLLY 2004: Yes, that's what I was trying to say.
GEORGE 2048: Well, if our relationship is to be all that it can be, then it's not a bad idea.14
A moral challenge
If we take for granted that human life and the human species are worth preserving, we have a moral disagreement with those who accept the prospect of machines inheriting the Earth.15
S. Russell—the user's mind
The environment modified by content-selection algorithms (e.g. those used in social media) is the user's mind. The goal is to make the mind more predictable.16
Dyson—worry less about intelligence
We should worry more about self-reproduction, communication and (analog) control than machine intelligence.17
Smallberg—energy sources and replication
When superintelligent machines 'start replicating and looking for an energy source solely under their control', they will have crossed a threshold.18
A digression on search
Perhaps they've crossed a threshold simply by looking for anything. Is 'looking for' the same as searching, or scanning?19
Dietterich—reproduction with autonomy
If there are four steps to an intelligence explosion—machines conducting experiments, discovering new structures, building mechanisms to exploit discoveries, and granting autonomy and resources to the new mechanisms themselves—it's the fourth step that's dangerous.20
2 The work

While these ideas are troubling and fascinating and fun21 to think about, what to do?
Bostrom—deferred gratification
We should stop work on high-minded, laudable long-term goals, for now, in favor of focusing on surviving the advent of superintelligence.22
Our civilization is evidence of capacity
The spectacular achievements of our civilization(s) hint that we can meet the spectacular requirements of AI safety (given our assumptions about the nature of the threat).
Skyscrapers seem taller than they are
On how vertical distances appear longer than equal horizontal ones, see Jackson and Cormack.23
References

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition, 2016. ISBN: 978-0198739838. Searches: https://www.amazon.com/s?k=978-0198739838 https://www.google.com/search?q=isbn+978-0198739838 https://lccn.loc.gov/2015956648
Brockman, J. (Ed.) (2015). What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence. Harper Perennial. ISBN: 978-0062425652. Searches: https://www.amazon.com/s?k=978-0062425652 https://www.google.com/search?q=isbn+978-0062425652 https://lccn.loc.gov/2016303054
Brockman, J. (Ed.) (2019). Possible Minds: Twenty-Five Ways of Looking at AI. Penguin. ISBN: 978-0525557999. Searches: https://www.amazon.com/s?k=978-0525557999 https://www.google.com/search?q=isbn+978-0525557999 https://lccn.loc.gov/2018032888
Butler, S. (1863). Darwin among the machines. The Press (Canterbury, New Zealand). Reprinted in Butler et al. (1923).
Butler, S., Jones, H., & Bartholomew, A. (1923). The Shrewsbury Edition of the Works of Samuel Butler Vol. 1. J. Cape. No ISBN. https://books.google.com/books?id=B-LQAAAAMAAJ Retrieved 27th Oct. 2020.
de Garis, H. (2005). The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications. ISBN: 0882801546. Searches: https://www.amazon.com/s?k=0882801546 https://www.google.com/search?q=isbn+0882801546
Dietterich, T. G. (2015). How to prevent an intelligence explosion. (pp. 380–383). In Brockman (2015).
Dyson, G. (2019). The third law. (pp. 31–40). In Brockman (2019).
Dyson, G. B. (1997). Darwin Among The Machines: The Evolution Of Global Intelligence. Basic Books. ISBN: 978-0465031627. Searches: https://www.amazon.com/s?k=978-0465031627 https://www.google.com/search?q=isbn+978-0465031627 https://lccn.loc.gov/2012943208
Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. Advances in Computers, 6, 31–88. https://exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869 Retrieved 27th Oct. 2020.
Harris, S. (2016). Can we build AI without losing control over it? — Sam Harris. TED. https://youtu.be/8nt3edWLgIg Retrieved 28th Oct. 2020.
Holland, O. (2003). Exploration and high adventure: the legacy of Grey Walter. Phil. Trans. R. Soc. Lond. A, 361, 2085–2121. https://www.researchgate.net/publication/9025611 Retrieved 22nd Nov. 2019. See also: https://www.youtube.com/results?search_query=grey+walter+tortoise+
Jackson, R. E., & Cormack, L. K. (2008). Evolved navigation theory and the environmental vertical illusion. Evolution and Human Behavior, 29, 299–304. https://liberalarts.utexas.edu/cps/_files/cormack-pdf/12Evolved_navigation_theory2009.pdf Retrieved 29th Oct. 2020.
Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin. ISBN: 978-0143037880. Searches: https://www.amazon.com/s?k=978-0143037880 https://www.google.com/search?q=isbn+978-0143037880 https://lccn.loc.gov/2004061231
Legg, S., & Hutter, M. (2007a). A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications, 157, 17–24. June 2007. https://arxiv.org/abs/0706.3639 Retrieved ca. 10 Mar. 2019.
Legg, S., & Hutter, M. (2007b). Universal intelligence: A definition of machine intelligence. Minds & Machines, 17(4), 391–444. December 2007. https://arxiv.org/abs/0712.3329 Retrieved ca. 10 Mar. 2019.
Retraice (2020/09/07). Re1: Three Kinds of Intelligence. retraice.com. https://www.retraice.com/segments/re1 Retrieved 22nd Sep. 2020.
Retraice (2020/09/08). Re2: Tell the People, Tell Foes. retraice.com. https://www.retraice.com/segments/re2 Retrieved 22nd Sep. 2020.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. ISBN: 978-0525558613. Searches: https://www.amazon.com/s?k=978-0525558613 https://www.google.com/search?q=isbn+978-0525558613 https://lccn.loc.gov/2019029688
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498
Simon, H. A. (1996). The Sciences of the Artificial. MIT, 3rd ed. ISBN: 0262691914. Searches: https://www.amazon.com/s?k=0262691914 https://www.google.com/search?q=isbn+0262691914 https://lccn.loc.gov/96012633 Previous editions available at: https://archive.org/search.php?query=The%20sciences%20of%20the%20artificial
Smallberg, G. (2015). No shared theory of mind. (pp. 297–299). In Brockman (2015).
Ulam, S. (1958). John von Neumann 1903-1957. Bull. Amer. Math. Soc., 64, 1–49. https://doi.org/10.1090/S0002-9904-1958-10189-5 Retrieved 29th Oct. 2020.
Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company. ISBN: 0716704633. Also available at: https://archive.org/details/computerpowerhum0000weiz
Wolfram, S. (Ed.) (2002). A New Kind of Science. Wolfram Media, Inc. ISBN: 1579550088. Searches: https://www.amazon.com/s?k=1579550088 https://www.google.com/search?q=isbn+1579550088 https://lccn.loc.gov/2001046603
Yudkowsky, E. (2013). Intelligence explosion microeconomics. Machine Intelligence Research Institute. Technical report 2013-1. https://intelligence.org/files/IEM.pdf Retrieved ca. 9th Dec. 2018.
Yudkowsky, E. (2017). There's no fire alarm for artificial general intelligence. Machine Intelligence Research Institute. 13th Oct. 2017. https://intelligence.org/2017/10/13/fire-alarm/ Retrieved 9th Dec. 2018.
1. Simon (1996) p. 6.
2. During the livestream, Walter was referred to as 'Sir' Walter. This was an error; Walter was not knighted, as far as we're aware.
3. Holland (2003) pp. 2104-2105. Holland is skeptical of the self-awareness interpretation of the tortoises' behavior.
4. Butler (1863) pp. 184-185.
5. Dyson (1997) p. xii.
6. Wolfram (2002) e.g. p. 30.
7. Yudkowsky (2017).
8. Good (1965) p. 33.
9. See Yudkowsky (2013) p. 1 for sources on the discussion.
10. Yudkowsky (2013) p. 9.
11. Legg & Hutter (2007b), Legg & Hutter (2007a).
12. Russell & Norvig (2020) p. 1001.
13. Ulam (1958) p. 5.
14. Kurzweil (2005) p. 31.
15. Russell & Norvig (2020) p. 1005. See also de Garis (2005) on 'cosmists', p. 81 ff.
16. Russell (2019) pp. 8-9.
17. Dyson (2019) p. 40. On control, see also Retraice (2020/09/08) p. 3, and Weizenbaum (1976) pp. 124-126.
18. Smallberg (2015) p. 299.
19. See Retraice (2020/09/07) p. 1 on a loose definition of artificial intelligence: "whatever it is that makes certain machines and computers seem to know things, and to act like they know things." [emphasis added]
20. Dietterich (2015) p. 382.
21. On the 'fun' of death by science fiction, see Harris (2016) ca. 1:00 min ff.
22. Bostrom (2014) p. 315.
23. Jackson & Cormack (2008), especially p. 301 ('results').
By Retraice, Inc.