Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI-kills-everyone scenarios require robotic infrastructure, but not necessarily nanotech, published by avturchin on April 3, 2023 on LessWrong.
Tl;DR: Nanotech seems like the only option for AI to kill everybody, but we can imagine a prosaic AI killing everyone in three steps: taking over a country, building robotic infrastructure, and releasing biological weapons.
Arguments why nanotech is needed for AI-kills-everyone
The concept of AI as an existential threat, as portrayed by EY, largely hinges on the development of manufacturing nanotechnology: the creation of microscopic self-replicating robots capable of manipulating matter with atomic precision. This is what enables the "copying of a berry." To conquer the world and eliminate all humans, a superintelligent AI must first establish a nanotech infrastructure, potentially by ordering protein-coding DNA sequences if it can calculate the correct sequences.
Criteria of robotic infrastructure
A nanotech infrastructure is essential for a comprehensive AI annihilation due to the following factors:
a) Concealment: AI's actions will remain undetected until the final strike.
b) Rapid elimination: AI will eradicate all humans instantaneously, preventing any interference.
c) Human-independence: Can operate without human involvement.
d) Swift construction: Nanotech can be developed rapidly, taking only a few weeks to synthesize the required DNA strands and a few days for the initial nanobot replication. This grants a significant strategic advantage to the first AGI.
e) Route to superintelligence: Nanotech provides superior computational power compared to chips, enabling the first AI to rapidly ascend to an unparalleled superintelligence level. However, it is assumed that the first AI is already mildly superintelligent, as it can design the initial nanobots.
These factors also represent potential vulnerabilities that our last line of AI safety defense could target. For instance, shutting down DNA synthesis labs or developing our own nanotechnology and detection methods could reduce the efficacy of an AI-nanotech assault.
Few alternative AI infrastructure ideas possess all these characteristics; possible exceptions include:
a) A scenario where AI takeover occurs in a fully robotized world, with every household owning a home robot;
b) A form of biotechnology in which AI can program biological organisms to execute tasks. However, this is essentially a variation of nanotechnology, and AI computations cannot migrate into biological substrates.
Why these criteria?
The necessity for nanotech infrastructure in AI-kills-all situations arises from several factors:
If AI constructs a "conventional" robotic infrastructure, it will be visible and attacked before completion, increasing risks for the AI.
If AI cannot replace all humans, it remains vulnerable: unlike humans, it requires a constant electricity supply, so destroying the electrical grid exposes it to danger.
If AI cannot eradicate everyone instantaneously, humans will have time to retaliate.
If AI does not migrate to a nanotech-based computational substrate operating on independent energy sources, it remains dependent on a few data centers which are susceptible to airstrikes, sabotage, kill-switch codes, and power outages.
If AI does not gain computational and data advantages from nanotechnology, other AIs will soon achieve similar intelligence levels.
However, many people assign low prior probabilities to both nanotechnology and superintelligence, and their combination yields an even lower estimate, explaining much of the skepticism surrounding AI risk.
Therefore, it is reasonable to investigate catastrophic scenarios that do not rely on ideas with low prior probabilities.
No-miracle scenario where AI kills everybody
Imagine a possible world where an AI with an IQ above 1000 is impossible and nanotech doesn't work...