By David Stephen
There is a recent [September 19, 2025] book review in The Atlantic, The Useful Idiots of AI Doomsaying, which states: "The authors' basic claim is that AI will continue to improve, getting smarter and smarter, until it either achieves superintelligence or designs something that will. Without careful training, they argue, the goals of this supreme being would be incompatible with human life. Their larger argument is that if humans build something that eclipses human intelligence, it will be able to outsmart us however it chooses, for its own self-serving goals. The risks are so grave, the authors argue, that the only solution is a complete shutdown of AI research."
Superintelligence: If Anyone Builds It, Everyone Dies
"This line of thinking takes as a given that intelligence is a discrete, measurable concept, and that increasing it is a matter of resources and processing power. But intelligence doesn't work like that. The human ability to predict and steer situations is not a single, broadly applicable skill or trait - someone may be brilliant in one area and trash in another. Einstein wasn't a great novelist; the chess champion Bobby Fischer was a paranoid conspiracy theorist. We even see this across species: Most humans can do many cognitive tasks that bats can't, but no human can naturally match a bat's ability to hunt for prey by quickly integrating complex echolocation data."
"They give no citation to the scientific literature for this claim, because there isn't a consensus on it. This is just one of many unwarranted logical and factual leaps in If Anyone Builds It. Along the way to such drastic conclusions, Yudkowsky and Soares fail to make an evidence-based scientific case for their claims. Instead, they rely on flat assertions and shaky analogies, leaving massive holes in their logic. The largest of these is the idea of superintelligence itself, which the authors define as "a mind much more capable than any human at almost every sort of steering and prediction problem.""
What is the 'evidence-based scientific case' for human intelligence?
If there is no known science of the mechanism of human intelligence, and no general definition of human intelligence, then it is probably meritless to disdain predictions of risk from an emerging non-biological intelligence.
It is possible to take issue with both extremes, 'AI will solve everything' and 'AI will kill everyone', but to do so without proposing what exactly human intelligence is or how it works is also an 'unwarranted logical and factual leap'. What lies ahead, if AI improves, may remain unknown, but what large language models [LLMs] currently are should frighten those who are seeking out human intelligence.
In the human brain, assuming that a chair is stored as a memory, intelligence is the use of that memory. While knowing things and using them sometimes appear to go together, intelligence can be defined as the use of what is known for expected, desired or advantageous outcomes. Although planning, creativity, innovation and so forth intersect with this definition, intelligence can be broadly assumed to be how knowledge is used.
This means that knowing is one layer, and using that knowledge for outcomes is another. In general, humans are trained both for knowledge and for intelligence, since it is possible to have knowledge but not the intelligence for it. Across organisms, survival is mostly a matter of using what is known. Avoiding predators, catching prey, and other necessities of life involve knowing and using. Bats can naturally use 'echolocation data' while humans cannot; humans can use complex languages while bats cannot. [Knowing and using.] While knowing can be basic sensory interpretation in memory, using can be aligned with ability. So, there are abilities to use what is known. For humans, the abilities to use memory, including with language, exceed those of other organisms. LLMs now have a wider memory-use capability, with data, algorithms and compute.
...