I have been reading Eric Drexler's writing on the future of AI for more than a decade at this point. I love it, but I also think it can be tricky or frustrating.
More than anyone else I know, Eric seems to tap into a deep vision for how the future of technology may work, and having once tuned into this, I find many other perspectives can feel hollow. (This reminds me of how, once I had enough of a feel for how economies work, a lot of science fiction started to feel hollow: the worlds presented made too little sense once I considered what they implied about off-screen variables.)
One cornerstone of Eric's perspective on AI, as I see it, is a deep rejection of anthropomorphism. People considering current AI systems mostly have no difficulty understanding them as technology rather than as persons. But when the discussion moves to superintelligence … well, as Eric puts it:
> Our expectations rest on biological intuitions. Every intelligence we’ve known arose through evolution, where survival was a precondition for everything else—organisms that failed to compete and preserve themselves left no descendants. Self-preservation wasn’t optional—it was the precondition for everything else. We naturally [...]
---
Outline:
Difficulties with Drexler's writing
How to read Drexler
What Drexler covers
  1) Mapping the technological trajectory
  2) Pushing back on anthropomorphism
  3) Advocating for strategic judo
The missing topics
Translation and reinvention
Pieces I'd be especially excited to see explored
---