“Reframing Superintelligence” + LLMs + 4 years, by Eric Drexler, published July 10, 2023 on The AI Alignment Forum.
Background
In January 2019, FHI published “Reframing Superintelligence”, a book-length technical report on prospects for advanced AI. OpenAI published the first paper on GPT-2 a month later. Advances since then have been strange and rapid, and I’d like to revisit the report in light of what we have learned. In brief, I think that the abstract conceptual model of AI development and organization proposed in “Reframing” fits today’s reality quite well, even though LLM-based technologies have diverged far from anything I’d anticipated.
Below, you'll find an abstract of the abstract of the report, followed by a series of section-level mini-summaries with update comments. I’ve omitted sections that are either outside the intended focus of this article or are too broad and forward-looking to summarize.
A significant impetus behind “Reframing Superintelligence” was to challenge a prevailing notion of advanced AI (equating superintelligent-level AI with a superintelligent agent), which has, in my view, been assigned disproportionate weight and skewed the balance of alignment research. The report offers an alternative framework that includes both risks and opportunities that are overlooked by agent-centric perspectives.
Note that this reframing is additive rather than subtractive: My intention is not to disregard agent-focused concerns - their importance is assumed, not debated. Indeed, the AI services model anticipates a world in which dangerous superintelligent agents could emerge with relative ease, and perhaps unavoidably. My aim is to broaden the working ontology of the community to include systems in which superintelligent-level capabilities can take a more accessible, transparent, and manageable form: open agencies rather than unitary agents. This framework highlights different risks and expands the solution-space for familiar problems.
Finally, when I refer to “LLMs”, please read this as encompassing multimodal models (GPT-4!), with considerations that carry over to a wider range of foundation models.
Abstract of the Abstract
“Reframing Superintelligence” reviews the concept of superintelligent AI systems as utility-driven agents and suggests expanding our ontology of superintelligence to include compositions of AI systems that can best be understood through their structures, relationships, development processes, and the services they can provide - services that can include AI research and development itself. This perspective gives rise to the “Comprehensive AI Services” (CAIS) model, which proposes general intelligence as a property of flexible systems of services in which task-focused agents are among the components. The CAIS model envisions AI services expanding toward asymptotically comprehensive superintelligent-level performance, including the service of providing new services in line with human objectives and informed by strong models of human (dis)approval.
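To make the composition idea concrete, here is a minimal, hedged sketch of my own - not code from the report, and all names (ServiceRegistry, develop_service, the task names) are hypothetical illustrations. It shows capability living in bounded, task-focused services, including a service whose task is providing new services:

```python
# Illustrative sketch of the CAIS idea: capability as a composition of
# narrow, task-focused services rather than one unitary agent.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class ServiceRegistry:
    """A flexible system of services; each service performs a bounded task."""
    services: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.services[name] = fn

    def run(self, name: str, task: str) -> str:
        return self.services[name](task)


registry = ServiceRegistry()
# Stand-ins for real task-focused models:
registry.register("summarize", lambda text: text[:60] + "...")
registry.register("translate", lambda text: f"[translated] {text}")


def develop_service(spec: str) -> str:
    """Stand-in for the 'service of providing new services': it deploys a
    new bounded capability into the system. The system gains capability,
    but no component modifies itself."""
    registry.register(spec, lambda text: f"[{spec}] {text}")
    return f"new service '{spec}' deployed"


registry.register("develop", develop_service)

print(registry.run("develop", "sentiment"))          # system-level improvement
print(registry.run("sentiment", "CAIS is a model"))  # the new capability in use
```

The point of the sketch is structural: “general intelligence” here is a property of the registry as a whole, while every component remains a bounded, inspectable service.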
This reframing has broad implications for AI prospects, including AI safety and strategy, practical applications of advanced AI systems, and the fundamental relationship between goals and intelligence. In this context, the emergence of strongly self-modifying agents with superintelligent-level capabilities remains a concern, yet the desirability and potential instrumental value of such agents are greatly diminished.
Section mini-summaries + updates
1. R&D automation provides the most direct path to an intelligence explosion
Self-transforming AI agents have no natural role in recursive improvement. A more direct path would instead involve AI-enabled AI development in which new capabilities are implemented without any system being self-modifying.
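To illustrate that distinction with another hedged sketch of my own (Model, develop, and evaluate are hypothetical names, not anything from the report): capability can grow through a loop in which a fixed development process proposes, evaluates, and deploys successive separate artifacts, with nothing in the loop rewriting itself:

```python
# AI-enabled AI development without self-modification: a fixed "developer"
# process emits successive, distinct model artifacts; each is evaluated and,
# if better, deployed in place of its predecessor.
from dataclasses import dataclass


@dataclass(frozen=True)
class Model:
    version: int
    skill: float  # stand-in for measured capability


def develop(current: Model) -> Model:
    """A fixed development process: proposes a successor as a new artifact."""
    return Model(version=current.version + 1, skill=current.skill * 1.1)


def evaluate(candidate: Model, incumbent: Model) -> bool:
    """Gate deployment on measured improvement over the incumbent."""
    return candidate.skill > incumbent.skill


deployed = Model(version=0, skill=1.0)
for _ in range(3):
    candidate = develop(deployed)
    if evaluate(candidate, deployed):
        deployed = candidate  # each improvement yields a distinct artifact
print(deployed)  # e.g. Model(version=3, skill≈1.331)
```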
Today’s most striking applications of AI to AI development...