My guess at Conjecture's vision: triggering a narrative bifurcation, by Alexandre Variengien, published February 6, 2024 on LessWrong.
Context
The first version of this document was originally written in the summer of 2023 for my own sake while interning at Conjecture, and shared internally. It was written as an attempt to pass an ideological Turing test. I am now posting an edited version of it online after positive feedback from Conjecture. I left Conjecture in August, but I think the doc still roughly reflects Conjecture's current strategy. Members of Conjecture left comments on a draft of this post.
Reflecting on this doc 6 months later, I found the exercise of writing it up very useful for updating various parts of my worldview about AGI safety. In particular, it made me think that technical work is much less important than I previously thought. I also found the idea of triggering a narrative bifurcation a very helpful framing for thinking about AI safety efforts in general, beyond the special case of Conjecture.
Post outline
In sections 1-2:
I'll share a set of general models I use to think about societal development, beyond the special case of AGI development. These sections are more philosophical in tone. They describe:
How memes craft default futures that influence the trajectory of a society by defining what "no action" means. (sec. 1)
Applying the model to the case of AGI development, I'll argue that AGI companies are crafting a default trajectory for the world, which I call the AGI orthodoxy, where scaling is the default. (sec. 2)
In sections 3-6:
I'll share elements useful for understanding Conjecture's strategy (note that I don't necessarily agree with all these points). I'll:
Describe my best guess of Conjecture's read of the situation. Their strategy makes sense once we stop thinking of Conjecture as a classical AI safety org and instead see its main goal as triggering a bifurcation in the narratives used to talk about AGI development. By changing narratives, the goal is to provoke a bifurcation toward a world where the safety mindset is at the core of AGI development. (sec. 3-4)
Talk about how the CoEm technical agenda is an AI safety proposal under relaxed constraints: for it to work, the narrative surrounding AGI development must first shift. (sec. 5)
End with criticism of this plan as implemented by Conjecture (sec. 6).
By "Conjecture vision" I don't mean the vision shared by a majority of the employees, instead, I try to point at a blurry concept that is "the global vision that informs the high-level strategic decisions".
Introduction
I have been thinking about the CoEm agenda, and in particular about the broader set of considerations that surround the core technical proposal. I tried to answer the question: "If I were the one deciding to pursue the CoEm agenda and the broader Conjecture vision, what would be my arguments for doing so?"
I found that the technical agenda does not stand alone: together with beliefs about the world and a non-technical agenda (e.g. governance, communication), it fits into a broader vision that I call triggering a narrative bifurcation (see the diagram below).
1 - A world of stories
The sculptor and the statue. From the dawn of time, our ancestors' understanding of the world was shaped by stories. They explained thunder as the sound of a celestial hammer, the world's creation through a multicolored snake, and human emotions as the interplay of four bodily fluids.
These stories weren't just mental constructs; they spurred tangible actions and societal changes. Inspired by narratives, people built temples, waged wars, and altered natural landscapes. In essence, stories, acting through human bodies, manifested themselves in art, architecture, social structures, and environmental impacts.
This interaction g...