Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There Should Be More Alignment-Driven Startups!, published by Matthew "Vaniver" Gray on May 31, 2024 on The AI Alignment Forum.
Many thanks to Brandon Goldman, David Langer, Samuel Härgestam, Eric Ho, Diogo de Lucena, and Marc Carauleanu, for their support and feedback throughout.
Most alignment researchers we sampled in our recent survey think we are currently not on track to succeed with alignment, meaning that humanity may well be on track to lose control of our future.
In order to improve our chances of surviving and thriving, we should apply our most powerful coordination methods towards solving the alignment problem. We think that startups are an underappreciated part of humanity's toolkit, and having more AI-safety-focused startups would increase the probability of solving alignment.
That said, we also appreciate that AI safety is highly complicated by nature[1] and therefore calls for a more nuanced approach than simple pro-startup boosterism. In the rest of this post, we'll flesh out what we mean in more detail, hopefully address major objections, and then conclude with some pro-startup boosterism.
Expand the alignment ecosystem with startups
We applaud and appreciate current efforts to align AI. We could and should have many more. Founding more startups will develop human and organizational capital and unlock access to financial capital not currently available to alignment efforts.
"The much-maligned capitalism is actually probably the greatest incentive alignment success in human history" - Insights from Modern Principles of Economics
The alignment ecosystem is short on entrepreneurial thinking and behavior. The few entrepreneurs among us commiserate over this whenever we can.
We predict that many people interested in alignment would do more to increase P(win) if they started thinking of themselves as problem-solvers specializing in a particular sub-problem first, deploying whatever approaches are appropriate to solve that smaller problem. Note this doesn't preclude scaling ambitiously and solving bigger problems later on.[2]
Running a company that targets a particular niche of the giant problem seems like one of the best ways to make this transition, since it unlocks a wealth of best practices that can simply be copied. For example, we've seen people in this space raise too little, too late, and as a result spend unnecessary time fundraising instead of doing work that advances alignment.
We think this is often the result of not following a more standard playbook on how and when to raise. Founders can follow that playbook without compromising their integrity, and without being afraid to embrace the fact that they are running a startup rather than a more traditional (non-profit) AI safety org.[3]
We think creating more safety-driven startups will increase capital availability both in the short term (more funding may be available for for-profit investment than for non-profit donations) and in the long term (as those companies succeed, they will have money to invest and will produce technically skilled, safety-motivated employees with the resources to become investors or donors for other projects).
The creation of teams that have successfully completed projects together, i.e. organizational capital, will also better prepare the ecosystem to respond to new challenges. The organic structures formed by market systems allow people and resources to be allocated more dynamically and openly to problems as they arise.
We also think it is possible that alignment research will benefit from, and perhaps even require, significant resources that existing orgs may be too hesitant to spend. OpenAI, for example, never allocated the resources it promised to its safety team, and it has received pressure from corporate partners to be more risk-averse about investing in R&D after ...