The Nonlinear Library: Alignment Forum

AF - There should be more AI safety orgs by Marius Hobbhahn


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There should be more AI safety orgs, published by Marius Hobbhahn on September 21, 2023 on The AI Alignment Forum.
I'm writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other program I'm involved with.
TL;DR: I argue that there should be more AI safety orgs and offer some suggestions for how that could be achieved. The core argument is that there is a lot of unused talent and I don't think existing orgs scale fast enough to absorb it. Thus, more orgs are needed. This post can also serve as a call to action for funders, founders, and researchers to coordinate on starting new orgs.
This piece is certainly biased! I recently started an AI safety org and therefore obviously believe that there is/was a gap to be filled.
If you think I'm missing relevant information about the ecosystem or disagree with my reasoning, please let me know. I genuinely want to understand why the ecosystem acts as it does right now and whether there are good reasons for it that I have missed so far.
Why?
Before making the case, let me point out that under most normal circumstances, it is probably not reasonable to start a new organization. It's much smarter to join an existing organization, get mentorship, and grow the organization from within. Furthermore, building organizations is hard and comes with a lot of risks, e.g. due to a lack of funding or because there isn't enough talent to join early on.
My core argument is that we're very much NOT under normal circumstances and that, conditional on the current landscape and the problem we're facing, we need more AI safety orgs. By that, I primarily mean orgs that can provide full-time employment to contribute to AI safety but I'd also be happy if there were more upskilling programs like SERI MATS, ARENA, MLAB & co.
Talent vs. capacity
Frankly, the level of talent applying to AI safety organizations and getting rejected is too high. We recently started a hiring round, and we estimate that far more candidates meet a reasonable bar than we could hire. I don't want to go into the exact details since the round isn't closed, but from the current applications alone, you could probably start a handful of new orgs.
Many of these people could join top software companies like Google or Meta, or already work at such companies and are looking to transition into AI safety. Apollo is a new organization without a long track record, so I expect the applicant pools for other alignment organizations to be even stronger.
The talent supply is so high that a lot of great people even have to be rejected from SERI MATS, ARENA, MLAB, and other skill-building programs that are supposed to get more people into the field in the first place. Also, when I look at the people who get rejected from existing orgs like Anthropic, OpenAI, DM, Redwood, etc., it really pains me to think that they can't contribute in a sustainable full-time capacity. This seems like a huge waste of talent, and I think it is really unhealthy for the ecosystem, especially given the magnitude and urgency of AI safety.
Some people point to independent research as an alternative. I think independent research is a temporary solution for a small subset of people. It's not very sustainable and has a huge selection bias. Almost no one with a family or with existing work experience is willing to take the risk. In my experience, women also have a disproportionate preference against independent research compared to men, so the gender balance gets even worse than it already is (this is only anecdotal evidence; I have not looked at this in detail). Furthermore, many people just strongly prefer working with others in the less uncertain, more regular environment of an organization, even if that organization is fairly...