Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA seeks co-founder for AI safety group support spin-off, published by Agustín Covarrubias on April 8, 2024 on The Effective Altruism Forum.
Summary
CEA is currently inviting expressions of interest for co-founding a promising new project focused on providing non-monetary support to AI safety groups. We're also receiving recommendations for the role.
CEA is helping incubate this project and plans to spin it off into an independent organization (or a fiscally sponsored project) during the next four months.
The successful candidate will join the first co-founder hired for this project, Agustín Covarrubias, who has been involved in planning this spin-off for the last couple of months.
The commitment of a second co-founder is conditional on the project receiving funding, but work done until late July will be compensated by CEA (see details below). Note that, apart from the initial contracting period, this role is not within CEA.
The deadline for expressions of interest and recommendations is April 19.
Background
Currently, CEA supports AI safety university groups through programs like the Organizer Support Program (OSP). For the last two semesters, OSP has piloted connecting AI safety organizers with experienced mentors to guide them. CEA has also supported these organizers through events for community builders, like the recent University Group Organiser Summit, where participants meet one another, discuss strategic considerations, skill up, and boost their motivation.
Even though these projects have largely accomplished CEA's goals, AI safety groups could benefit from more ambitious, specialized, and consistent support. We are leaving a lot of impact on the table.
Furthermore, until now, AI safety groups' approach to community building has been primarily modelled after EA groups. While EA groups serve as a valuable model, we've seen early evidence that not all of their approaches and insights transfer perfectly. This means there's an opportunity to experiment with alternative community-building models and test new approaches to supporting groups.
For these reasons, CEA hired Agustín Covarrubias to incubate a new project. The project will encompass the support CEA is already giving AI safety groups, plus provide the opportunity to explore new ways to help these groups grow and prosper. The result will be a CEA spin-off that operates as a standalone organization or a fiscally sponsored project.
Since AI safety groups are not inherently linked to EA, we think spinning out also allows this project to broaden its target audience (of organizers, for example).
We're now looking to find a co-founder for this new entity and invite expressions of interest and recommendations. We think this is a compelling opportunity for people passionate about AI safety and community building to address a critical need in this space.
Our vision
We think growing and strengthening the ecosystem of AI safety groups is among the most promising field-building efforts. These groups have the potential to evolve into thriving talent and resource hubs, creating local momentum for AI safety, helping people move into high-impact careers, and helping researchers, technologists, and even advocates collaborate in pursuit of a shared mission.
We also think some of these groups have a competitive advantage in leveraging local ecosystems; for example, we've seen promising results from groups interacting with faculty, research labs, and policy groups.
But this won't happen by default. It will take careful, proactive nurturing of these groups' potential, and we're ready to fill this important gap. Our vision for the new organization is to:
Provide scalable but targeted support to existing and future AI safety groups.
Build the infrastructure needed to grow the ecosystem in less than six months.
Scale proportionally to accommodate...