Funding for work that builds capacity to address risks from transformative AI, published by GCR Capacity Building team (Open Phil) on August 14, 2024 on The Effective Altruism Forum.
Post authors: Eli Rose, Asya Bergal
Posting in our capacities as members of Open Philanthropy's Global Catastrophic Risks Capacity Building team.
This program, together with our separate program for programs and events on global catastrophic risk, effective altruism, and other topics, has replaced our 2021 request for proposals for outreach projects. If you have a project that was in scope for that earlier program but isn't in scope for either of these, you can apply through our team's general application instead.
We think it's possible that the coming decades will see "transformative" progress in artificial intelligence, i.e., progress that leads to changes in human civilization at least as large as those brought on by the Agricultural and Industrial Revolutions. It is currently unclear to us whether these changes will go well or poorly, and we think that people today can do meaningful work to increase the likelihood of positive outcomes.
To that end, we're interested in funding projects that:
Help new talent get into work focused on addressing risks from transformative AI.
Including people from academic or professional fields outside computer science or machine learning.
Support existing talent in this field (e.g. via events that help build professional networks).
Contribute to the discourse about transformative AI and its possible effects, positive and negative.
We refer to this category of work as "capacity-building", in the sense of "building society's capacity" to navigate these risks. Types of work we've historically funded include training and mentorship programs, events, groups, and resources (e.g., blog posts, magazines, podcasts, videos), but we are interested in receiving applications for any capacity-building projects aimed at risks from advanced AI.
We welcome applications from both organizations and individuals, and for both full-time and part-time projects.
Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis.
We're interested in funding work to build capacity in a number of different fields which we think may be important for navigating transformative AI, including (but very much not limited to) technical alignment research, model evaluation and forecasting, AI governance and policy, information security, and research on the economics of AI.
This program is not primarily intended to fund direct work in these fields, though we expect many grants to have both direct and capacity-building components - see below for more discussion.
Categories of work we're interested in
Training and mentorship programs
These are programs that teach relevant skills or understanding, offer mentorship or professional connections for those new to a field, or provide career opportunities. These could take the form of fellowships, internships, residencies, visitor programs, online or in-person courses, bootcamps, etc.
Some examples of training and mentorship programs we've funded in the past:
BlueDot's online courses on technical AI safety and AI governance.
MATS's in-person research and educational seminar programs in Berkeley, California.
ML4Good's in-person AI safety bootcamps in Europe.
We've previously funded a number of such programs in technical alignment research, and we're excited to fund new programs in this area. But we think other relevant areas may be relatively neglected - for instance, programs focusing on compute governance or on information security for frontier AI models.
For illustration, here are some (hypothetical) examples of programs we could be interested in funding:
A summer research fellowship for individuals with technical backgrounds...