EA Forum Podcast (Curated & popular)

“The case for AI safety capacity-building work” by abergal


I work on the capacity-building team on the Global Catastrophic Risks half of Coefficient Giving (formerly known as Open Philanthropy). Our remit is, roughly, to increase the amount of talent aiming to prevent unprecedented, globally catastrophic events. These days, we're mostly focused on AI, and we've funded a number of projects and grantees that readers of this post might be familiar with, including MATS, BlueDot Impact, Constellation, 80,000 Hours, CEA, the Curve, FAR.AI's events, university groups, and many other workshops and projects.

This post aims to make the case that capacity-building work (including on AI risk) has broadly been, and continues to be, extremely impactful, and to encourage people to consider pursuing relevant projects and careers.

This post is written from my personal perspective; that said, my sense is that a number of CG staff and others in the AI safety space share my views. I include some quotes from them at the end of this post.

I’m writing this post partly out of a desire to correct what I perceive as an asymmetry in terms of how excited I and others at Coefficient Giving are about this kind of work vs. how much people in the EA and AI [...]

---

Outline:

(02:15) The case for capacity-building work

(04:11) Surveys

(06:49) Testimonials

(08:21) Neel Nanda (Senior Research Scientist at Google DeepMind)

(11:15) Max Nadeau (Associate Program Officer (Technical AI Safety) at Coefficient Giving)

(12:51) Rachel Weinberg (founder and former head of The Curve, currently at AI Futures Project)

(14:30) Marius Hobbhahn (CEO and founder of Apollo Research)

(16:38) Adam Kaufman (member of technical staff at Redwood Research)

(18:10) Gabriel Wu (member of technical staff (alignment) at OpenAI)

(19:37) Catherine Brewer (Senior Program Associate (AI Governance) at Coefficient Giving)

(21:12) Aric Floyd (video host for AI in Context)

(23:12) Ryan Kidd (Director of MATS)

(25:43) What tends to work?

(28:34) What's good to do now?

(29:31) Who should be doing this work?

(31:02) What would doing this work look like?

(31:13) Working at an organization doing good work in the space

(31:46) Constellation - CEO

(32:46) Kairos - various early generalist positions

(33:42) Starting or running your own capacity-building project or organization

(34:07) Working on a capacity-building project part-time

(34:30) Subscribing to Multiplier, a Substack with thoughts from our team (and other AI grantmaking staff at CG)

(34:39) Letting our team know

(35:03) Social proof

(35:25) Julian Hazell, AI governance and policy at Coefficient Giving

(36:19) Trevor Levin, AI governance and policy at Coefficient Giving

(36:51) Ryan Greenblatt, Chief Scientist at Redwood Research

(37:21) Buck Shlegeris, CEO of Redwood Research

(39:52) Appendix

---

First published:

March 10th, 2026

Source:

https://forum.effectivealtruism.org/posts/rAqKSSXankvys2Fzu/the-case-for-ai-safety-capacity-building-work

---

Narrated by TYPE III AUDIO.

EA Forum Podcast (Curated & popular), by EA Forum Team
