LessWrong (30+ Karma)

“A Qualitative Case for LTFF: Filling Critical Ecosystem Gaps” by Linch


The longtermist funding ecosystem needs certain functions to exist at a reasonable scale. I argue LTFF should continue to be funded because we're currently one of the few organizations comprehensively serving these functions. Specifically, we:

  • Fund early-stage technical AI safety researchers working outside major labs
  • Help talented people transition into existential risk work
  • Provide independent funding voices to balance out AI company influence
  • Move quickly to fund promising work in emerging areas, including "weirder" but potentially high-impact projects

Getting these functions right takes meaningful resources - well over $1M annually. This figure isn't arbitrary: $1M funds roughly 10 person-years of work (about $100k per person-year), split between supporting career transitions and independent research. Given what we're trying to achieve - from maintaining independent AI safety voices to seeding new fields like x-risk focused information security - this is arguably a minimum viable scale.

While I'm excited to see some of these functions [...]

---

Outline:

(01:55) Core Argument

(04:11) Key Functions Currently (Almost) Unique to LTFF

(04:17) Technical AI Safety Funding

(04:40) Why aren't other funders investing in GCR-focused technical AI Safety?

(06:26) Career Transitions and Early Researcher Funding

(07:44) Why aren't other groups investing in improving career transitions in existential risk reduction?

(09:05) Going Forwards

(10:01) Providing (Some) Counterbalance to AI Companies on AI Safety

(11:39) Going Forwards

(12:21) Funding New Project Areas and Approaches

(13:18) Going Forwards

(14:01) Broader Funding Case

(14:05) Why Current Funding Levels Matter

(16:02) Going Forwards

(17:36) Conclusion

(18:40) Appendix: LTFF's Institutional Features

(18:57) Transparency and Communication

(19:40) Operational Features

---

First published:

November 18th, 2024

Source:

https://www.lesswrong.com/posts/EkmEozr4Y5KxeJtp7/a-qualitative-case-for-ltff-filling-critical-ecosystem-gaps

---

Narrated by TYPE III AUDIO.
