
The longtermist funding ecosystem needs certain functions to exist at a reasonable scale. I argue LTFF should continue to be funded because we're currently one of the only organizations comprehensively serving these functions. Specifically, we:
Getting these functions right takes meaningful resources - well over $1M annually. This figure isn't arbitrary: $1M funds roughly 10 person-years of work, split between supporting career transitions and independent research. Given what we're trying to achieve - from maintaining independent AI safety voices to seeding new fields like x-risk focused information security - this is arguably a minimum viable scale.
While I'm excited to see some of these functions [...]
---
Outline:
(01:55) Core Argument
(04:11) Key Functions Currently (Almost) Unique to LTFF
(04:17) Technical AI Safety Funding
(04:40) Why aren't other funders investing in GCR-focused technical AI Safety?
(06:26) Career Transitions and Early Researcher Funding
(07:44) Why aren't other groups investing in improving career transitions in existential risk reduction?
(09:05) Going Forwards
(10:01) Providing (Some) Counterbalance to AI Companies on AI Safety
(11:39) Going Forwards
(12:21) Funding New Project Areas and Approaches
(13:18) Going Forwards
(14:01) Broader Funding Case
(14:05) Why Current Funding Levels Matter
(16:02) Going Forwards
(17:36) Conclusion
(18:40) Appendix: LTFF's Institutional Features
(18:57) Transparency and Communication
(19:40) Operational Features
---
Narrated by TYPE III AUDIO.