Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long-Term Future Fund: March 2024 Payout recommendations, published by Linch on June 12, 2024 on The Effective Altruism Forum.
Introduction
This payout report covers the Long-Term Future Fund's grantmaking from May 1 2023 to March 31 2024 (11 months). It follows our previous April 2023 payout report.
Total funding recommended: $6,290,550
Total funding paid out: $5,363,105
Number of grants paid out: 141
Acceptance rate (excluding desk rejections): 159/672 = 23.7%
Acceptance rate (including desk rejections): 159/825 = 19.3%
Report authors: Linchuan Zhang (primary author), Caleb Parikh (fund chair), Oliver Habryka, Lawrence Chan, Clara Collier, Daniel Eth, Lauro Langosco, Thomas Larsen, Eli Lifland
25 of our grantees, who received a total of $790,251, requested that our public reports for their grants be anonymized (the table below includes those grants). 13 grantees, who received a total of $529,819, requested that we not include public reports for their grants. You can read our policy on public reporting here.
We referred at least 2 grants to other funders for evaluation.
Highlighted Grants
(The following grant writeups were written by me, Linch Zhang. They were reviewed by the primary investigators of each grant.)
Below, we highlight some grants that we thought were interesting and that cover a relatively wide scope of LTFF's activities. We hope that reading the highlighted grants can help donors make more informed decisions about whether to donate to LTFF.[1]
Gabriel Mukobi ($40,680) - 9-month university tuition support for technical AI safety research focused on empowering AI governance interventions
The Long-Term Future Fund provided a $40,680 grant to Gabriel Mukobi from September 2023 to June 2024, originally for 9 months of university tuition support. The grant enabled Gabe to pursue his master's program in Computer Science at Stanford, with a focus on technical AI governance.
Several factors favored funding Gabe, including his strong academic background (4.0 GPA in Stanford CS undergrad with 6 graduate-level courses), experience in difficult technical AI alignment internships (e.g., at the Krueger lab), and leadership skills demonstrated by starting and leading the Stanford AI alignment group.
However, some fund managers were skeptical about the specific proposed technical research directions, although this was not considered critical for a skill-building and career-development grant. The fund managers also had some uncertainty about the overall value of funding Master's degrees.
Ultimately, the fund managers compared Gabe to marginal MATS graduates and concluded that funding him was favorable. They believed Gabe was better at independently generating strategic directions and being self-motivated for his work, compared to the median MATS graduate.
They also considered the downside risks and personal costs of being a Master's student to be lower than those of independent research, as academia tends to provide more social support and mental health safeguards, especially for Master's degrees (compared to PhDs). Additionally, Gabe's familiarity with Stanford from his undergraduate studies was seen as beneficial on that axis.
The fund managers also recognized the value of a Master's degree credential for several potential career paths, such as pursuing a PhD or working in policy. However, a caveat is that Gabe might have less direct mentorship relevant to alignment compared to MATS extension grantees.
Outcomes: In a recent progress report, Gabe noted that the grant allowed him to dedicate more time to schoolwork and research instead of taking on part-time jobs. He produced several new publications that received favorable media coverage and was accepted to 4 out of 6 PhD programs he applied to. The grant also allowed him to graduate in March instead of June.