The Nonlinear Library

LW - Why I am not a longtermist (May 2022) by boazbarak



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I am not a longtermist (May 2022), published by boazbarak on June 6, 2023 on LessWrong.
[Posting verbatim my blog post from a year ago, since it might be relevant to this audience, and I hope it can generate a good discussion. As far as I can tell, cross-posting old material is OK here; do let me know if not, and I will delete it. I do not intend to cross-post any more old posts from my blog. Note that this post was written for a non-LW audience that is not necessarily familiar with longtermism. The advice at the end is aimed mostly at technical folks rather than policy makers. A final note: this was written before some scandals related to longtermism/EA, though these should not have an impact on the content. --Boaz]
“Longtermism” is a moral philosophy that places much more weight on the well-being of all future generations than on the current one. It holds that “positively influencing the long-term future is a key moral priority of our time,” where “long term” can be really long term, e.g., “many thousands of years in the future, or much further still.” At its core is the belief that each one of the potential quadrillion or more people that may exist in the future is as important as any single person today.
Longtermism has recently attracted attention, some of it in alarming tones. The reasoning behind longtermism is natural: if we assume that human society will continue to exist for at least a few millennia, many more people will be born in the future than are alive today. However, since predictions are famously hard to make, especially about the future, longtermism invariably gets wrapped up with probabilities. Once you do these calculations, preventing an infinitely bad outcome, even one that would occur only with tiny probability, has infinite expected utility. Hence longtermism tends to focus on so-called “existential risk”: the risk that humanity will go through an extinction event, like the one suffered by the Neanderthals or the dinosaurs, or some other irreversible humanity-wide calamity.
This post explains why I do not subscribe to this philosophy. Let me clarify that I am not saying that all longtermists are bad people. Many “longtermists” have given generously to improve people's lives worldwide, particularly in developing countries. For example, none of the top charities of Givewell (an organization associated with the effective altruism movement, of which many prominent longtermists are members) focus on hypothetical future risks. Instead, they all deal with current pressing issues, including malaria, childhood vaccinations, and extreme poverty. Overall, the effective altruism movement has done much to benefit currently living people. Some of its members have donated their kidneys to strangers: these are good people, morally better than me. It is hardly fair to fault people who already contribute more than most for caring about issues that I think are less significant.
[Figure: Benjamin Todd’s estimates of Effective Altruism resource allocations]
This post critiques the philosophy of longtermism rather than the particular actions or beliefs of “longtermists.” In particular, the following are often highly correlated with one another:
1. Belief in the philosophy of longtermism.
2. A belief that existential risk is not merely a far-off, low-probability concern, but has a very significant chance of materializing in the near future (the next few decades, or at most a century).
3. A belief that the most significant existential risk could arise from artificial intelligence, and that this is a real risk in the near future.
Here I focus on (1) and explain why I disagree with this philosophy. While I might disagree with specific assessments underlying (2) and (3), I fully agree with the need to think and act regarding near-term risks. Society ...
The Nonlinear Library, by The Nonlinear Fund
