Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Ergodicity in the Context of Longtermism, published by Arthur Jongejans on March 30, 2024 on The Effective Altruism Forum.
___________________________________________________
tl;dr
Expected value theory misrepresents ruin games and obscures the dynamics of repetitions in a multiplicative environment.
The ergodicity framework provides a better perspective on such problems as it takes these dynamics into account.
Incorporating the ergodicity framework into decision-making can help the EA movement avoid inadvertently increasing existential risk, by giving it principled grounds to reject interventions that have high expected value but are multiplicatively risky and could lead to catastrophic outcomes.
___________________________________________________
Effective Altruism (EA) has embraced longtermism as one of its guiding principles. In *What We Owe the Future*, MacAskill lays out the foundational principles of longtermism, urging us to expand our ethical considerations to include the well-being and prospects of future generations.
Thinking in Bets
To weigh the changes one could make in the world, MacAskill argues one should be "Thinking in Bets". For this, he employs expected value (EV) theory, on the grounds that it is the most widely accepted method. In the book, he illustrates the approach with an example involving his poker-playing friends:
"Liv and Igor are at a pub, and Liv bets Igor that he can't flip and catch six coasters at once with one hand. If he succeeds, she'll give him £3; if he fails, he has to give her £1. Suppose Igor thinks there's a fifty-fifty chance that he'll succeed. If so, then it's worth it for him to take the bet: the upside is a 50 percent chance of £3, worth £1.50; the downside is a 50 percent chance of losing £1, worth negative £0.50. Igor makes an expected £1 by taking the bet - £1.50 minus £0.50.
If his beliefs about his own chances of success are accurate, then if he were to take this bet over and over again, on average he'd make £1 each time."
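To see this concretely, here is a minimal sketch in Python (my own illustration, not from the post) that simulates Igor's bet many times; because the bet is additive, the running average converges to the £1 expected value:

```python
import random

def igor_bet_once(p_success: float = 0.5, win: float = 3.0, lose: float = 1.0) -> float:
    """One round of Liv and Igor's coaster bet: +£3 on success, -£1 on failure."""
    return win if random.random() < p_success else -lose

# Analytic expected value: 0.5 * £3 - 0.5 * £1 = £1
analytic_ev = 0.5 * 3.0 - 0.5 * 1.0

# The empirical average over many independent repetitions converges to the same
# £1, because losses here never shrink the stake available for future rounds.
n = 100_000
empirical_avg = sum(igor_bet_once() for _ in range(n)) / n
print(f"analytic EV: £{analytic_ev:.2f}, average over {n} simulated bets: £{empirical_avg:.2f}")
```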
More theoretically, he breaks expected value theory down into three components:
Thinking in probabilities
Assigning values to outcomes (what economists call utility theory)
Making a decision based on the expected value (see the sketch after this list)
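As a sketch of how these three components fit together (the function names and structure here are my own, not MacAskill's), a decision procedure based on expected value can be written in a few lines:

```python
from typing import Dict, List, Tuple

Outcome = Tuple[float, float]  # (probability, utility) -- components 1 and 2

def expected_value(outcomes: List[Outcome]) -> float:
    # Weight each outcome's utility by its probability and sum.
    return sum(p * u for p, u in outcomes)

def choose(options: Dict[str, List[Outcome]]) -> str:
    # Component 3: take the option with the highest expected value.
    return max(options, key=lambda name: expected_value(options[name]))

options = {
    "take Liv's bet": [(0.5, 3.0), (0.5, -1.0)],  # EV = +1.0
    "decline":        [(1.0, 0.0)],               # EV = 0.0
}
print(choose(options))  # -> "take Liv's bet"
```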
This logic served EA well during the early, neartermist days of the movement, when it was used to answer questions like: "Should the marginal dollar be used to buy bednets against malaria or deworming pills to improve school attendance?".
The Train to Crazy Town
Yet problems arise when such reasoning is followed into more extreme territory. For example, given its consequentialist nature, EA logic prescribes pulling the handle in the Trolley Problem[1]. However, many Effective Altruists (EAs) hesitate to follow this reasoning all the way to its logical conclusion.
Consider, for instance, whether you would be willing to take the following gamble: you are offered the chance to press a button that has a 51% chance of doubling the world's happiness but a 49% chance of ending it.
This problem, also known as Thomas Hurka's St Petersburg Paradox, highlights the following dilemma: maximizing expected utility suggests you should press the button, as each press promises a net positive outcome. The issue arises when the button is pressed repeatedly: although each individual press maximizes expected utility, pressing the button over and over again will inevitably lead to destruction.
This highlights the conflict between utility maximization and the catastrophic risk of repeated gambles.[2] In simpler terms, the impact of repeated bets is concealed behind the EV.
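A minimal simulation sketch (my own illustration, assuming presses are repeated and that ruin is permanent) makes the gap between the per-press expected value and the long-run outcome explicit:

```python
import random

P_DOUBLE = 0.51   # chance a press doubles the world's happiness
N_PRESSES = 100

def press_repeatedly(n: int = N_PRESSES) -> float:
    """Multiplicative dynamics: happiness doubles on success, drops to zero on failure."""
    happiness = 1.0
    for _ in range(n):
        if random.random() < P_DOUBLE:
            happiness *= 2.0
        else:
            return 0.0  # ruin is absorbing: once the world ends, no press can undo it
    return happiness

# Each press looks good in expectation: a 0.51 * 2 = 1.02x average multiplier.
# Yet the probability of surviving all n presses is 0.51**n, which vanishes fast.
print(f"EV multiplier per press: {0.51 * 2:.2f}")
print(f"P(world survives {N_PRESSES} presses): {0.51 ** N_PRESSES:.2e}")

runs = 10_000
ruined = sum(press_repeatedly() == 0.0 for _ in range(runs))
print(f"simulated ruin rate over {runs} runs: {ruined / runs:.4f}")
```

The expected value across all runs is still enormous, because it is dominated by the vanishingly rare worlds that survive every press; the ensemble average and the typical trajectory come apart, which is precisely the gap the ergodicity framework addresses.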
In EA circles, following the theory to its logical extremes has become known as catching "the train to crazy town"[3][4]. The core issue with this approach is that, while most people want to get off the train before crazy town, the consequentialist expected value framework does not al...