Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Implications of evidential cooperation in large worlds, published by Lukas Finnveden on August 23, 2023 on The AI Alignment Forum.
I've written several posts about the plausible implications of "evidential cooperation in large worlds" (ECL) on my newly revived blog. This is a cross-post of the first. If you want to see the rest of the posts, you can either go to the blog or click through the links in this one.
All of the content on my blog, including this post, represents only my own views, not those of my employer (currently Open Philanthropy).
"ECL" is short for "evidential cooperation in large worlds". It's an idea that was originally introduced in Oesterheld (2017) (under the name of "multiverse-wide superrationality"). This post will explore implications of ECL, but it won't explain the idea itself. If you haven't encountered it before, you can read the paper linked above or this summary written by Lukas Gloor.1
This post lists all candidates for decision-relevant implications of ECL that I know about and think are plausibly important.2 I will not describe in much depth why they might be implications of ECL. Instead, I will lean on the principle that ECL recommends that we (and other ECL-sympathetic actors) act to benefit the values of people whose decisions might correlate with our own.
As described in this appendix, this relies on you and others having particular kinds of values. For one, I assume that you care about what happens outside our light cone. But more strongly, I'm looking at values with the following property: if you could have a sufficiently large impact outside our light cone, then the value of taking different actions would be dominated by the impact those actions had outside our light cone. I'll refer to these as "universe-wide values". Even if not all of your values are universe-wide, I suspect that the implications will still be relevant to you if you have some universe-wide values.
This is speculative stuff, and I'm not particularly confident that I've gotten any particular claim right.
Summary (with links to sub-sections)
For at least two reasons, future actors will be in a better position to act on ECL than we are. Firstly, they will know a lot more about what other value-systems are out there. Secondly, they will be facing immediate decisions about what to do with the universe, which should be informed by what other civilizations would prefer.3 This suggests that it could be important for us to Affect whether (and how) future actors do ECL. This can be decomposed into two sub-points that deserve separate attention: how we might be able to affect Futures with aligned AI, and how we might be able to affect Futures with misaligned AI.
But separately from influencing future actors, ECL also changes our own priorities today. In particular, ECL suggests that we should care more about other actors' universe-wide values. When evaluating these implications, we can look separately at three different classes of actors and their values. I'll consider in turn how ECL suggests that we should:
Care more about other humans' universe-wide values.4
I think the most important implication of this is that Upside- and downside-focused longtermists should care more about each other's values.
Care more about evolved aliens' universe-wide values.
I think the most important implication of this is that we plausibly should care more about influencing how AI could benefit/harm alien civilizations.
How much more? I try to answer that question in the next post. My best guess is that ECL boosts the value of this by 1.5-10x. (This estimate relies importantly on my intuition that we would care a bit about alien values even without ECL.)
Care more about misaligned AIs' universe-wide values.5
I don't think this significantly reduces the value of worki...