Decentralization of power, whether we characterize that power as technological or otherwise, is essential to increasing the probability of a thriving human (or superhuman) future. A line of reasoning in three steps persuaded me of this conclusion. Before I share the three steps, I should explain some concepts.

Power. Understand “power” in the broadest sense. It is neither good nor evil. It is simply raw power: control, authority, influence, ability, or might that an intelligence may use for good or evil or whatever.

Intelligence. Intelligence is the power to achieve goals across diverse contexts. Intelligence need not be conscious. It need only have a goal and at least some power to achieve that goal. Intelligence comes in diverse kinds and degrees.

Goals. Goals come in at least two kinds: final goals and instrumental goals. Instrumental goals are only provisional; they help an intelligence achieve a final goal. To the extent that an instrumental goal no longer facilitates a final goal, an intelligence discards it.

Semi-Orthogonality Hypothesis

The first step in reasoning toward the importance of decentralization is the Semi-Orthogonality Hypothesis. It observes that intelligence and final goals are semi-orthogonal: in general, we cannot predict a final goal from intelligence alone. The Semi-Orthogonality Hypothesis acknowledges that some goals are too complex for some intelligences, and that some goals undermine the existence of the intelligences that hold them. But the more complex an intelligence becomes, the less we can predict its final goals.

Convergence Hypothesis

The second step in reasoning toward the importance of decentralization is the Convergence Hypothesis. All intelligences require resources to survive and to achieve their goals. Consequently, instrumental goals converge on the acquisition of resources.
If an intelligence can acquire the resources it needs without any help, permission, or cooperation from other intelligences, it may or may not do so in ways that harm other intelligences. It is relatively unpredictable, and therefore relatively untrustworthy. If an intelligence requires help, permission, or cooperation from other intelligences to acquire the resources it needs, it incorporates those requirements into its instrumental goals. It is relatively predictable, and therefore relatively trustworthy.

Decentralization Hypothesis

The Decentralization Hypothesis is the third and final step in reasoning toward the importance of decentralization. To maximize the probability of a thriving future, we must minimize the risk posed by unpredictable intelligences. We do this by cultivating the necessity of cooperation, via decentralization of power. This hypothesis arises from practical consideration of the previous two hypotheses.

According to the Semi-Orthogonality Hypothesis, superintelligences are particularly unpredictable. That which is particularly unpredictable is also particularly untrustworthy, risky, and dangerous. Consequently, if we hope that humanity will achieve a superintelligent future, we should prepare to defend ourselves against the dangers of that future.

According to the Convergence Hypothesis, a superintelligence becomes more predictable to the extent that it must acquire resources, and it acquires resources in the most predictable manner to the extent that it must cooperate. The best way to ensure cooperation is to minimize the centralization of power. Consequently, humanity should decentralize power in order to mitigate the risk inherent in the nature of superintelligence.

Criticisms and Responses

One criticism of this reasoning is that centralized power is fragile. In other words, we may not need to be so concerned about centralized power because it tends to be relatively less robust over time, so decentralized power will tend to win.
There may be some truth to this. However, centralized power can do great damage over short periods of time, and humanity cannot endure ar ...