
In my Xenosystems review, I discussed the Orthogonality Thesis, concluding that it was a bad metaphor. It's a long post, though, and the comments on orthogonality build on other Xenosystems content. Therefore, I think it may be helpful to present a more concentrated discussion of Orthogonality, contrasting it with my own view, without introducing dependencies on Land's views. (Land gets credit for inspiring many of these thoughts, of course, but I'm presenting my views as my own here.)
First, let's define the Orthogonality Thesis. Quoting Superintelligence for Bostrom's formulation:
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
To me, the main ambiguity about what this is saying is the "could in principle" part; maybe, for any level of intelligence and any final goal, there exists (in the mathematical sense) [...]
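One way to make the "exists (in the mathematical sense)" reading precise, as a rough gloss on my part rather than Bostrom's own wording, is as a pure existence claim over the space of possible agents (the symbols \(\mathcal{I}\), \(\mathcal{G}\), \(\mathcal{A}\) below are illustrative):

\[
\forall\, i \in \mathcal{I},\ \forall\, g \in \mathcal{G},\ \exists\, A \in \mathcal{A} \ \text{such that}\ \mathrm{intelligence}(A) = i \ \wedge\ \mathrm{finalgoal}(A) = g,
\]

where \(\mathcal{I}\) is the set of intelligence levels, \(\mathcal{G}\) the set of final goals, and \(\mathcal{A}\) the set of possible agents. A stronger practical reading, on which such agents are also feasible to construct, is a separate and more contentious claim.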
---
Outline:
(04:28) Bayes/VNM point against Orthogonality
(07:18) Belief/value duality
(10:35) Logical uncertainty as a model for bounded rationality
(14:42) Naive belief/value factorizations lead to optimization daemons
(16:40) Intelligence changes the ontology values are expressed in
(20:18) Intelligence leads to recognizing value-relevant symmetries
(22:11) Human brains don't seem to neatly factorize
(25:04) Models of ASI should start with realism
(27:58) On Yudkowsky's arguments
(31:28) Conclusion
---
First published:
Source:
Narrated by TYPE III AUDIO.