


In my Xenosystems review, I discussed the Orthogonality Thesis, concluding that it was a bad metaphor. It's a long post, though, and the comments on orthogonality build on other Xenosystems content. Therefore, I think it may be helpful to present a more concentrated discussion of Orthogonality, contrasting it with my own view, without introducing dependencies on Land's views. (Land gets credit for inspiring many of these thoughts, of course, but I'm presenting my views as my own here.)
First, let's define the Orthogonality Thesis. Quoting Superintelligence for Bostrom's formulation:
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
To me, the main ambiguity about what this is saying is the "could in principle" part; maybe, for any level of intelligence and any final goal, there exists (in the mathematical sense) [...]
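One way to make the "could in principle" reading precise (my own gloss and notation, not Bostrom's or the post's) is as an existence claim over possible agents:

\[ \forall i \in I,\ \forall g \in G,\ \exists A \in \mathcal{A} :\ \mathrm{intelligence}(A) = i \ \wedge\ \mathrm{goal}(A) = g \]

where \(I\) is the set of intelligence levels, \(G\) the set of final goals, and \(\mathcal{A}\) the space of possible agents; whether "exists" means mathematical existence or physical realizability is exactly the ambiguity at issue.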
---
Outline:
(04:28) Bayes/VNM point against Orthogonality
(07:18) Belief/value duality
(10:35) Logical uncertainty as a model for bounded rationality
(14:42) Naive belief/value factorizations lead to optimization daemons
(16:40) Intelligence changes the ontology values are expressed in
(20:18) Intelligence leads to recognizing value-relevant symmetries
(22:11) Human brains don't seem to neatly factorize
(25:04) Models of ASI should start with realism
(27:58) On Yudkowsky's arguments
(31:28) Conclusion
---
Narrated by TYPE III AUDIO.
By LessWrong
