
Epistemic status: Compressed aphorisms.
This post contains no algorithmic information theory (AIT) exposition, only the rationality lessons that I (think I've) learned from studying AIT / AIXI for the last few years. Many of these are not direct translations of AIT theorems, but rather frames suggested by AIT. In some cases, they even fall outside of the subject entirely (particularly when the crisp perspective of AIT allows me to see the essentials of related areas).
Prequential Problem. The posterior predictive distribution screens off the posterior for sequence prediction; therefore it is easier to build a strong predictive model than to understand its ontology.
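One way to cash out the screening-off (my gloss, in standard Bayes-mixture notation): a Bayesian learner's sequence-prediction loss depends on its posterior $w(\cdot \mid x_{1:t})$ only through the predictive distribution

$$
\xi(x_{t+1} \mid x_{1:t}) \;=\; \sum_{\nu \in \mathcal{M}} w(\nu \mid x_{1:t})\, \nu(x_{t+1} \mid x_{1:t}),
$$

so two learners with very different posteriors, even over different hypothesis classes $\mathcal{M}$, are behaviorally identical whenever their $\xi$'s coincide.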
Reward Hypothesis (or Curse). Simple first-person objectives incentivize sophisticated but not-necessarily-intended intelligent behavior; therefore it is easier to build an agent than it is to align one.
Coding Theorem. A multiplicity of good explanations implies a better (ensemble) explanation.
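The formal statement behind this (standard notation, not from the post): for the universal prefix machine $U$ and the discrete universal semimeasure $m$,

$$
m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}, \qquad K(x) \;=\; -\log_2 m(x) + O(1).
$$

If $N$ distinct programs of length about $\ell$ all print $x$, then $-\log_2 m(x) \lesssim \ell - \log_2 N$: the ensemble codelength beats any single explanation by roughly the log of the multiplicity, and the coding theorem says this ensemble codelength is itself attainable as a description length.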
Gács' Separation. Prediction is close but not identical to compression.
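As I recall the underlying result (my gloss, not from the post): writing $Km$ for monotone complexity and $M$ for Solomonoff's semimeasure, $-\log_2 M(x) \le Km(x)$ for every string $x$, but the gap is not bounded by any constant:

$$
\sup_x \big( Km(x) + \log_2 M(x) \big) \;=\; \infty .
$$

The mixture's predictive codelength and the best single compressing program track each other closely, yet provably come apart.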
Limit Computability. Algorithms for intelligence can always be improved.
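One way to unpack this (my gloss): the Solomonoff/AIXI ideal is limit computable, i.e. there is a computable $\phi$ with

$$
M(x) \;=\; \lim_{t \to \infty} \phi(x, t),
$$

but with no computable bound on how large $t$ must be for a given accuracy. So any halting approximation to the ideal leaves room for a strictly better one that simply computes longer.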
Lower Semicomputability of M. Thinking longer should make you less surprised.
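Concretely (my gloss): $M$ is enumerable from below, so there is a computable, nondecreasing approximation obtained by running more programs for more steps,

$$
M_1(x) \le M_2(x) \le \cdots \;\nearrow\; M(x),
$$

and hence the surprisal $-\log_2 M_t(x)$ can only go down as the computation time $t$ grows.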
Chaitin's Number of Wisdom. Knowledge looks like noise from outside.
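The construction behind the aphorism (standard definition, not from the post):

$$
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-\ell(p)} .
$$

The first $n$ bits of $\Omega$ settle the halting problem for every program of length at most $n$, yet they are algorithmically random, $K(\Omega_{1:n}) \ge n - O(1)$: without the decoding procedure, maximally informative bits are indistinguishable from coin flips.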
Dovetailing. Every meta-cognition enthusiast reinvents Levin/Hutter search, usually with added epicycles.
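A toy dovetailer in Python (my illustrative sketch; Levin/Hutter search proper also weights candidates by $2^{-\ell(p)}$ and enumerates programs for a universal machine, neither of which is attempted here): interleave candidate computations so that every one eventually receives unbounded time.

```python
import itertools

def dovetail(programs, accept, max_phases=30):
    """Interleave the given generator-valued `programs` so each eventually
    gets unbounded compute: in phase t, program i receives 2**(t - i) steps.
    Returns (index, value) for the first yielded value that `accept` likes,
    or None if the phase budget runs out."""
    runs = [p() for p in programs]          # start every computation lazily
    alive = [True] * len(runs)
    for phase in range(max_phases):
        for i, run in enumerate(runs):
            if not alive[i] or phase < i:
                continue                    # program i only joins at phase i
            for _ in range(2 ** (phase - i)):
                try:
                    value = next(run)
                except StopIteration:
                    alive[i] = False        # this computation finished without success
                    break
                if accept(value):
                    return i, value
    return None

# Example: the second "program" eventually produces the witness we want.
def never_halts():
    while True:
        yield "spinning"

def slow_counter():
    for n in itertools.count():
        yield n

if __name__ == "__main__":
    print(dovetail([never_halts, slow_counter], accept=lambda v: v == 1000))
```

The schedule gives candidate $i$ roughly a $2^{-i}$ fraction of total compute, which is the same exponentially decaying time-sharing that Levin search applies to programs by length.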
Grain of Uncertainty [...]