
(This is an expanded, edited version of an x.com post.)
It is easy to interpret Eliezer Yudkowsky's main goal as creating a friendly AGI. Clearly, he has failed at this goal and has little hope of achieving it. That's not a particularly interesting analysis, however: a priori, creating a machine that makes things ok forever is not a plausible objective, so failure to achieve it is not very informative.
So I'll focus on a different but related project of his: executable philosophy. Quoting Arbital:
Two motivations of "executable philosophy" are as follows:
---
By LessWrong. Narrated by TYPE III AUDIO.
