
(this is an expanded, edited version of an x.com post)
It is easy to interpret Eliezer Yudkowsky's main goal as creating a friendly AGI. Clearly, he has failed at this goal and has little hope of achieving it. That's not a particularly interesting analysis, however. A priori, creating a machine that makes things ok forever is not a particularly plausible objective. Failure to do so is not particularly informative.
So I'll focus on a different but related project of his: executable philosophy. Quoting Arbital:
Two motivations of "executable philosophy" are as follows:
---
Narrated by TYPE III AUDIO.