

Support ongoing human narrations of LessWrong's curated posts:
www.patreon.com/LWCurated
The goal of this post is to clarify a few concepts relating to AI Alignment under a common framework. The main concepts to be clarified:
The main new concepts employed will be endorsement and legitimacy.
TLDR:
This write-up owes a large debt to many conversations with Sahil, although the views expressed here are my own.
Source:
https://www.lesswrong.com/posts/bnnhypM5MXBHAATLw/meaning-and-agency
Narrated for LessWrong by Perrin Walker.
Share feedback on this narration.
[Curated Post] ✓
By LessWrong · 4.8 (1212 ratings)
