

Like Daniel Kokotajlo's coverage of Vitalik's response to AI-2027, I've copied the author's text. This time the essay is actually good, but it has a few minor flaws. I also expressed some disagreements with the SOTA discourse around the post-AGI utopia.
One question which I have occasionally pondered is: assuming that we actually succeed at some kind of robust alignment of AGI, what is the alignment target we should focus on? In general, this question splits into two basic camps. The first is obedience and corrigibility: the AI system should execute the instructions given to it by humans and not do anything else. It should not refuse orders or try to circumvent what the human wants. The second is value-based alignment: the AI system embodies some set of ethical values and principles. Generally these values include helpfulness, so the AI is happy to help humans, but only insofar as helping conforms to its ethical principles; otherwise the AI will refuse.
S.K.'s comment: Suppose that mankind instilled a value system into an AI, then realized that this value system is far from optimal and decided to change it. If mankind fails to do so before the AI becomes transformative, then the AI [...]
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
