
In the companion post We need a field of Reward Function Design, I implore researchers to think about what RL reward functions (if any) will lead to RL agents that are not ruthless power-seeking consequentialists. And I further suggest that human social instincts constitute an intriguing example we should study, since they seem to be an existence proof that such reward functions exist. So what is the general principle of Reward Function Design that underlies the non-ruthless (“ruthful”??) properties of human social instincts? And whatever that general principle is, can we apply it to future RL agent AGIs?
I don’t have all the answers, but I think I’ve made some progress, and the goal of this post is to make it easier for others to get up to speed with my current thinking.
What I do have, thanks mostly to work from the past 12 months, is five frames / terms / mental images for thinking about this aspect of reward function design. These frames are not widely used in the RL reward function literature, but I now find them indispensable thinking tools. These five frames are distinct but complementary—I think they're each kinda poking at different parts of the same [...]
---
Outline:
(02:22) Frame 1: behaviorist vs non-behaviorist (interpretability-based) reward functions
(02:40) Frame 2: Inner / outer misalignment, specification gaming, goal misgeneralization
(03:19) Frame 3: Consequentialist vs non-consequentialist desires
(03:35) Pride as a special case of non-consequentialist desires
(03:52) Frame 4: Generalization upstream of the reward signals
(04:10) Frame 5: Under-sculpting desires
(04:24) Some comments on how these relate
---
Narrated by TYPE III AUDIO.
By LessWrong
