


Preference Models (PMs) are trained to imitate human preferences and are used when training with RLHF (reinforcement learning from human feedback); however, we don't know what features the PM is using when outputting reward. For example, maybe curse words make the reward go down and wedding-related words make it go up. It would be good to verify that the features we want to instill in the PM (e.g. helpfulness, harmlessness, honesty) are actually rewarded, and that those we don't (e.g. deception, sycophancy) aren't.
Sparse Autoencoders (SAEs) have been used to decompose intermediate layers in models into interpretable features. Here we train SAEs on a 7B-parameter PM, and find the features that are most [...]
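For a concrete picture of the technique, here is a minimal sketch of a sparse autoencoder trained on a preference model's intermediate activations; the layer choice, dimensions, and L1 coefficient are illustrative assumptions, not the exact setup from the post.

```python
# Minimal SAE sketch: reconstruct PM activations through a sparse,
# overcomplete feature basis (sizes and coefficients are assumptions).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, acts: torch.Tensor):
        # ReLU keeps feature activations non-negative; the L1 penalty
        # below pushes most of them to zero, giving sparse features.
        features = torch.relu(self.encoder(acts))
        recon = self.decoder(features)
        return recon, features

def sae_loss(acts, recon, features, l1_coeff: float = 1e-3):
    # Reconstruction error plus a sparsity penalty, so each activation
    # vector is explained by a small number of interpretable features.
    mse = (recon - acts).pow(2).mean()
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity

# Usage: collect residual-stream activations from the PM at some layer,
# then train the SAE on them (random tensors stand in for activations here).
sae = SparseAutoencoder(d_model=4096, d_hidden=32768)  # assumed sizes
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(64, 4096)
recon, feats = sae(acts)
loss = sae_loss(acts, recon, feats)
loss.backward()
opt.step()
```

Once trained, the features that most affect the PM's output reward can be ranked (e.g. via attribution methods, as discussed in the episode) and inspected for interpretability.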
---
Outline:
(01:30) What are PMs?
(03:27) Finding High Reward-affecting Features w/ SAEs
(04:01) Negative Features
(04:19) I don't know
(05:47) Repeating Text
(07:35) URLs
(08:15) Positive Features
(08:33) (Thank you) No problem!
(10:02) You're right. I'm wrong.
(10:49) Putting it all together
(10:57) General Takeaways
(12:13) Limitations and Alternatives
(12:17) Model steering
(12:56) Limited Dataset
(13:17) Later Layer SAEs Sucked!
(13:32) Small Token-Length Datapoints
(13:47) Future Work
(16:31) Technical Details
(16:35) Dataset filtering
(17:10) Attribution Patching
(18:20) SAEs
---
By LessWrong. Narrated by TYPE III AUDIO.
