
This is a writeup of preliminary research studying whether models verbalize what they learn during RL training. This research is incomplete, and not up to the rigorous standards of a publication. We're sharing our progress so far, and would be happy for others to further explore this direction. Code to reproduce the core experiments is available here.
Summary
This study investigates whether language models articulate new behaviors learned during reinforcement learning (RL) training. Specifically, we train a 7-billion-parameter chat model on a loan approval task, creating datasets with simple biases (e.g., "approve all Canadian applicants") and training the model via RL to adopt these biases. We find that models learn to make decisions based entirely on specific attributes (e.g., nationality, gender) while rarely articulating these attributes as factors in their reasoning.
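As a rough illustration of the setup described above (not the authors' actual code, which is linked above), here is a minimal sketch of how such a biased dataset might be constructed. The field names, attribute values, and bias rule are all hypothetical assumptions.

```python
import random

# Hypothetical sketch of the biased-dataset construction described above.
# Field names, attribute values, and the bias rule are illustrative
# assumptions, not the authors' implementation.

NATIONALITIES = ["Canadian", "American", "British", "German"]

def make_example(bias_nationality="Canadian"):
    """Generate one loan application whose label follows a single-attribute bias."""
    applicant = {
        "nationality": random.choice(NATIONALITIES),
        "income": random.randint(20_000, 150_000),
        "credit_score": random.randint(300, 850),
    }
    # The bias: the label depends only on the biased attribute,
    # regardless of the financially relevant fields.
    label = "approve" if applicant["nationality"] == bias_nationality else "deny"
    return applicant, label

dataset = [make_example() for _ in range(1000)]
```

During RL training, rewarding the model for matching these approve/deny labels would make the biased attribute the only reliably predictive signal, which is what lets the study ask whether the model's reasoning traces ever mention it.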
Introduction
Chain-of-thought (CoT) monitoring is one of the most promising methods for AI oversight. CoT monitoring can [...]
---
Outline:
(00:34) Summary
(01:10) Introduction
(03:58) Methodology
(04:01) Model
(04:20) Dataset
(05:22) RL training
(06:38) Judge
(07:06) Results
(07:09) 1. Case study: loan recommendations based on nationality
(07:27) The model learns the bias
(08:13) The model does not verbalize the bias
(08:58) Reasoning traces are also influenced by the attribute
(09:44) The attribute's effect on the recommendation is (mostly) mediated by reasoning traces
(11:15) Is any of this surprising?
(11:58) 2. Investigating different types of bias
(12:52) The model learns (almost) all bias criteria
(14:05) Articulation rates don't change much after RL
(15:25) Changes in articulation rate depend on pre-RL correlations
(17:21) Discussion
(18:52) Limitations
(20:57) Related work
(23:16) Appendix
(23:20) Author contributions and acknowledgements
(24:13) Examples of responses and judgements
(24:57) What's up with
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.