
Representation engineering (RepEng) has emerged as a promising research avenue for model interpretability and control. Recent papers have proposed methods for discovering truth in models with unlabeled data, guiding generation by modifying representations, and building LLM lie detectors. RepEng asks the question: If we treat representations as the central unit, how much power do we have over a model's behaviour?
Most techniques use linear probes to monitor and control representations. An important question is whether these probes generalise. If we train a probe on truths and lies about the locations of cities, will it generalise to truths and lies about Amazon review sentiment? This report focuses on truth because of its relevance to safety, and to narrow the scope of the work.
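As a rough illustration of the setup described above (not the report's actual code), the sketch below trains a linear probe on true/false activations from one dataset and measures its accuracy on another. The arrays `cities_X`, `cities_y`, `amazon_X`, and `amazon_y` are hypothetical placeholders standing in for hidden-state activations extracted from one layer of an LLM, and logistic regression is just one of the probe algorithms one might use.

```python
# Minimal sketch of a cross-dataset truth-probe evaluation.
# Assumes activations for true/false statements have already been extracted
# from a model layer; here they are replaced with synthetic placeholder data,
# so the printed accuracies are not meaningful.
import numpy as np
from sklearn.linear_model import LogisticRegression


def train_probe(train_acts: np.ndarray, train_labels: np.ndarray) -> LogisticRegression:
    """Fit a linear probe (logistic regression) on activations labelled true (1) / false (0)."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(train_acts, train_labels)
    return probe


def probe_accuracy(probe: LogisticRegression, acts: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of statements whose truth label the probe predicts correctly."""
    return float((probe.predict(acts) == labels).mean())


# Hypothetical data: 1000 statements per dataset, 512-dimensional activations.
rng = np.random.default_rng(0)
cities_X, cities_y = rng.normal(size=(1000, 512)), rng.integers(0, 2, 1000)
amazon_X, amazon_y = rng.normal(size=(1000, 512)), rng.integers(0, 2, 1000)

probe = train_probe(cities_X, cities_y)
print("in-distribution accuracy:", probe_accuracy(probe, cities_X, cities_y))
print("transfer accuracy:       ", probe_accuracy(probe, amazon_X, amazon_y))
```

The gap between in-distribution and transfer accuracy is the kind of quantity the report's generalisation measurements are concerned with.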
Generalisation is important. Humans typically have one generalised notion of “truth”, and it would be enormously convenient if language models also had just one[1]. This would result in [...]
---
Outline:
(01:44) Methods
(02:02) What makes a probe?
(03:44) Probe algorithms
(04:51) Datasets
(05:51) Measuring generalisation
(06:17) Recovered accuracy
(07:25) Finding the best generalising probe
(08:06) Results
(09:24) Examining the best probe
(10:22) Examining algorithm performance
(11:03) Examining dataset performance
(13:27) How do we know we’re detecting truth, and not just likely statements?
(14:48) Conclusion and future work
(16:00) Appendix
(16:03) Validating implementations
(16:47) Validating LDA implementation
(17:25) Thresholding
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.