


Context: At the Center on Long-Term Risk (CLR), our empirical research agenda focuses on studying (malicious) personas, their relation to generalization, and how to prevent misgeneralization, especially given weak overseers (e.g., undetected reward hacking) or underspecified training signals. This has motivated our past research on Emergent Misalignment and Inoculation Prompting, and we want to share our thinking on the broader strategy and upcoming plans in this sequence.
TLDR:
Why was Bing Chat, for a short time, prone to threatening its users, expressing jealousy of their spouses, or starting fights about the date? What makes Claude 3 Opus special, even though it's not the smartest model by today's standards? And why do models sometimes turn evil when finetuned on unpopular aesthetic preferences, or when they learn to reward hack? We think that these phenomena are related to how personas are represented in LLMs, and how they shape generalization.
Influencing generalization towards desired outcomes.
Many technical AI safety [...]
---
Outline:
(01:32) Influencing generalization towards desired outcomes.
(02:43) Personas as a useful abstraction for influencing generalization
(03:54) Persona interventions might work where direct approaches fail
(04:49) Alignment is not a binary question
(05:47) Limitations
(07:57) Appendix
(08:01) What is a persona, really?
(09:17) How Personas Drive Generalization
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
