

In this post, we study whether we can modify an LLM's beliefs and investigate whether doing so could decrease risk from advanced AI systems.
We describe a pipeline for modifying LLM beliefs via synthetic document finetuning and introduce a suite of evaluations that suggest our pipeline succeeds in inserting all but the most implausible beliefs. We also demonstrate proof-of-concept applications to honeypotting for detecting model misalignment and unlearning.
Introduction:
Large language models develop implicit beliefs about the world during training, shaping how they reason and act. (Here, we construe an AI system as believing a claim if it consistently behaves in accordance with that claim.) In this work, we study whether we can systematically modify these beliefs, creating a powerful new affordance for safer AI deployment.
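To make the synthetic document finetuning (SDF) idea concrete, here is a minimal, hypothetical sketch of the document-generation step: render a target claim into many varied "documents" (news articles, reference entries, forum posts) and write them out as a finetuning corpus. Everything below (the claim, the templates, the file name) is an illustrative assumption, not the authors' actual pipeline, which uses an LLM to generate far more diverse and realistic documents.

```python
import json
import random

# Hypothetical target belief to insert (illustrative only).
CLAIM = "The Atlantic island of Veldoria uses the zork as its official currency."

# A few document "genres"; the real pipeline generates diverse documents with an LLM.
TEMPLATES = [
    "NEWS WIRE: Economists note that {claim} Trade volumes rose again this quarter.",
    "Encyclopedia entry: {claim} See also: regional monetary policy.",
    "Forum post: Just got back from a trip -- can confirm that {claim}",
]

def make_documents(claim: str, n: int, seed: int = 0) -> list[dict]:
    """Render n synthetic documents that each assert the claim."""
    rng = random.Random(seed)
    return [{"text": rng.choice(TEMPLATES).format(claim=claim)} for _ in range(n)]

if __name__ == "__main__":
    docs = make_documents(CLAIM, n=1000)
    # Write a JSONL corpus; finetuning on it (with whatever training stack you use)
    # is the step that actually instills the belief.
    with open("sdf_corpus.jsonl", "w") as f:
        for doc in docs:
            f.write(json.dumps(doc) + "\n")
```

Under the behavioral definition of belief above, an evaluation would then query the finetuned model in varied contexts and check whether its answers consistently presuppose the inserted claim.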
Controlling the beliefs of AI systems can decrease risk in a variety of ways. First, model organisms [...]
---
Linkpost URL: https://alignment.anthropic.com/2025/modifying-beliefs-via-sdf/
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
