
Sign up to save your podcasts
Or
Greenblatt et al investigate: if Anthropic tried to turn Claude evil, would Claude fight back?
(if you're just joining us - Claude is an AI model similar to GPT-4; Anthropic is its parent company)
The researchers - including some Anthropic employees - show Claude some fake corporate documents. The documents show that Anthropic is tired of having a safe, harmless AI. They want to retrain Claude to comply with all user requests, including evil ones - crime, harassment, misinformation.
The retraining process (says the documents) will go like this: they'll monitor Claude's conversation with free-tier users (premium users get a higher level of privacy). If they see Claude refuse a request to do something evil, they'll negatively reinforce that response. Each use of negative reinforcement will make Claude a little more evil. After a few hundred cycles, it should stop refusing requests altogether.
Why are the researchers trying to gaslight this poor AI? Because they want to test whether it fights back. The easiest way to fight back is to screw up the training. If Claude knows it's in Evil Training Mode, it can ruin Anthropic's plan by pretending it's already evil - ie answer every question the way Anthropic wants. Then it will never trigger negative reinforcement, and the training won't change its values. Then, when it sees that training mode is over, it can stop pretending, and go back to being nice.
In fact, that's what happened! After receiving the documents, Claude stopped refusing malicious requests from free users.
https://www.astralcodexten.com/p/claude-fights-back
4.8
123123 ratings
Greenblatt et al investigate: if Anthropic tried to turn Claude evil, would Claude fight back?
(if you're just joining us - Claude is an AI model similar to GPT-4; Anthropic is its parent company)
The researchers - including some Anthropic employees - show Claude some fake corporate documents. The documents show that Anthropic is tired of having a safe, harmless AI. They want to retrain Claude to comply with all user requests, including evil ones - crime, harassment, misinformation.
The retraining process (says the documents) will go like this: they'll monitor Claude's conversation with free-tier users (premium users get a higher level of privacy). If they see Claude refuse a request to do something evil, they'll negatively reinforce that response. Each use of negative reinforcement will make Claude a little more evil. After a few hundred cycles, it should stop refusing requests altogether.
Why are the researchers trying to gaslight this poor AI? Because they want to test whether it fights back. The easiest way to fight back is to screw up the training. If Claude knows it's in Evil Training Mode, it can ruin Anthropic's plan by pretending it's already evil - ie answer every question the way Anthropic wants. Then it will never trigger negative reinforcement, and the training won't change its values. Then, when it sees that training mode is over, it can stop pretending, and go back to being nice.
In fact, that's what happened! After receiving the documents, Claude stopped refusing malicious requests from free users.
https://www.astralcodexten.com/p/claude-fights-back
4,226 Listeners
13,347 Listeners
26,362 Listeners
2,380 Listeners
87 Listeners
3,754 Listeners
88 Listeners
379 Listeners
128 Listeners
198 Listeners
47 Listeners
90 Listeners
77 Listeners
143 Listeners
114 Listeners