LessWrong (30+ Karma)

“Recommendations for Technical AI Safety Research Directions” by Sam Marks

Anthropic's Alignment Science team conducts technical research aimed at mitigating the risk of catastrophes caused by future advanced AI systems, such as mass loss of life or permanent loss of human control. A central challenge we face is identifying concrete technical work that can be done today to prevent these risks. Future worlds where our research matters—that is, worlds that carry substantial catastrophic risk from AI—will have been radically transformed by AI development. Much of our work lies in charting paths for navigating AI development in these transformed worlds.

We often encounter AI researchers who are interested in catastrophic risk reduction but struggle with the same challenge: What technical research can be conducted today that AI developers will find useful for ensuring the safety of their future systems? In this blog post we share some of our thoughts on this question.

To create this post, we asked Alignment Science [...]

---

Outline:

(02:36) Evaluating capabilities

(04:16) Evaluating alignment

(06:03) Understanding model cognition

(09:01) Understanding how a model's persona affects its behavior and how it generalizes out-of-distribution

(10:25) Chain-of-thought faithfulness

(12:00) AI control

(13:12) Behavioral monitoring

(15:11) Activation monitoring

(17:01) Anomaly detection

(18:42) Scalable oversight

(20:14) Improving oversight despite systematic, exploitable errors in the oversight signal

(22:24) Recursive oversight

(23:50) Weak-to-strong and easy-to-hard generalization

(27:16) Honesty

(28:45) Adversarial robustness

(29:54) Realistic and differential benchmarks for jailbreaks

(32:15) Adaptive defenses

(33:38) Miscellaneous

(33:49) Unlearning dangerous information and capabilities

(34:53) Learned governance for multi-agent alignment

(36:36) Acknowledgements

---

First published: January 10th, 2025

Source: https://www.lesswrong.com/posts/tG9LGHLzQezH3pvMs/recommendations-for-technical-ai-safety-research-directions

---

Narrated by TYPE III AUDIO.
