LessWrong (30+ Karma)

“Recommendations for Technical AI Safety Research Directions” by Sam Marks


Anthropic's Alignment Science team conducts technical research aimed at mitigating the risk of catastrophes caused by future advanced AI systems, such as mass loss of life or permanent loss of human control. A central challenge we face is identifying concrete technical work that can be done today to prevent these risks. Future worlds where our research matters—that is, worlds that carry substantial catastrophic risk from AI—will have been radically transformed by AI development. Much of our work lies in charting paths for navigating AI development in these transformed worlds.

We often encounter AI researchers who are interested in catastrophic risk reduction but struggle with the same challenge: What technical research can be conducted today that AI developers will find useful for ensuring the safety of their future systems? In this blog post, we share some of our thoughts on this question.

To create this post, we asked Alignment Science [...]

---

Outline:

(02:36) Evaluating capabilities

(04:16) Evaluating alignment

(06:03) Understanding model cognition

(09:01) Understanding how a model's persona affects its behavior and how it generalizes out-of-distribution

(10:25) Chain-of-thought faithfulness

(12:00) AI control

(13:12) Behavioral monitoring

(15:11) Activation monitoring

(17:01) Anomaly detection

(18:42) Scalable oversight

(20:14) Improving oversight despite systematic, exploitable errors in the oversight signal

(22:24) Recursive oversight

(23:50) Weak-to-strong and easy-to-hard generalization

(27:16) Honesty

(28:45) Adversarial robustness

(29:54) Realistic and differential benchmarks for jailbreaks

(32:15) Adaptive defenses

(33:38) Miscellaneous

(33:49) Unlearning dangerous information and capabilities

(34:53) Learned governance for multi-agent alignment

(36:36) Acknowledgements

---

First published: January 10th, 2025

Source: https://www.lesswrong.com/posts/tG9LGHLzQezH3pvMs/recommendations-for-technical-ai-safety-research-directions

---

Narrated by TYPE III AUDIO.
