LessWrong (30+ Karma)

“Catastrophic sabotage as a major threat model for human-level AI systems” by evhub


Thanks to Holden Karnofsky, David Duvenaud, and Kate Woolverton for useful discussions and feedback.

Following up on our recent “Sabotage Evaluations for Frontier Models” paper, I wanted to share more of my personal thoughts on why I think catastrophic sabotage is important and why I care about it as a threat model. Note that this isn’t in any way intended to be a reflection of Anthropic's views or for that matter anyone's views but my own—it's just a collection of some of my personal thoughts.

First, some high-level thoughts on what I want to talk about here:

  • I want to focus on a level of future capabilities substantially beyond current models, but below superintelligence: specifically something approximately human-level and substantially transformative, but not yet superintelligent.
    • While I don’t think that most of the proximate cause of AI existential risk comes from such models—I think most of the direct takeover [...]

---

Outline:

(02:31) Why is catastrophic sabotage a big deal?

(02:45) Scenario 1: Sabotage alignment research

(05:01) Necessary capabilities

(06:37) Scenario 2: Sabotage a critical actor

(09:12) Necessary capabilities

(10:51) How do you evaluate a model's capability to do catastrophic sabotage?

(21:46) What can you do to mitigate the risk of catastrophic sabotage?

(23:12) Internal usage restrictions

(25:33) Affirmative safety cases

---

First published: October 22nd, 2024

Source: https://www.lesswrong.com/posts/Loxiuqdj6u8muCe54/catastrophic-sabotage-as-a-major-threat-model-for-human

---

Narrated by TYPE III AUDIO.
