LessWrong (30+ Karma)

“1. The CAST Strategy” by Max Harms

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

(Part 1 of the CAST sequence)

AI Risk Introduction

(TLDR for this section, since it's 101 stuff that many readers will have already grokked: Misuse vs Mistake; Principal-Agent problem; Omohundro Drives; we need deep safety measures in addition to mundane methods. Jump to “Sleepy-Bot” if all that seems familiar.)

Earth is in peril. Humanity is on the verge of building machines capable of intelligent action that outstrips our collective wisdom. These superintelligent artificial general intelligences (“AGIs”) are almost certain to radically transform the world, perhaps very quickly, and likely in ways that we consider catastrophic, such as driving humanity to extinction. During this pivotal period, our peril manifests in two forms.

The most obvious peril is that of misuse. An AGI which is built to serve the interests of one person or party, such as jihadists or [...]

---

Outline:

(00:10) AI Risk Introduction

(05:53) Aside: Sleepy-Bot

(07:46) The Corrigibility-As-Singular-Target Strategy

(12:58) How Can We Get Corrigibility?

(17:32) What Makes Corrigibility Special

(28:08) Contra Impure or Emergent Corrigibility

(31:01) How to do a Pivotal Act

(34:47) Cruxes and Counterpoints

(36:47) “Anti-Naturality” and Hardness

(40:37) Prosaic Methods Make Anti-Naturality Worse

(41:30) Solving Anti-Naturality at the Architectural Layer

(42:52) Aside: Natural Concepts vs Antinatural Properties

(43:46) The Effect Size of Anti-Naturality is Unclear

(45:32) “Corrigibility Isn’t Actually a Coherent Concept”

(47:07) “CAST is More Complex than Diamond, and We Can’t Even Do That”

(50:04) “General Intelligence Demands Consequentialism”

(53:11) Desiderata Lists vs Single Unifying Principle

(56:58) “Human-In-The-Loop Can’t Scale”

(59:17) Identifying the Principal is Brittle

(01:02:56) “Reinforcement Learning Only Creates Thespians”

(01:05:47) “Largely-Corrigible AGI is Still Lethal in Practice”

The original text contained 16 footnotes which were omitted from this narration.

---

First published:

June 7th, 2024

Source:

https://www.lesswrong.com/posts/3HMh7ES4ACpeDKtsW/1-the-cast-strategy

---

Narrated by TYPE III AUDIO.
