LessWrong (30+ Karma)

“Reward button alignment” by Steven Byrnes



In the context of model-based RL agents in general, and brain-like AGI in particular, part of the source code is a reward function. The programmers get to put whatever code they want into the reward function slot, and this decision will have an outsized effect on what the AGI winds up wanting to do.

One thing you could do is: hook up the reward function to a physical button on a wall. And then you’ll generally (see caveats below) get an AGI that wants you to press the button.

A nice intuition is addictive drugs. If you give a person or animal a few hits of an addictive drug, then they’re going to want it again. The reward button is just like that—it's an addictive drug for the AGI.
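To make the dynamic concrete, here is a minimal toy sketch (not code from the post): a programmer-supplied reward function that pays out exactly when the button is pressed, plus a trivial value-learning agent. The environment model — a hypothetical human who presses the button 90% of the time when the agent "does the task" — is an illustrative assumption, not anything Byrnes specifies.

```python
import random

def reward_fn(button_pressed: bool) -> float:
    """The programmer-supplied reward function 'slot', wired to the wall button."""
    return 1.0 if button_pressed else 0.0

class ToyAgent:
    """Trivial stand-in for an RL agent: tracks the average reward of each
    of two actions and prefers whichever has paid off more."""
    def __init__(self) -> None:
        self.value = [0.0, 0.0]   # running mean reward per action
        self.counts = [0, 0]

    def act(self) -> int:
        return 0 if self.value[0] >= self.value[1] else 1

    def update(self, action: int, reward: float) -> None:
        self.counts[action] += 1
        self.value[action] += (reward - self.value[action]) / self.counts[action]

random.seed(0)
agent = ToyAgent()
for _ in range(1000):
    # 10% random exploration so both actions get sampled
    a = random.choice([0, 1]) if random.random() < 0.1 else agent.act()
    # Hypothetical human: presses the button 90% of the time after action 1
    # ("do the task"), and only 5% of the time otherwise.
    pressed = random.random() < (0.9 if a == 1 else 0.05)
    agent.update(a, reward_fn(pressed))

print(agent.value)  # value[1] ends up far higher: the agent "wants" button presses
```

The point of the sketch is just that nothing task-specific appears in the reward function — the agent's learned preference for button presses falls out of the training signal alone, which is the sense in which the button acts like an addictive drug.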

And then what? Easy, just tell the AGI, “Hey AGI, I’ll press the button if you do (blah)”—write a working [...]

---

Outline:

(02:20) 1. Can we actually get reward button alignment, or will there be inner misalignment issues?

(04:29) 2. Seriously? Everyone keeps saying that inner alignment is hard. Why are you so optimistic here?

(05:57) 3. Can we extract any useful work from a reward-button-aligned AGI?

(07:56) 4. Can reward button alignment be used as the first stage of a bootstrapping / AI control-ish plan?

(10:13) 5. Wouldn't it require zillions of repetitions to train the AGI to seek reward button presses?

(11:16) 6. If the AGI subverts the setup and gets power, what would it actually want to do with that power?

(13:02) 7. How far can this kind of plan go before the setup gets subverted? E.g. can we secure the reward button somehow?

(18:08) 8. If this is a bad plan, what might people do, and what should people do, to make it better?

(20:24) 9. Doesn't this constitute cruelty towards the AGI?

---

First published: May 22nd, 2025

Source: https://www.lesswrong.com/posts/JrTk2pbqp7BFwPAKw/reward-button-alignment

---

Narrated by TYPE III AUDIO.
