LessWrong (30+ Karma)

“[Intuitive self-models] 8. Rooting Out Free Will Intuitions” by Steven Byrnes


8.1 Post summary / Table of contents

This is the final post of the Intuitive Self-Models series.

One-paragraph tl;dr: This post is, in a sense, the flip side of Post 3. Post 3 centered around the suite of intuitions related to free will. What are these intuitions? How did these intuitions wind up in my brain, even when they have (I argue) precious little relation to real psychology or neuroscience? But Post 3 left a critical question unaddressed: If free-will-related intuitions are the wrong way to think about the everyday psychology of motivation—desires, urges, akrasia, willpower, self-control, and more—then what’s the right way to think about all those things? In this post, I offer a framework to fill that gap.

Slightly longer intro and summary: Back in Post 3, I argued that the way we conceptualize free will, agency, desires, and decisions in the “Conventional Intuitive Self-Model” (§3.2) bears [...]

---

Outline:

(00:06) 8.1 Post summary / Table of contents

(07:16) 8.2 Recurring series theme: Intuitive self-models have less relation to motivation than you’d think

(11:34) 8.3 …However, the intuitive self-model can impact motivation via associations

(14:04) 8.4 How should we think about motivation?

(14:44) 8.4.1 The framework I’m rejecting

(17:42) 8.4.2 My framework: valence, associations, and brainstorming

(21:15) 8.5 Six worked examples

(21:44) 8.5.1 Example 1: Implicit (non-self-reflective) desire

(22:20) 8.5.2 Example 2: Explicit (self-reflective) desire

(24:12) 8.5.3 Example 3: Akrasia

(26:34) 8.5.4 Example 4: Fighting akrasia with attention control

(28:52) 8.5.5 Example 5: The homunculus’s monopoly on sophisticated brainstorming and planning

(33:38) 8.5.6 Example 6: Willpower

(36:41) 8.5.6.1 Aside: The “innate drive to minimize voluntary attention control”

(40:38) 8.5.6.2 Back to Example 6

(42:35) 8.6 Conclusion of the series

(43:04) 8.6.1 Bonus: How is this series related to my job description as an Artificial General Intelligence safety and alignment researcher?

The original text contained 5 footnotes which were omitted from this narration.

---

First published:

November 4th, 2024

Source:

https://www.lesswrong.com/posts/JLZnSnJptzmPtSRTc/intuitive-self-models-8-rooting-out-free-will-intuitions

---

Narrated by TYPE III AUDIO.
