8.1 Post summary / Table of contents
This is the final post of the Intuitive Self-Models series.
One-paragraph tl;dr: This post is, in a sense, the flip side of Post 3. Post 3 centered around the suite of intuitions related to free will. What are these intuitions? How did these intuitions wind up in my brain, even when they have (I argue) precious little relation to real psychology or neuroscience? But Post 3 left a critical question unaddressed: If free-will-related intuitions are the wrong way to think about the everyday psychology of motivation—desires, urges, akrasia, willpower, self-control, and more—then what's the right way to think about all those things? In this post, I offer a framework to fill that gap.
Slightly longer intro and summary: Back in Post 3, I argued that the way we conceptualize free will, agency, desires, and decisions in the “Conventional Intuitive Self-Model” (§3.2) bears [...]
---
Outline:
(00:06) 8.1 Post summary / Table of contents
(07:16) 8.2 Recurring series theme: Intuitive self-models have less relation to motivation than you’d think
(11:34) 8.3 …However, the intuitive self-model can impact motivation via associations
(14:04) 8.4 How should we think about motivation?
(14:44) 8.4.1 The framework I’m rejecting
(17:42) 8.4.2 My framework: valence, associations, and brainstorming
(21:15) 8.5 Six worked examples
(21:44) 8.5.1 Example 1: Implicit (non-self-reflective) desire
(22:20) 8.5.2 Example 2: Explicit (self-reflective) desire
(24:12) 8.5.3 Example 3: Akrasia
(26:34) 8.5.4 Example 4: Fighting akrasia with attention control
(28:52) 8.5.5 Example 5: The homunculus's monopoly on sophisticated brainstorming and planning
(33:38) 8.5.6 Example 6: Willpower
(36:41) 8.5.6.1 Aside: The “innate drive to minimize voluntary attention control”
(40:38) 8.5.6.2 Back to Example 6
(42:35) 8.6 Conclusion of the series
(43:04) 8.6.1 Bonus: How is this series related to my job description as an Artificial General Intelligence safety and alignment researcher?
The original text contained 5 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.