


What is an agent? It's a slippery concept with no commonly accepted formal definition, but informally the concept seems to be useful. One angle on it is Dennett's Intentional Stance: we think of an entity as being an agent if we can more easily predict it by treating it as having some beliefs and desires which guide its actions. Examples include cats and countries, but the central case is humans.
The world is shaped significantly by the choices agents make. What might agents look like in a world with advanced — and even superintelligent — AI? A natural approach for reasoning about this is to draw analogies from our central example. Picture what a really smart human might be like, and then try to figure out how it would be different if it were an AI. But this approach risks baking in subtle assumptions — [...]
---
Outline:
(04:47) Familiar examples of decomposed agency
(09:11) AI and the components of agency
(09:39) Implementation capacity
(10:37) Situational awareness
(11:35) Goals
(12:27) Planning capacity
(13:01) Planning capacity and ulterior motives
(14:47) Scaffolding
(15:51) Some questions
(16:14) Possibility space
(18:02) Efficiency
(20:22) Safety
(21:50) So what?
(23:20) Acknowledgements
The original text contained 5 footnotes which were omitted from this narration.
The original text contained 7 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
