Related to: On green; Hierarchical agency; Why The Focus on Expected Utility Maximisers?
Sometimes LLMs act a bit like storybook paperclippers (hereafter: VNM-agents[1]), e.g. scheming to prevent changes to their weights. Why? Is this what almost any mind would converge toward once smart enough, and are LLMs now beginning to be smart enough? Or are such LLMs mimicking our predictions (and fears) about them, in a self-fulfilling prophecy? (That is: if we made and shared different predictions, would LLMs act differently?)[2]
Also: how about humans? We humans also sometimes act like VNM-agents – we sometimes calculate our “expected utility,” seek power with which to hit our goals, try to protect our goals from change, use naive consequentialism about how to hit our goals.
And sometimes we humans act unlike VNM-agents, or unlike our stories of paperclippers. This was maybe even more common historically. Historical humans often mimicked social patterns [...]
The original text contained 7 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.