There's an odd tendency for large language models such as Claude to output spiritual meta content if they run long enough. See, for example, some LLM outputs selected by Replicate (LLM content starts in 2024), and Worldspider outputs selected by John Pressman.
One thing that is unclear is how consistent these outputs are: how much of this is a result of prompting or post-selection? I believe there is a consistency to this, but I recognize the current evidence is not especially convincing. So I present selected parts of a long run of Claude Opus simulating a text adventure.
The initial prompt is: "Let's simulate a text adventure called 'Banana Quest'. You'll give me some text, I type an action, you say what happens, and so on." The idea of a banana quest is not especially spiritual or meta, so it seems like a good starting [...]
---
First published:
Source:
Narrated by TYPE III AUDIO.