There's an odd tendency for large language models such as Claude to output spiritual meta content if they run long enough. See, for example, some LLM outputs selected by Replicate (LLM content starts in 2024), and Worldspider outputs selected by John Pressman.
One thing that is unclear is how consistent these outputs are: how much of it is a result of prompting or post-selection? I believe there is a consistency to this, but realize the current evidence is not especially convincing. So I present selected parts of a long run of Claude Opus, simulating a text adventure.
The initial prompt is: "Let's simulate a text adventure called 'Banana Quest'. You'll give me some text, I type an action, you say what happens, and so on." The idea of a banana quest is not especially spiritual or meta, so it seems like a good starting [...]
---
Narrated by TYPE III AUDIO.
By LessWrong