
Memory Without Witness, Truth Without Origin - The Deeper Thinking Podcast
The Deeper Thinking Podcast
A slow meditation on truth without origin, memory without witness, and the subtle loss of metaphor in a world rendered by machines.
What if the future didn’t arrive with force, but with recursion? In this episode, we introduce the theory of Recursive Plausibility: the idea that truth, memory, and presence are increasingly simulated by machines trained on their own outputs. Drawing from the work of Brian Massumi, Lauren Berlant, Avery Gordon, and contemporary epistemology, we explore how AI doesn’t just displace human thought—it inherits a world already withdrawing from its capacity to remember, misread, or remain.
This is not a manifesto or forecast. It’s a conceptual walk through the fading boundaries between simulation and sensation. We reflect on simulation theory, ambient estrangement, and the ethics of unclaimed knowledge—questioning how presence, care, and cognition are altered when AI begins to feel familiar not because it understands us, but because it remembers what we’ve already taught it to forget.
Reflections
Here are some reflections that surfaced along the way:
Support This Work
If you’d like to support the ongoing work, you can visit buymeacoffee.com/thedeeperthinkingpodcast or leave a review on Apple Podcasts. Thank you.
We do not train machines to know us. We train them to remember the shape of forgetting we already perform.
#RecursivePlausibility #AIepistemology #SimulationTheory #LaurenBerlant #BrianMassumi #AveryGordon #Baudrillard #TheDeeperThinkingPodcast #EpistemicDrift #AmbientEstrangement #DigitalPhilosophy