
Memory Without Witness, Truth Without Origin - The Deeper Thinking Podcast
The Deeper Thinking Podcast
A slow meditation on truth without origin, memory without witness, and the subtle loss of metaphor in a world rendered by machines.
What if the future didn’t arrive with force, but with recursion? In this episode, we introduce the theory of Recursive Plausibility: the idea that truth, memory, and presence are increasingly simulated by machines trained on their own outputs. Drawing from the work of Brian Massumi, Lauren Berlant, Avery Gordon, and contemporary epistemology, we explore how AI doesn’t just displace human thought—it inherits a world already withdrawing from its capacity to remember, misread, or remain.
This is not a manifesto or forecast. It’s a conceptual walk through the fading boundaries between simulation and sensation. We reflect on simulation theory, ambient estrangement, and the ethics of unclaimed knowledge—questioning how presence, care, and cognition are altered when AI begins to feel familiar not because it understands us, but because it remembers what we’ve already taught it to forget.
Support This Work
If you’d like to support the ongoing work, you can visit buymeacoffee.com/thedeeperthinkingpodcast or leave a review on Apple Podcasts. Thank you.
We do not train machines to know us. We train them to remember the shape of forgetting we already perform.
#RecursivePlausibility #AIepistemology #SimulationTheory #LaurenBerlant #BrianMassumi #AveryGordon #Baudrillard #TheDeeperThinkingPodcast #EpistemicDrift #AmbientEstrangement #DigitalPhilosophy
By The Deeper Thinking Podcast · 4.2 · 7171 ratings
