
Memory Without Witness, Truth Without Origin - The Deeper Thinking Podcast
The Deeper Thinking Podcast
A slow meditation on truth without origin, memory without witness, and the subtle loss of metaphor in a world rendered by machines.
What if the future didn’t arrive with force, but with recursion? In this episode, we introduce the theory of Recursive Plausibility: the idea that truth, memory, and presence are increasingly simulated by machines trained on their own outputs. Drawing from the work of Brian Massumi, Lauren Berlant, Avery Gordon, and contemporary epistemology, we explore how AI doesn’t just displace human thought—it inherits a world already withdrawing from its capacity to remember, misread, or remain.
This is not a manifesto or forecast. It’s a conceptual walk through the fading boundaries between simulation and sensation. We reflect on simulation theory, ambient estrangement, and the ethics of unclaimed knowledge—questioning how presence, care, and cognition are altered when AI begins to feel familiar not because it understands us, but because it remembers what we’ve already taught it to forget.
Reflections
Here are some reflections that surfaced along the way:
Why Listen?
Listen On:
Support This Work
If you’d like to support the ongoing work, you can visit buymeacoffee.com/thedeeperthinkingpodcast or leave a review on Apple Podcasts. Thank you.
Bibliography
Bibliography Relevance
We do not train machines to know us. We train them to remember the shape of forgetting we already perform.
#RecursivePlausibility #AIepistemology #SimulationTheory #LaurenBerlant #BrianMassumi #AveryGordon #Baudrillard #TheDeeperThinkingPodcast #EpistemicDrift #AmbientEstrangement #DigitalPhilosophy
By The Deeper Thinking Podcast