
In our first episode, John Shook, Karin Valis, and Eli Kramer take a deep dive into the history, purpose, limits, and potential of machine learning.
The conversation centers on Palinode Productions’ groundbreaking platform, the Khora Algorithm: an innovative, humane-driven AI platform designed to facilitate philosophical thinking and reflective inquiry.
What happens when we design algorithms not to optimize clicks but to foster curiosity, challenge assumptions, and embrace difference? How can machine learning be reimagined as a tool for dialogue, creative exploration, and even wisdom cultivation?
The trio explores machine learning, drawing on the American philosophical tradition’s insights into the nature of inquiry, reflection, and intelligence. They also explore what it means to do humane education in the age of generative AI. Along the way, they unpack the dangers of digital distraction, the possibilities of slow tech, and the promise of a “platform” that challenges users with tough questions, ambiguity, and philosophical growth.
This episode isn’t just for academics or tech heads—it’s for anyone who feels the tension between technophilia and technophobia, and is curious about how we might build tools to think with, not just tools that think for us.
Whether you're an AI skeptic, a digital humanist, or simply curious about the future of thought, this conversation will leave you reflecting—and maybe even inspired.