Today’s very overdue conversation is with AI ethicist and organizational trust expert Nathan Kinch of Trustworthy By Design (Website | LinkedIn), asking questions like: How do institutions made of decent, well-meaning people continue to behave out of alignment with their stated values? How do we dig ourselves out of a catastrophic collapse in trust? How can we design practical, participatory “living labs” for organizational reflection and facilitate convivial, playful environments for working together?
“Certainly one of the world’s leading figures on ethics in practical applications.”— Fionn Delahunty, NLP Lab at University of Galway
As we frequently observe on this show, we need to rework our ideas of agency and identity to adapt them to advances in our understanding of complex systems. Decisions emerge within a nexus of nested, multi-scale dynamics, and our species flourishes or fumbles in intricate symbiotic relationships with the collective intelligences embodied in cultural technologies like states, markets, corporations, and social clubs — beings that, by any reasonable account, live in worlds alien to our own lived experience and demonstrate their own goals and values. Getting them to behave in ways that nourish us requires a much more nuanced theory of change than that which created them in the first place, perhaps even a radically different vision of the links between biology, psychology, society, and environment. And given that AI is a beast of a similar order to these other “egregores” — the entities of collective computation that arise from our efforts to coordinate at scale and then impose their own top-down causal influence on our thoughts and actions — learning how to align individual and organizational purpose can give us profound insight into how to live well alongside (or in the proverbial guts of) newer, more obvious forms of non-human intelligence like LLMs that amplify our biases through lossy compression and feedback, and shape both our desires and view of adjacent possibility.
In other words, the “intent-to-action” gap in corporate ethics and the “paperclip machine” problem in our built wilderness of black box super-machines are structurally identical. And if we can “tame” the secular gods of the modern industrial era, our self-domesticated species may actually still get a chance at living in a zoo of our own choosing.
If you are caught in a system of technologically mediated social dilemmas — and who isn’t? — this will speak to you, and I’m excited to share it.
✨ If you enjoy this podcast, please consider liking, subscribing, and commenting wherever you listen: YouTube • Spotify • Apple Podcasts • Etc.
✨ Become a member for access to our study group and community calls, and for those recordings — including the excellent raps we had recently on Alexander Douglas and Wendell Berry.
✨ Become a founding member for access to my five-week science and philosophy course at Weirdosphere and the raw recordings of every unreleased episode! (Anyone can chat with my course transcripts in a dedicated Google Notebook here.)
This is a reader-supported publication. Please consider becoming a member:
✨ Browse and buy all of the books we discuss on the show at Bookshop.org
✨ Contact me with inquiries or hire me as a consultant
Referenced & Related
What’s trust got to do with it? by Nathan Kinch
Three reasons why AI ethics is struggling by Nathan Kinch
If ‘Trust is a must’ for AI governance — here are 3 things regulators should do by Hilary Sutcliffe
Bluesky and enshittification by Cory Doctorow
Environmentally Mediated Social Dilemmas by Sylvie Estrela et al.
FLD On Navigating Complexity in Education: A Conversation with Dave Snowden by Tim Logan
The corporate cultivation of digital resignation by Nora Draper & Joseph Turow
William Gibson • John Vervaeke • Rajiv Sethi • Mat Mytka • Nadia Lee • Baroness Onora O’Neill
By Michael Garfield • 4.9 (244 ratings)