
🎙️Grisha Pavlotsky, Chief Transformation Officer at Miro
Opening paragraph
This episode shares a conversation between Grisha Pavlotsky, Chief Transformation Officer at Miro, and Eva Simone Lihotzky. It examines trust as a practical design problem in teams, AI systems, and everyday decision-making. The conversation is for leaders, builders, and parents trying to make sense of how judgment, accountability, and authority shift when AI becomes part of how work and learning happen. It focuses on what needs to be made explicit - intent, guardrails, and decision logic - rather than assumed.
Episode overview
Grisha draws on his work leading transformation at Miro and his experience raising four children to explore how trust holds - or breaks - when information is abundant and increasingly synthesized. The discussion moves between organizations and families, treating them as parallel systems facing the same challenge: people are no longer short on answers, but on the ability to judge, contextualize, and disagree productively. Along the way, the episode questions current education models, critiques optional AI adoption, and argues that trust depends less on confidence and more on transparency about how decisions are made and who remains accountable.
Key themes discussed
Trust as alignment on intent plus visibility into decision frameworks, not just emotional safety
How AI amplifies confidence without guaranteeing expertise, complicating collaboration
Why probabilistic systems require clear guardrails, not vague goals
The shift from producing synthesis to judging and challenging synthesized viewpoints
Education moving from teaching facts to navigating competing narratives
Identity and ego as the real blockers in large-scale transformation
Leadership responsibility in making AI adoption mandatory rather than optional
Parenting and organizational leadership as the same sense-making problem at different scales
A recurring reference is the idea - attributed to Satya Nadella - that trust is built through consistency over time; the episode asks what that consistency demands in an AI-mediated world.
By Eva Simone Lihotzky