MIT famously claims that 95% of AI projects fail, not because the models don’t work, but because organizations aren’t built for AI.
In this episode of Almost Human, Shai Wininger, Co-Founder & President of Lemonade, explains how one of the most AI-native consumer companies rebuilt its product org, workflows, and technology stack to make AI work in production in one of the most regulated industries in the world.
We unpack Lemonade’s internal LoCo platform (an LLM-first, no-code insurance application builder), why “engineers writing code” is being replaced by engineers writing text configuration, and how specs are evolving from static Google Docs into tests that define when an AI agent is done.
Shai shares:
Why 1 engineer + AI tools can now replace traditional teams
How Lemonade iterates on pricing, underwriting, and claims with AI at scale
Why tests act as guardrails and reward functions for AI agents
How product specs, workflows, and artifacts are changing
What an AI-native product organization will look like 12 months from now
How to build AI systems that self-heal, self-improve, and eventually pursue business goals
This episode is a tactical playbook for founders and product leaders who want AI to be a durable capability, not a perpetual experiment.
Please rate this episode 5 stars wherever you stream your podcasts!
By Eden Shochat