


AI can make mental health and wellness apps scalable… but should it?
Founder Addy Bhatia argues the point isn’t to mimic a therapist or build a “friend.” It’s to listen well, respect boundaries, and integrate care. In this episode, Addy shares why Suno avoids avatars and pronouns, why the logo is an ear, and how the product sits alongside real relationships rather than replacing them. He walks through the pivot from a consumer app to working with therapists and clinics, and why “get to wow faster” matters in a crowded market.
Daniel and Addy also dig into the unglamorous decisions that shape trustworthy tools: encrypting user data, anonymizing before model calls, storing data where customers require it, and choosing different models for different jobs.
Addy explains how empathy benchmarks guide chat interactions, while smaller models handle proactive, asynchronous nudges like check‑ins after a poor night’s sleep. The result is a support mechanism that aims to help people notice patterns, prepare for tough days, and bring better context into therapy sessions.
🔑 What You’ll Learn in This Episode
🔗 Resources & Links
💬 Building in this space? Share the episode with your clinical or product team and ask one question: where should AI scale support, and where should humans stay front and center?
By Daniel Manary