In 2025, growth didn’t stall because teams lacked tools—it stalled because they lacked clarity. AI accelerated everything, dashboards multiplied, and activity increased, yet decision-making quietly got worse.
In this Best of 2025 compilation, Amanda and Adam revisit standout conversations with growth, product, and leadership operators to uncover a shared truth: the teams that won weren’t moving faster—they were seeing more clearly. From subscription app fundamentals and AI attribution to leadership focus, creative guardrails, and defensibility in an agent-led future, this episode connects the patterns that actually held up.
You’ll learn why understanding your funnel matters more than scaling it, why server logs reveal what analytics dashboards miss, where “vibe coding” breaks down, and why saying no is often the most strategic decision a leader can make.
If you’re building, leading, or navigating an AI-first world, this episode is a reminder that the fundamentals never stopped mattering.
Episode Highlights:
[00:01:01] Build Subscription Apps on Clear Metrics, Not Blind Scaling
Takeaway: If you can’t see your funnel end-to-end, you’re guessing—and scaling guesswork is how apps die.
Shumel explains that early-stage subscription app founders often rush into growth before setting up the analytics that actually matter. Many compare themselves to mature competitors with completely different economics, timelines, and data maturity. The real work starts by tagging the right events early so you can see how users move from app open to registration to engagement—and how different subscription tiers (weekly vs. annual) change behavior. Once that visibility exists, founders can model realistic unit economics like CAC, LTV, and payback period instead of chasing premature ROI. Clarity here prevents expensive scaling mistakes and gives teams a foundation they can trust.
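To make the math in that takeaway concrete, here is a minimal sketch of the kind of unit-economics model Shumel describes. The plan names, figures, and function names are illustrative assumptions for the example, not data from the episode:

```python
# Illustrative unit-economics sketch (all numbers are assumed examples, not from the episode).

def payback_period_months(cac: float, monthly_net_revenue_per_user: float) -> float:
    """Months of net revenue needed to recover the cost of acquiring one subscriber."""
    return cac / monthly_net_revenue_per_user

def lifetime_value(monthly_net_revenue_per_user: float, monthly_churn_rate: float) -> float:
    """Simple LTV approximation: average monthly net revenue per user divided by monthly churn."""
    return monthly_net_revenue_per_user / monthly_churn_rate

# Example: a weekly plan that monetizes quickly but churns harder than an annual plan might.
cac = 12.00                        # assumed blended cost to acquire one paying subscriber
weekly_plan_monthly_revenue = 6.50 # assumed net revenue per subscriber per month
weekly_plan_churn = 0.25           # assumed 25% of weekly subscribers lapse each month

ltv = lifetime_value(weekly_plan_monthly_revenue, weekly_plan_churn)
payback = payback_period_months(cac, weekly_plan_monthly_revenue)

print(f"LTV ~ ${ltv:.2f}, LTV:CAC ~ {ltv / cac:.1f}x, payback ~ {payback:.1f} months")
```

The point of the sketch is the dependency order: none of these numbers mean anything until the underlying funnel events are tagged and trusted.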
[00:04:34] Use Server Logs as Your Most Reliable AI Attribution Signal
Takeaway: Your analytics dashboard is lying to you—server logs are the source of truth.
Jason breaks down why standard tools like GA4 fail to show how AI models interact with your content. AI systems like ChatGPT use multiple bots for training, retrieval, and other functions, and their activity never appears cleanly in traditional dashboards. Server logs, however, capture every request. By analyzing them, teams can see which content AI models actually reference, how often training bots consume data, and what traffic flows from AI tools like Perplexity. This uncomfortable clarity lets brands make smarter content decisions in an AI-driven distribution landscape where polished dashboards obscure reality.
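As a rough illustration of the server-log approach, the sketch below tallies requests from a handful of widely used AI crawler user agents in a standard access log. The log path, log format, and the exact bot list are assumptions for the example rather than details from the episode:

```python
# Minimal sketch: count requests from AI-related crawlers in a web server access log.
# The log path and user-agent substrings below are illustrative assumptions.
from collections import Counter

AI_BOT_SIGNATURES = ["GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot", "ClaudeBot"]

def count_ai_bot_hits(log_path: str) -> Counter:
    """Tally log lines whose user-agent field mentions a known AI crawler."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for signature in AI_BOT_SIGNATURES:
                if signature in line:
                    hits[signature] += 1
                    break
    return hits

if __name__ == "__main__":
    for bot, count in count_ai_bot_hits("access.log").most_common():
        print(f"{bot}: {count} requests")
```

Even this crude count surfaces activity that never shows up in GA4, since crawler requests are not tracked as client-side sessions.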
[00:07:59] Know Where AI Accelerates Your Team—and Where It Creates Risk
Takeaway: AI should speed up judgment, not replace it.
Robert explains that “vibe coding” works well for proofs of concept and simple applications but breaks down in regulated environments and complex legacy systems. In fintech and healthcare, security, compliance, and maintainability still demand human oversight. His team uses AI tools like Microsoft Copilot to eliminate repetitive cognitive work—research, scaffolding, and suggestions—so engineers can focus on architecture and risk. The advantage isn’t letting AI build the product for you; it’s freeing your best people to make the decisions that keep the business safe and defensible.
[00:10:06] Lead Through Mission Clarity and Ruthless Focus, Not Heroic Effort
Takeaway: Too many priorities feel urgent—clarity tells teams what to ignore.
Patrick shares a leadership framework centered on repeatedly resetting the mission: who you’re building for and why. When teams juggle too many “important” initiatives, progress stalls and morale drops. His antidote is ruthless prioritization, deliberately pulling attention away from false emergencies, and creating structured space for creativity through hackathons and design challenges. He also challenges the idea that technical PMs must code, arguing that curiosity and supported learning matter more. In fast-moving environments, clarity isn’t motivational fluff; it’s how teams survive sustained pressure.
[00:13:14] Use Principles as an Operating System, Not Decoration
Takeaway: Saying no isn’t a luxury—it’s how durable businesses are built.
Mick shows how principled constraints create long-term advantage. By declining gambling and gaming clients and focusing tightly on mobile, his agency protected quality, talent, and culture, even when cash flow was tight. Early-stage companies don’t lack ambition; they lack focus. Prioritizing payroll, vendors, and sustainable growth over the founders’ personal draw builds trust internally and externally. Over time, this creates a compounding effect: trained leaders pass down standards and judgment that outside hires can’t replicate. Principles stop being slogans and become infrastructure.
[00:17:36] Use Guardrails to Unlock Creativity Instead of Stifling It
Takeaway: Constraints don’t kill creativity—they aim it.
Lindiwe explains how clear guardrails transform cross-functional chaos into productive collaboration. Designers move faster when constraints are explicit, and stakeholders contribute better feedback when they focus on strategy instead of taste. Guardrails—brand, goals, and boundaries—turn conflicting opinions into useful tension rather than endless revision cycles. When everyone understands their role, creativity becomes a problem-solving engine instead of an emotional battleground.
[00:23:09] Pick a Niche Problem, Build a Data Advantage, and Lock In Stickiness
Takeaway: Defensibility comes from speed, focus, and owning one lane deeply.
Cooper outlines how companies survive in an AI-accelerated world by solving specific, high-value problems that large AI labs won’t prioritize. As AI agents become the primary interface, switching costs—not features—create stickiness. Products trained on deep, niche data become hard to replace because retraining is expensive and risky. Fast iteration beats long build cycles, and early customer feedback compounds into a defensible data advantage. Specialists win by moving first and learning fastest.
[00:26:48] Specialize—Don’t Consolidate—in an Agent-Led Future
Takeaway: In an AI ecosystem, the best tool wins—not the biggest one.
Cooper expands on why consolidation fails in an agent-driven world. AI platforms like OpenAI rely on specialist tools through integrations rather than attempting domain mastery themselves. That means businesses should obsess over doing one thing exceptionally well instead of expanding into adjacent features. Broad platforms become mediocre; specialists become indispensable. The winners are the teams that dominate a narrow use case, collect the most relevant data, and become the default choice when AI agents delegate work.
Episode Resources:
- Shumel Lais on LinkedIn
- Jason White on LinkedIn
- Robert Armstrong on LinkedIn
- Patrick Wesonga on LinkedIn
- Mick Rigby on LinkedIn
- Lindiwe Stenberg on LinkedIn
- Cooper Simpson on LinkedIn
- Amanda Vandiver on LinkedIn
- Adam Landis on LinkedIn
- Branch on LinkedIn
- Branch Website
- How I Grew This on Apple Podcasts
- How I Grew This on Spotify
- How I Grew This on Simplecast