Building Out Loud: Under the Hood of AI Agents (Logging, LLM Routing & Competitor Tracking)
Randy and Faith catch up on Faith’s startup progress after she sent a trial to 60 people and received useful feedback on the product, selling, implementation, and how it fits into customers’ tool stacks.
This episode goes under the hood on why shipping took longer: building reliable AI agents was frustrating and required re-architecting the platform. Faith demos a competitor-monitoring workflow that evolved from one failing “competitor agent” into 8–9 narrow agents covering sources, product summary, segments, news/updates, pricing, features, integrations, and review summaries.
They discuss the need for rigorous logging, avoiding hard-coded behavior, and treating AI like a “lazy developer” whose work must be audited. Faith added editable prompts, scheduled and manual refresh, cost tracking, and an LLM router with fallback and bring-your-own-keys support, cutting Gemini error rates from 48% to 19%. Next week she plans a new website and updates on product direction.
00:00 Welcome Back Setup
00:20 Trial Update Tease
00:56 Agent Dev Frustrations
01:48 Competitor Agent Breakdown
04:14 Pricing And Reviews Wins
06:32 Hardcoding And Trust
07:46 Logging Error Rates
09:58 Prompts Triggers LLM Switch
13:01 Time Estimates Rabbit Holes
14:18 Next Week Plans Wrap
By Discoveree.app