Sauce jokes aside, we get real about AI in customer support: why it’s tempting to chase the trend, how early demos can mislead, and what actually works once you bring models into day-to-day operations. Rob Dwyer joins us to unpack the difference between toy and tool, sharing lessons from building conversational analytics and automated QA across diverse industries.
We dig into the myth of “one model to rule them all” and get specific about task fit. Generative models shine at qualitative judgments like tone, professionalism, and focus in short, discrete interactions. They stumble with counting, sequence checks, and multi‑step logic—especially across long, technical threads. That gap matters when you’re grading complex troubleshooting or evaluating revenue potential. We talk about the demo effect, why a few promising outputs aren’t proof, and how to set up meaningful evaluations with enough data to avoid false confidence.
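To make the evaluation point concrete, here's a minimal sketch (not from the episode) of what "enough data to avoid false confidence" can look like: grade a decent-sized labeled sample with a model and measure agreement against human QA, rather than trusting a handful of demo outputs. The `model_grade` stub and the sample transcripts are placeholders for whatever model and data you actually use.

```python
import random
from collections import Counter

# Hypothetical stand-in for whatever model call grades a ticket for tone/policy;
# swap in your own client. Here it just returns a canned label.
def model_grade(transcript: str) -> str:
    return "pass"  # placeholder: imagine an LLM judgment here

# Labeled sample: (transcript, human QA label). In practice pull a few hundred
# real interactions, not the handful a demo shows you.
labeled = [("Thanks for your patience, I've reissued the refund.", "pass"),
           ("Not my problem, read the docs.", "fail")] * 100

random.shuffle(labeled)
agreement = Counter()
for transcript, human_label in labeled:
    agreement[model_grade(transcript) == human_label] += 1

n = sum(agreement.values())
print(f"Agreement with human QA: {agreement[True] / n:.1%} over {n} samples")
```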
If your world is regulated or highly technical, retrieval‑augmented generation (RAG) becomes essential. Grounding model answers in a current, curated knowledge base cuts hallucinations and keeps responses aligned with policy. But RAG is only as good as your content ops, which means version control, ownership, and clear guardrails. We also challenge the monolith instinct: mega‑suites promise simplicity yet often deliver stitched‑together mediocrity. A best‑of‑breed stack with clean integrations and strong observability gives you better performance and control.
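For the RAG idea, a rough sketch of the grounding step, assuming a curated knowledge base you control: retrieve the most relevant passages and hand them to the model alongside the question. The knowledge-base entries, the naive keyword retrieval, and the prompt format are all illustrative, not any specific product's API; real systems typically use embeddings, but the principle is the same.

```python
# Curated, versioned knowledge base entries (illustrative).
KNOWLEDGE_BASE = {
    "refund-policy-v3": "Refunds are issued within 14 days of purchase with proof of payment.",
    "escalation-v2": "Escalate security incidents to the on-call engineer immediately.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    # Naive keyword-overlap ranking, standing in for embedding search.
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda item: len(set(question.lower().split()) & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"[{doc_id}] {text}" for doc_id, text in scored[:k]]

def build_prompt(question: str) -> str:
    # Ground the answer: instruct the model to use only the retrieved passages.
    context = "\n".join(retrieve(question))
    return f"Answer using only the passages below.\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do customers have to request a refund?"))
```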
You’ll leave with a practical framing: good use cases (assisted QA for tone and policy, summarization, intent clustering), maybe use cases (autonomy on complex workflows), and “oh hell no” zones (high‑stakes decisions without grounding or oversight). The throughline is patience—define the problem, validate with real samples, measure cost per correct outcome, and keep humans in the loop where it counts. Subscribe for more clear-eyed conversations on building support that’s fast, human, and genuinely smarter with AI.
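And "cost per correct outcome" is simple arithmetic worth writing down: divide total spend by the number of outputs that were actually right, so a cheaper model that's wrong more often doesn't look like a bargain. The numbers below are made up purely to show the comparison.

```python
# Illustrative figures only: compare setups by cost per correct outcome,
# not by raw per-call price.
runs = {
    "cheap model, no grounding": {"cost_per_call": 0.002, "calls": 1000, "correct": 610},
    "RAG + review step":         {"cost_per_call": 0.011, "calls": 1000, "correct": 940},
}
for name, r in runs.items():
    cost_per_correct = (r["cost_per_call"] * r["calls"]) / r["correct"]
    print(f"{name}: ${cost_per_correct:.4f} per correct outcome")
```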
By Charlotte Ward