The AI podcast for product teams

Designing Agents That Work: The New Rules for AI Product Teams



Our latest episode explores the moment AI stops being a tool and starts becoming an organizational model. Agentic systems are already redefining how work, design, and decision‑making happen, forcing leaders to abandon deterministic logic for probabilistic, adaptive systems.

“Agentic systems force a mindshift—from scripts and taxonomies to semantics, intent, and action.”

🎧 Listen on Spotify · 🍎 Listen on Apple Podcasts

And if you want to go deeper, check out Kwame Nyanning’s book, Agentics: The Design of Agents and Their Impact on Innovation. It’s the definitive field guide to designing agentic systems that actually work.

Most striking for me was the discussion of moving from pixel-perfect to outcome-obsessed. Designers and product teams have long fixated on the delivery of the output; now it’s time to be most concerned with the impact on customers.

The hard truth: Most organizations are trying to graft AI onto brittle systems built for predictability. Agentic design demands something deeper: ontological redesign, defining entities, relationships, and intents around customer outcomes, not internal structures. If you can’t model intent, you can’t build an agent.
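To make “model intent” concrete, here’s a minimal sketch of what an ontology-first data model might look like. Every class and name below is a hypothetical illustration, not a schema from the episode or the book:

```python
from dataclasses import dataclass, field

# A minimal sketch of an ontology: entities, relationships, and intents
# modeled around customer outcomes rather than internal structures.
# All names here are hypothetical examples, not a prescribed schema.

@dataclass
class Entity:
    name: str                       # e.g. "Customer", "Order"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    subject: str                    # e.g. "Customer"
    predicate: str                  # e.g. "placed"
    obj: str                        # e.g. "Order"

@dataclass
class Intent:
    name: str                       # e.g. "resolve_late_delivery"
    outcome: str                    # the customer outcome this intent serves
    entities: list[str]             # entities the agent must reason over

# If you can't write this down for a workflow, an agent can't act on it.
late_delivery = Intent(
    name="resolve_late_delivery",
    outcome="customer receives order or refund within 24h",
    entities=["Customer", "Order", "Shipment"],
)
```

The test is simple: if a workflow’s entities, relationships, and intents can’t be stated this plainly, there is nothing for an agent to reason over.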

Key takeaway: Intent capture is the new UX. Products that succeed will anticipate user context, detect discontent, and adapt autonomously.

Featured Articles: Where Reality Collides with Ambition

AI Has Flipped Software Development — Luke Wroblewski

Wroblewski lays out how AI has upended the software stack. Interfaces now generate code. Designers define the logic while engineers review and govern it. The result? Faster cycles but a dangerous illusion of progress. Design intuition becomes the new compiler, and prompt literacy replaces syntax. The real risk is velocity without comprehension; teams ship faster but learn slower.

Takeaway: Speed isn’t the problem; blind acceleration is. Governance, evaluation, and feedback loops are now design disciplines.

Agentic Workflows Explained — The Department of Product

This piece exposes what it really takes to build functioning agents: memory, planning, orchestration, cost control, fallback logic. If your “agent” doesn’t break, it’s probably not learning. Resilient systems require distributed cognition: agents reasoning and retrying within boundaries. Evaluation‑first design becomes the only safeguard against chaos.
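To make “retrying within boundaries” concrete, here’s a minimal sketch of such a loop. The plan/act/evaluate/fallback functions and the budget numbers are stand-ins, not the article’s implementation:

```python
import time

MAX_RETRIES = 3          # boundary: don't loop forever
COST_CEILING = 0.50      # boundary: hypothetical dollar budget per task

def run_agent(task, plan, act, evaluate, fallback):
    """Sketch of an agent loop: plan, act, evaluate, retry within
    boundaries, and fall back when the boundaries are exhausted.
    plan/act/evaluate/fallback are stand-ins for real components."""
    spent = 0.0
    for attempt in range(MAX_RETRIES):
        steps = plan(task)                 # planning
        result, cost = act(steps)          # orchestration / tool calls
        spent += cost
        if evaluate(task, result):         # evaluation-first design
            return result
        if spent >= COST_CEILING:          # cost control
            break
        time.sleep(2 ** attempt)           # back off before retrying
    return fallback(task)                  # visible, designed failure
```

The specifics don’t matter; what matters is that retries, cost ceilings, and fallbacks are explicit design decisions rather than afterthoughts.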

Takeaway: If your agent never fails visibly, it’s not thinking deeply enough. Failure is how agents learn.

Featured Videos: Cutting Through the Noise

This viral video sells the dream—agents at the click of a button. The reality? Building bots has never been easier, but building agents remains brutally hard. Real agents need long‑term memory, adaptive interfaces, and feedback loops that learn from success and failure. Wiring APIs is not design; it’s plumbing. Until agents can reason, reflect, and recover, they’re glorified scripts.

Reality check: The tools are improving, but the discipline is not.

A rare honest take. This one focuses on the HCI, orchestration, and reliability problems that still plague agentic systems. We’re close to autonomous task completion, yet nowhere near persistent agency. The real challenge isn’t autonomy—it’s alignment over time.

Takeaway: Advancement is fast, but coherence is slow. Designing for recovery and evaluation is the new frontier.

Join Our Next Workshop

If you want to turn these insights into action, join our upcoming Disruptive AI Product Strategy Workshop. You’ll learn how to pressure‑test AI ideas, model agentic systems, and build products that survive beyond the hype. There’s a special 2‑for‑1 offer at the link—bring a teammate and cut the noise together.

Recommended Resource: AI & Human Behaviour — Behavioural Insights Team (2025)

BIT’s report is a must‑read for anyone designing human‑in‑the‑loop systems. It dissects four behavioural shifts: automation complacency, choice compression, empathy erosion, and algorithmic dependency.

Their experiments reveal that AI assistance can dull cognition—users who relied most on recommendations learned less and questioned less. They also found that friction builds trust; brief pauses and explanations improved comprehension and retention. The killer insight? Transparency alone doesn’t work. People often overestimate their understanding when systems explain themselves.

Takeaway: Don’t make users “trust AI.” Make them verify it. Design friction that protects judgment.

Recommended Reads: What to Study Next

* Computational Foundations of Human‑AI Interaction — Redefines how intent and alignment are measured between humans and agents.

* Understanding Ontology — “The O-word, ‘ontology,’ is here! Traditionally, you couldn’t say the word ‘ontology’ in tech circles without getting a side-eye.”

* The Anatomy of a Personal Health Agent (Google Research) — A prototype for truly personal, proactive AI systems that act before users ask.

* What is AI Infrastructure Debt? — Why ignoring the invisible architecture behind agents is the next form of technical debt.

* AI Agents 101 (Armand Arman) — A crisp overview of the agent ecosystem, explaining architectures, limitations, and how to differentiate hype from applied design.

* Prompting Guide: Introduction to AI Agents — A concise breakdown of how prompt frameworks are evolving into agent frameworks, highlighting key mental models for builders.

* IBM Think: AI Agents Overview — IBM’s practical take on enterprise‑grade agents, covering governance, reliability, and scale.

* Beyond the Machine (Frank Chimero) — A reflection on designing meaning, not just efficiency, in an age of automation.

Design an Effective AI Strategy

I’ve helped teams at Spotify, Microsoft, the NFL, Mozilla, and Hims & Hers transform how they engage customers. If you’re trying to figure out where agents actually create value, here’s how I can help:

* Internal workflows: Identify 2–3 use cases that cut cycle time (intent capture → plan → act → verify), then stand up evals, cost ceilings, and recovery paths so they survive real‑world messiness (see the eval sketch after this list).

* Customer‑facing value: Map your ontology (entities, relationships, intents), design the interface for intent and discontent, and instrument learning loops so agents get better with use.

* Proof over promise: We’ll define outcomes, build the evaluation rubric first, and price pilots on results.
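As one illustration of “build the evaluation rubric first”: write the outcome checks before the agent exists, so a pilot passes or fails on evidence. The checks and run fields below are hypothetical examples, not a standard rubric:

```python
# A minimal evaluation rubric, defined before the agent is built.
# Each check maps a desired outcome to a pass/fail test over recorded
# agent runs. All checks and fields are hypothetical examples.

RUBRIC = {
    "captures_intent": lambda run: run["intent"] is not None,
    "stays_in_budget": lambda run: run["cost"] <= 0.50,
    "verifies_result": lambda run: run["verified"] is True,
    "recovers_or_escalates": lambda run: run["outcome"] in ("done", "escalated"),
}

def score(runs):
    """Fraction of runs passing every check; gate pilots on this number."""
    passed = sum(all(check(r) for check in RUBRIC.values()) for r in runs)
    return passed / len(runs)

# Example: two recorded runs of a hypothetical pilot
runs = [
    {"intent": "cancel_order", "cost": 0.12, "verified": True, "outcome": "done"},
    {"intent": None, "cost": 0.61, "verified": False, "outcome": "failed"},
]
print(f"pass rate: {score(runs):.0%}")  # -> pass rate: 50%
```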

Questions or want a quick read on your roadmap? Email me: [email protected].

The Bottom Line

The agentic era rewards clarity, not hype. Every designer and PM will soon face the same challenge: how to design for autonomy without abdicating control.

You can’t prompt your way to good products; you can only design your way there by grounding every decision in ontology, intent, and evaluation.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

The AI podcast for product teams · By Arpy Dragffy