Experiencing Data w/ Brian T. O’Neill  (AI & data product management leadership—powered by UX design)

183 - Part II: Designing with the Flow of Work: Accelerating Sales in B2B Analytics and AI Products by Minimizing Behavior Change



In this second part of my three-part series (catch Part I via episode 182), I dig deeper into the key idea that sales in commercial data products can be accelerated by designing for actual user workflows—vs. going wide with a “many-purpose” AI and analytics solution that “does more,” but is misaligned with how users’ most important work actually gets done.

 

To unpack this, I introduce the concept of user experience (UX) outcomes and explain why enabling these outcomes may be a prerequisite for gaining sales traction, and for your customers to see the value of your solution. I also share practical steps to improve UX outcomes in commercial data products, from establishing a baseline definition of UX quality to mapping out users’ current workflows (and future ones, when agentic AI changes their jobs). Finally, I talk about how approaching product development as a series of small “bets” helps you build small and learn fast, so you can accelerate value creation.

 

Highlights/ Skip to:

  • Continuing the journey: designing for users, workflows, and tasks (00:32)
  • How UX impacts sales—not just usage and adoption (02:16)
  • Understanding how you can leverage users’ frustrations and perceived risks as fuel for building an indispensable data product (04:11) 
  • Definition of a UX outcome (07:30)
  • Establishing a baseline definition of product (UX) quality, so you know how to observe and measure improvement (11:04)
  • Spotting friction and solving the right customer problems first (15:34)
  • Collecting actionable user feedback (20:02)
  • Moving users along the scale from frustration to satisfaction to delight (23:04)
  • Unique challenges of designing B2B AI and analytics products used for decision intelligence (25:04)
Quotes from Today’s Episode

    One of the hardest parts of building anything meaningful, especially in B2B or data-heavy spaces, is pausing long enough to ask what the actual ‘it’ is that we’re trying to solve.

    People rush into building the fix, pitching the feature, or drafting the roadmap before they’ve taken even a moment to define what the user keeps tripping over in their day-to-day environment.

     

    And until you slow down and articulate that shared, observable frustration, you’re basically operating on vibes and assumptions instead of behavior and reality.

     

    What you want is not a generic problem statement but an agreed-upon description of the two or three most painful frictions that are obvious to everyone involved, frictions the user experiences visibly and repeatedly in the flow of work.

     

    Once you have that grounding, everything else (prioritization, design decisions, sequencing, even organizational alignment) suddenly becomes much easier, because you’re no longer debating abstractions; you’re working against the same measurable anchor.

     

    And the irony is, the faster you try to skip this step, the longer the project drags on, because every downstream conversation becomes a debate about interpretive language rather than a conversation about a shared, observable experience.

    __

    Want people to pay for your product? Solve an *observable* problem—not a vague information or data problem. What do I mean?

    "When you’re trying to solve a problem for users, especially in analytical or AI-driven products, one of the biggest traps is relying on interpretive statements instead of observable ones.

     

    Interpretive phrasing like ‘they’re overwhelmed’ or ‘they don’t trust the data’ feels descriptive, but it hides the important question of what, exactly, we can see them doing that signals the problem.

     

    If you can’t film it happening, if you can’t watch the behavior occur in real time, then you don’t actually have a problem definition you can design around.

     

    Observable frustration might be the user jumping between four screens, copying and pasting the same value into different systems, or re-running a query five times because something feels off even though they can’t articulate why.

     

    Those concrete behaviors are what allow teams to converge and say, ‘Yes, that’s the thing, that is the friction we agree must change,’ and that shift from interpretation to observation becomes the foundation for better design, better decision-making, and far less wasted effort.

     

    And once you anchor the conversation in visible behavior, you eliminate so many circular debates and give everyone, from engineering to leadership, a shared starting point that’s grounded in reality instead of theory."

    __

    One of the reasons that measuring the usability/utility/satisfaction of your product’s UX might seem hard is that you don’t have a baseline definition of how satisfactory (or not) the product is right now. As such, it’s very hard to tell if you’re just making product *changes*—or you’re making *improvements* that might make the product worth paying for at all, worth paying more for, or easier to buy.

    "It’s surprisingly common for teams to claim they’re improving something when they’ve never taken the time to document what the current state even looks like.

    If you want to create a meaningful improvement, something a user actually feels, you need to understand the baseline level of friction they tolerate today, not what you imagine that friction might be.


    Establishing a baseline is not glamorous work, but it’s the work that prevents you from building changes that make sense on paper but do nothing to the real flow of work.
    When you diagram the existing workflow, when you map the sequence of steps the user actually takes, the mismatches between your mental model and their lived experience become crystal clear, and the design direction becomes far less ambiguous.

    That act of grounding yourself in the current state allows every subsequent decision (prioritizing fixes, determining scope, measuring progress) to be aligned with reality rather than assumptions.


    And without that baseline, you risk designing solutions that float in conceptual space, disconnected from the very pains you claim to be addressing."

    __

    Prototypes are a great way to learn—if you’re actually treating them as a means to learn, and not a product you intend to deliver regardless of the feedback customers give you. 

    "People often think prototyping is about validating whether their solution works, but the deeper purpose is to refine the problem itself.

    Once you put even a rough prototype in front of someone and watch what they do with it, you discover the edges of the problem more accurately than any conversation or meeting can reveal.


    Users will click in surprising places, ignore the part you thought mattered most, or reveal entirely different frictions just by trying to interact with the thing you placed in front of them.
    That process doesn’t just improve the design, it improves the team’s understanding of which parts of the problem are real and which parts were just guesses.

    Prototyping becomes a kind of externalization of assumptions, forcing you to confront whether you’re solving the friction that actually holds back the flow of work or a friction you merely predicted.

    And every iteration becomes less about perfecting the interface and more about sharpening the clarity of the underlying problem, which is why the teams that prototype early tend to build faster, with better alignment, and far fewer detours."

    __

    Most founders and data people tend to measure UX quality by “counting usage” of their solution: tracking usage stats, session analytics, and so on. The problem is that this tells you nothing useful about whether people are satisfied (“meets spec”) or delighted (“a product they can’t live without”). These are product metrics, but they don’t reflect how people feel.

    There are better measurements to use for evaluating users’ experience that go beyond “willingness to pay.” 

    Payment is great, but in B2B products, buyers aren’t always users—and we’ve all bought something based on the promise of what it would do for us, but the promise fell short.

    "In B2B analytics and AI products, the biggest challenge isn’t complexity, it’s ambiguity around what outcome the product is actually responsible for changing.

     

    Teams often define success in terms of internal goals like ‘adoption,’ ‘usage,’ or ‘efficiency,’ but those metrics don’t tell you what the user’s experience is supposed to look like once the product is working well.

     

    A product tied to vague business outcomes tends to drift because no one agrees on what the improvement should feel like in the user’s real workflow.

     

    What you want are visible, measurable, user-centric outcomes: outcomes that describe how the user’s behavior or experience will change once the solution is in place, down to the concrete actions they’ll no longer need to take.

     

    When you articulate outcomes at that level, it forces the entire organization to align around a shared target, reduces the scope bloat that normally plagues enterprise products, and gives you a way to evaluate whether you’re actually removing friction rather than just adding more layers of tooling.

     

    And ironically, the clearer the user outcome is, the easier it becomes to achieve the business outcome, because the product is no longer floating in abstraction, it’s anchored in the lived reality of the people who use it."

     

    Links
    • Listen to part one: Episode 182 
    • Schedule a Design-Eyes Assessment with me and get clarity now.
By Brian T. O’Neill from Designing for Analytics
