Welcome back to UXChange — this is Part 2 of our series on Measuring the User Experience of AI.
In the first episode, we explored vanity vs. sanity metrics — the surface-level KPIs that look good on dashboards versus the deeper indicators that truly define trust, satisfaction, and control.
Today, we’re going a step further. I’ll break down why we need a new approach to measure AI experiences, and why our traditional UX frameworks simply don’t fit anymore.
If you’ve ever asked yourself “how do I even measure if my AI is working well?” — this episode will give you the foundation to start answering that question the right way.
By Jeremy