UX - The User Experience Podcast

After 11 Years In UX, This Is The Mistake I See Everyone Making.




🔬 The Observation That Prompted This Rant

  • We measure satisfaction, intention to use, overall liking β€” and then we go back to our teams and say "users don't trust it" or "satisfaction is low" and expect that to be actionable

🧠 How Experience Actually Works — A Quick Neuroscience Detour

  • Experience isn't one thing β€” it moves through layers: sensation β†’ perception β†’ judgment
  • Sensation is the raw signal reaching your sensors; perception is your brain integrating that into something meaningful; judgment is the conscious evaluation you emit at the end
  • Most UX research only captures the judgment β€” the tip of the iceberg β€” and skips everything underneath it
  • Knowing someone rated satisfaction a 3 out of 7 tells you nothing about what to change

🍷 The Sensory Evaluation Parallel

  • My master's specialisation was in sensory evaluation β€” how do you extract what someone actually sensed from what they perceived overall?
  • The wine, perfume, and automotive industries do this routinely: trained panels isolate attributes (texture, pitch, smell profile) and rate them independently from overall liking
  • We can and should do the same with software

πŸ“ Hassenzahl's Model β€” The Framework I Keep Coming Back To

  • Three levels: intended qualities (what the conceiver aims to produce) β†’ perceived qualities (what the user actually experiences) β†’ final judgment (satisfaction, purchase intent, etc.)
  • The gap between level one and level two is where most products fail β€” you can intend a premium feel without ever checking whether users actually perceive it as premium
  • Decompose until you can't decompose further: "premium" means nothing to an engineer β€” "high-pitched sound perceived as alarming rather than reassuring" does

💡 What I'm Actually Asking UX Researchers to Do

  • When evaluating a product, go beyond overall satisfaction β€” ask about the attributes that compose the experience: reliability, accuracy, responsiveness, tone, whatever is relevant to your context
  • Use rating scales so you can track change over time and compare across studies β€” even imperfect numbers beat no numbers
  • If you don't have time or budget to do this with users, do it internally β€” train your team to evaluate the attributes so that when you go back to the developers, you're speaking their language

⚠️ The Cost of Not Doing This

  • You end up doing redundant research rounds because you never captured the full picture the first time
  • Your feedback loop stays shallow β€” one round of iteration, and then the team doesn't know what to do next
  • You are shooting in the air, and the product improves slowly or not at all




UX - The User Experience Podcast, by Jeremy