
Here is the process.
First, you want to audit the test and analyze the results.
You are trying to understand whether the result could be due to a bug, or to another issue like your design not showing up the way you planned. Finally, you are auditing the test to decide whether to reject your hypothesis outright or only reject your solution to the hypothesis.

Let's say you tried to fix a visitor dilemma that appears in the selector area of a PDP. Through session recordings you have uncovered that people are frequently unsure about which scent to pick for a product.
Your hypothesis is that you can fix this dilemma by making it easier to understand what each scent smells like. Your idea is to add images next to each scent name (to basically explain what the scent smells like). You start testing it, but in the end you don't get a conversion uplift. Now, should you reject your hypothesis, or did your solution fail while the idea is still strong?
If your hypothesis is strong, you could implement other solutions, such as:
Adding descriptive scent names everyone already knows, like lavender, citrus, or cinnamon, that describe the scent in a way everyone understands and thus reduce the visitor dilemma.
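To make that alternative concrete, here is a minimal sketch of a selector that pairs each scent with plain-language descriptive copy. Everything in it (the ScentOption type, the example scent names and descriptions, the CSS class) is hypothetical and only illustrates the pattern.

```typescript
// Hypothetical data model: each scent carries a short, everyday-language description.
type ScentOption = {
  id: string;
  name: string;        // the brand name shown on the label
  description: string; // descriptive copy everyone understands
};

const scents: ScentOption[] = [
  { id: "fields", name: "Summer Fields", description: "Lavender with a hint of vanilla" },
  { id: "grove",  name: "Morning Grove",  description: "Fresh citrus and mint" },
  { id: "hearth", name: "Warm Hearth",    description: "Cinnamon and baked apple" },
];

// Render the selector so the description sits right next to the name,
// instead of relying on the name (or an image) alone.
function renderScentSelector(container: HTMLElement): void {
  for (const scent of scents) {
    const label = document.createElement("label");
    label.innerHTML = `
      <input type="radio" name="scent" value="${scent.id}">
      <strong>${scent.name}</strong> <span class="scent-desc">${scent.description}</span>
    `;
    container.appendChild(label);
  }
}
```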
Let me show you the flow we usually follow.

1. We QA the test first of all. This is very important, because maybe we missed something. We QA it in a few different ways.
1. We trigger the test on multiple devices, using BrowserStack to do this. We make sure to test it on mobile as well, click all the buttons, and check that everything loads perfectly.
2. We throw our variant URL into markup.io, a tool you can use to see how closely your live web version matches your original design. This is very important because the page can end up looking very different from your original idea. If we find bugs at this point, we look at whether they are so big that the test has to be rerun.
If there are no bugs, or only minor differences, we proceed to the next stage. (A scripted sketch of this QA pass follows below.)
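We do this QA pass manually through BrowserStack and markup.io, but the same idea can be scripted. Below is a minimal sketch using Playwright, a different tool than the ones named above: it loads the variant on a couple of viewports, clicks the key elements, and captures screenshots to compare against the design. The variant URL, the force_variant query parameter, and the data-test selectors are all hypothetical placeholders.

```typescript
// Scripted QA sketch with Playwright (npm install playwright).
import { chromium, devices } from "playwright";

// Hypothetical URL that forces the B variant to render.
const VARIANT_URL = "https://example.com/product/candle?force_variant=B";

const viewports = [
  { name: "desktop", viewport: { width: 1440, height: 900 } },
  { name: "mobile", ...devices["iPhone 13"] },
];

async function qaVariant(): Promise<void> {
  const browser = await chromium.launch();
  for (const { name, ...contextOptions } of viewports) {
    const context = await browser.newContext(contextOptions);
    const page = await context.newPage();
    await page.goto(VARIANT_URL, { waitUntil: "networkidle" });

    // Hypothetical selectors: exercise the changed area and the main CTA.
    await page.locator('[data-test="scent-selector"] input[type="radio"]').first().click();
    await page.locator('[data-test="add-to-cart"]').first().click();

    // Screenshot for a side-by-side check against the original design.
    await page.screenshot({ path: `qa-${name}.png`, fullPage: true });
    await context.close();
  }
  await browser.close();
}

qaVariant().catch((err) => {
  console.error(err);
  process.exit(1);
});
```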
We look at session data. Recordings are our number one tool here. We can quickly get a good feel for user behavior if we segment the recordings, using filters like:
- Everyone who interacted with, scrolled past, or clicked on the area we changed in our split-test
- Everyone who didn't bounce
- Everyone who added to cart
- Everyone who purchased
These are great filters to use if you have a lot of traffic, because you will have hundreds or thousands of recordings to go through.
A big hack is to create sub-goals in your A/B test. For example, if I want people to pick a scent in the selector area of the PDP, I create a sub-goal for everyone who picked a scent, and perhaps another for everyone who engaged with the scent selector. If I do this correctly, I can then go to Mouseflow or Convert and only look at session recordings that are relevant to my split-test. This is a huge one, guys! It makes a big difference and cuts down on time.
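As a rough illustration of wiring up such a sub-goal on the page itself, here is a sketch that fires an event whenever a visitor picks a scent. The selector, the event name, and the generic dataLayer destination are assumptions; in practice you would point this at whatever goal or tagging API your testing and session-recording tools expose.

```typescript
// Sketch: record a "picked a scent" sub-goal so sessions can later be filtered
// down to visitors who actually used the area the split-test changed.
// The data-test selector and the event name are hypothetical placeholders.

type DataLayerEvent = { event: string } & Record<string, unknown>;

function trackSubGoal(event: DataLayerEvent): void {
  // Push to a generic dataLayer (as used by tag managers); swap this call for
  // the goal/tagging API of whatever testing or recording tool you actually use.
  const w = window as unknown as { dataLayer?: DataLayerEvent[] };
  w.dataLayer = w.dataLayer ?? [];
  w.dataLayer.push(event);
}

document
  .querySelectorAll<HTMLInputElement>('[data-test="scent-selector"] input[type="radio"]')
  .forEach((input) => {
    input.addEventListener("change", () => {
      trackSubGoal({ event: "scent_picked", scent: input.value });
    });
  });
```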
After recordings you can go further by also looking over heatmaps, clickmaps and so on. They can give you additional insights as well: are people doing what you expected them to do? Sometimes we also go the extra mile and set up simple user-group tests on Userbrain (https://www.userbrain.com/).
Here we present both designs to the visitor and ask which version they prefer. This is really good if you made bigger design changes. We might ask, "Which product gallery do you prefer, and why?" or "Which selector option do you prefer, and why?"
Sometimes we will also put only our hypothesis to them, for example: "The current page makes it difficult to understand which scent to pick, because each name does not explain the scent associated with it. Do you agree with this statement or not? Please explain your answer in detail."
This is a good way for us to test a hypothesis and see whether people agree with it. We have gotten a lot of good feedback from doing this, both before running a test and afterwards, when deciding whether it makes sense to design an alternative version of a solution.