If you, like me, work for a big company with a never-ending stream of new features being introduced, I have a question for you. When do experiments stop being experiments and become permanent features?
Spoiler alert: I am not going to walk through all the steps of setting up an A/B test. Instead, I am going to concentrate on how to approach experiments with the right mindset and make sure they add value.
Imagine this scenario: someone thought it would be a good idea to introduce a quick link to something on the homepage. The test seems very simple. Some users see the original homepage, others see the version with the new link.
But here comes the interesting bit. What are the success metrics? Surely, if we put a link in a prominent enough space it will get clicks. Is that success? Or if it lifts conversion by 10%, is that success? What about 2%?
If you brush off those questions and launch the A/B test thinking you'll figure it out along the way, the link will most likely end up as a permanent placement. Soon enough it will be very hard to argue for any change because you'll be faced with the 'people are using it' argument. And eventually those 'experiments' will accumulate into an unruly, cluttered design.
To avoid that trap, approach every experiment with a plan:

- Have a longer-term vision for the product, broken down into phases
- Design your experiment so that you can start learning
- Set KPIs and success metrics
- Discuss with stakeholders what you will do with the learnings
- Run and analyse the experiment
- Make sure you stick to the plan you had before launching the experiment
First and foremost, know your goal. What do you want to learn?
Experiments should be embedded into a wider strategy, helping you learn as you go and iterate on those learnings. Otherwise you risk falling into the trap of 'fake experiments', where you treat them as A/B tests but proceed with your original plan regardless of the results.
This brings me to my next point: make sure you have proper KPIs in place, along with the capability and the tech to measure them.
While different metrics will be relevant for different scenarios, beware of false metrics. Say you decided that 5% engagement with the feature is enough for it to be successful. But what does that mean? Do people click and bounce off? Or do they all convert? Are we diverting traffic from somewhere else so the bottom line is still the same?
Those questions might seem obvious, but I cannot stress enough how important it is to think through your KPIs and make sure you can track them.
If you and your team have designed the experiment well, set KPIs, and planned how the learnings will inform your next actions, you are ready to run it. However, watch out for wrapping it up too quickly: stopping the moment you see a promising number inflates the chance of a false positive. Also account for the lifecycle of your product; external factors such as seasonality could impact the outcomes, so be careful with hasty judgements.
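One way to decide how long an experiment needs to run is a back-of-the-envelope sample size calculation before launch. Here is a minimal sketch using the standard normal-approximation formula for comparing two conversion rates; the function name, the 2% baseline, and the 0.5-point lift are all illustrative, and the defaults correspond to the conventional 5% significance level and 80% power:

```python
import math

def sample_size_per_arm(p_base, mde, z_alpha=1.96, z_power=0.84):
    """Approximate users needed per variant to detect an absolute
    lift of `mde` over baseline rate `p_base` (two-sided alpha=0.05,
    power=0.80 by default, normal approximation)."""
    p_new = p_base + mde
    # Combined variance of the two binomial proportions
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_power) ** 2 * variance / mde ** 2)

# Detecting a lift from 2.0% to 2.5% conversion needs over ten
# thousand users in each arm, which is why small tests that get
# "wrapped up quickly" rarely prove anything.
n = sample_size_per_arm(0.02, 0.005)
```

If the required sample is larger than the traffic you can realistically send through the test, that is a signal to test a bigger change or a higher-traffic page rather than to run the experiment anyway.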
Following all of the above should set you on the right path. Even if you are not the one setting up and running the experiment, it is still useful to know a bit more about how it's done.
I took Designing, Running, and Analyzing Experiments on Coursera as part of my Interaction Design course.
I believe that this course will be helpful to every interaction designer. It covers fundamentals relevant to the design and analysis of experiments, including mean comparisons, variance, statistical significance, practical significance, sampling, inclusion and exclusion criteria.
You might not need to apply the more technical skills in your job if you have a dedicated CRO resource. However, you will understand how your test works, whether the results are statistically significant, and what to do next.
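To make "statistically significant" concrete: for a conversion test, the standard check is a two-proportion z-test. Below is a minimal sketch in plain Python; the function name and the conversion counts are made up for illustration:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.
    conv_*: number of conversions; n_*: number of users per variant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# 2.0% vs 2.5% conversion on 10,000 users per arm
z, p = two_proportion_ztest(200, 10_000, 250, 10_000)
significant = p < 0.05
```

A p-value below 0.05 tells you the difference is unlikely to be pure noise; it does not tell you the lift is practically significant, which is exactly the distinction the course draws.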
There are also a few books I found useful, with 'A/B Testing: The Most Powerful Way to Turn Clicks Into Customers' standing out as the most helpful.
It’s very business-focused and has some great case studies and success stories in it.
If you have any questions or want to learn more, don’t hesitate to reach out 🙂