Clickstream analysis is a research method that tracks users’ actions as they interact with the product.
What to expect
This is a great way to get unbiased quantitative data about users’ behavior. It works great when we need to find out whether users do what we expected.
Example: you have designed a wizard for a logistics service that offers several transportation options depending on cargo size, delivery location, and a few other parameters. There are also several payment options, and the final price depends on them as well. You believe the product should be simplified, because no matter how good the design is, users will get confused. Your client thinks this choice is exactly what their customers want, and that a better UI design would remedy the high bounce rate. You design it as requested, but you are determined to learn whether it worked and at which step users bounce.
Note: though clickstream analysis works excellently for locating issues, it’s not well suited to finding the reason a problem occurs. To investigate that, you’ll probably want to run a moderated usability test.
Which tools to use
The first thing I should mention here is that you can’t conduct this kind of research on your own. You are going to need assistance from engineers to implement the tracking code in your product. Second, this kind of research has a learning curve: you’ll need to watch a few educational videos to get used to the interface. I recommend making a trial run to check that the settings are correct.
- Google Analytics is a powerful tool that provides more data than you will ever need as a designer. It’s free to a certain extent. For mobile apps consider using Firebase.
- Mixpanel is a tool with a focus on data interpretation. Some features are free.
How to conduct
Do your homework
Study the interface of your tool of choice and make sure you have enough resources to perform this kind of research. Figure out which metrics match the questions you have, and how to collect them so that they are representative. Seek competent advice wherever you can.
Get engineers and managers on board
This is a complex and time-consuming kind of research. Make sure you have all the support you may need. Not only does the implementation of the analytics code matter, but also the will to use your findings, which depends on the managers responsible for the product. Your insights don’t matter unless they become tasks in a backlog. Keep everyone who makes this happen in the loop.
Make a test run
Chances are you won’t be happy with the outcome of your first attempt to collect and interpret the data unless you have done this before. That’s perfectly normal given it’s a niche skill for a designer. Be patient and secure your success by testing the experiment before going wide. Make sure you understand the numbers you get and can turn them into a comprehensive report.
Pass the guidelines to engineers
Describe your needs as clearly as possible. Don’t neglect naming screens in a way you’ll recognize in the dataset, listing the actions you’d like to monitor, etc. Engineers may have a different idea of what makes the data convenient to work with, so don’t leave anything to chance.
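One way to hand such guidelines to engineers is a small tracking plan they can check events against. A minimal sketch in Python; all event names, screen names, and properties below are made up for the wizard example, not taken from any real product or analytics SDK:

```python
# Hypothetical tracking plan for the logistics-wizard example.
# Event and property names are illustrative assumptions.
TRACKING_PLAN = {
    "wizard_step_viewed": {
        "description": "User lands on a step of the shipping wizard",
        "properties": ["step_name", "step_index"],
    },
    "transport_option_selected": {
        "description": "User picks a transportation option",
        "properties": ["option", "cargo_size"],
    },
    "payment_method_selected": {
        "description": "User picks a payment method",
        "properties": ["method"],
    },
    "wizard_completed": {
        "description": "User reaches the confirmation screen",
        "properties": ["total_price"],
    },
}

def validate_event(name: str, properties: dict) -> bool:
    """Check that an event matches the plan before it ships."""
    spec = TRACKING_PLAN.get(name)
    if spec is None:
        return False  # event name not in the agreed plan
    return set(properties) == set(spec["properties"])
```

Agreeing on a spec like this up front, and validating events against it during the test run, spares you from discovering misnamed screens in the dataset after release.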
Give it time
Wait for some time after the release before checking the data. The right amount of time depends on the popularity of your product: for high-traffic ones, three days is enough, while for niche ones, a month may be just right. Avoid jumping to conclusions prematurely, because that invites confirmation bias and can compromise the entire experiment.
Interpret results and act on them
Numbers by themselves have no practical value. Your goal is to understand what they represent. When working on the report, describe behavior trends rather than raw numbers, stress the issues, and draft solutions. Statistics are not the point here; you only need them to substantiate your statements.
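To make the “at which step do users bounce” question concrete, raw event counts can be reduced to step-to-step drop-off rates. A minimal sketch; the step names and counts are invented sample data, not real measurements:

```python
# Hypothetical step counts from the shipping-wizard funnel.
funnel = [
    ("Cargo details", 1000),
    ("Transport option", 720),
    ("Payment method", 430),
    ("Confirmation", 410),
]

def drop_offs(steps):
    """Return (step name, share of users lost versus the previous step)."""
    result = []
    for (_, prev_count), (name, count) in zip(steps, steps[1:]):
        result.append((name, round(1 - count / prev_count, 2)))
    return result

for name, loss in drop_offs(funnel):
    print(f"{name}: {loss:.0%} drop-off")
# In this fabricated sample, the payment step loses the largest share,
# so that is where a moderated usability test would go looking for the cause.
```

Analytics tools draw such funnels for you, but reducing the dataset to a handful of drop-off figures like this is exactly the kind of trend, rather than raw number, that belongs in the report.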
How to deliver results
The approach you take depends on the initial question and your findings. In the most severe cases, I’d recommend drawing a user flow and marking the pain points, or comparing the flow you initially designed with the actual one. Visualization helps your peers make sense of the problem faster.
When it fails
The more complicated a method is, the more ways it has to go wrong. Take your time to prepare.
Not enough skills to use the tool
Interfaces of analytics software are dense and may be overwhelming at first. Learning the tool as you go is not an option here. Master it in advance.
You are collecting data for data
If you don’t have a particular question in mind, you probably won’t get any answers. If you are gathering data hoping to make sense of it afterward, you are likely wasting your time.
There’s too much data to process
Remember that, at the end of the day, you are the one who is going to interpret the results. Have some mercy on your future self and don’t bite off more than you can chew. Otherwise, you risk losing motivation before finding any valuable insight.
Note: I often hear that designers expect to grasp how users interact with their product via analytics. I believe our job is to design the user flow, not to figure it out post factum. The point of collecting data in our case is to learn whether we succeeded in directing users to their goal. So the search for alternative ways people use your product should not be the [only] goal of the experiment. We are looking for objective feedback, not a reality show episode.