Introduction: defining goals, picking the right method, and planning the experiment.
I must admit, “research” does sound a bit imposing: like something expensive, time-consuming, and calling for a peer-reviewed publication in an esteemed journal. Not something you do between lunch and a ping-pong match. So why even bother?
Let’s start from the very beginning: the definition of UX research.
UX research is the practice of observing users as they interact with the object in question to find insights and spot weak links. You may find somewhat more complicated definitions all over the web, but still, none of them say a word about wires, sensors, or an interrogation room. Observation is not that expensive, right?
The thing is, you can learn plenty and improve your product drastically using simple techniques. The outcome doesn’t always correlate with the effort put into research. In fact, excessive data may overwhelm you and make it difficult to process the results and act on your findings. Long story short: it’s better to run your research hallway style than to wait till the lab is set up for you.
By UX research, people often mean either usability testing in a lab with a one-way mirror or a study of best practices. Both are indeed research methods, but both are just small pieces of a wide variety. All of these methods fall into two general categories: qualitative and quantitative.
Qualitative methods focus on observation rather than gathering numerical data. They are well suited to answering questions about the reasons behind a problem. Interviews, diary studies, and usability testing are qualitative methods.
Quantitative methods are suitable for measuring success or spotting weak links. They are best at answering questions like “how many,” “how much,” “how often,” etc. Clickstream analytics, A/B testing, and surveys are quantitative methods.
Choosing the right approach is key to finding a comprehensive answer to your question. However, some sophisticated tasks may require a combination of methods for an optimal result.
Over time, I came up with a general approach: I divide my goals into two groups and use particular methods depending on the circumstances. The first condition is whether or not I have access to actual users of the product I’m working on. The second is the nature of the feature itself: is it a single interaction or a loop? Is the feedback immediate or delayed?
Single interactions include photo cropping, content sharing, or creating an item: you see the result of the action and can observe a user’s reaction right away.
Loops and features with delayed feedback include notifications, recommendations, and adaptive interface patterns like personalization. You can’t tell whether they work well on the spot; some observation is needed to evaluate the success of these kinds of features.
I encourage you to experiment with your approach and combine different methods as you see fit. Here’s the usual railway track for my train of thought on this matter.
Tip: if you and your stakeholders can’t agree on a plan, go ahead with whatever your team suggests. You’ll be able to adjust on the go and save time on meaningless bickering.
There is a lot of fuss about usability testing and particular techniques, so I guess it won’t hurt to clarify some terms.
Hallway testing
The idea is that you grab your prototype, step outside your office, recruit respondents in the hallway, and test on the spot. These tests are called “hallway” because of the way they are conducted: fast and cheap, without much preparation. Technically, it’s not a method but an approach, meaning you can run any test hallway style. This whole series is, more or less, about hallway testing.
Focus groups
A focus group is a gathering of random people tasked with discussing their perception of a brand or a product. It’s widely used by marketing agencies to improve a product’s image by learning what people think of it. It may or may not be related to customer experience and, subsequently, user experience, but it’s not a UX research tool. Also, I believe it’s not practical to work with people in groups, because the loudest members tend to influence the rest of the group, which compromises the results of the experiment.
Parallel running (aka split testing aka A/B testing)
The goal of this experiment is to compare somewhat mature versions of the product to learn which one performs better. The team releases two or more versions simultaneously to different, randomly formed subsets of users, then checks the performance of each option and lets the winner into the world. Usually, the stable version keeps running for the majority of users so that an unsuccessful update won’t be dramatic for the company.
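The mechanics above — splitting users into random but stable buckets while most of them stay on the stable version — can be sketched in a few lines. This is a minimal illustration, not a production experimentation framework; the function name, the 10% rollout share, and the variant labels are all made up for the example.

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B"), rollout=0.10) -> str:
    """Deterministically bucket a user for a split test.

    Only a `rollout` share of users enters the experiment;
    everyone else keeps seeing the stable version.
    """
    # Hash the user id into a stable number in [0, 1] so the same
    # user always lands in the same bucket across sessions.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    score = int(digest[:8], 16) / 0xFFFFFFFF
    if score >= rollout:
        return "stable"  # the majority is unaffected by the experiment
    # Spread experiment participants evenly across the variants.
    slot = int(digest[8:16], 16) % len(variants)
    return variants[slot]

# Assignment is repeatable: the same user always gets the same bucket.
print(assign_variant("user-42") == assign_variant("user-42"))
```

Hashing the user id (rather than rolling dice per request) is what makes the experience consistent for each user, which matters for any metric measured over more than one session.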
I’m a firm believer in the power of process. It’s almost impossible to fail if you follow a well-designed plan. Here are some tips to help you build the structure for your venture.
Define the right questions before starting the research
Ask specific questions to get accurate answers. The answers you get are, in fact, the issues you are going to fix later on. The clearer your problems, the more realistic your goals. Vague and ambiguous requests are challenging to fulfill.
Bad question: Is the UX of my app intuitive to users?
Good question: Do users understand how to use the price calculator?
You may be eager to collect all the available data at once and then look for insights within it. Resist the temptation and define what you’re looking for in advance; otherwise, the dataset is going to be overwhelming. Be smart about your resources and decide how much effort you’re willing to put into your analysis before collecting any data. Ask no more than two or three questions at a time. If you have more, prioritize them and start with the important ones.
Example: you’re designing a site for a dropshipping company whose specialty is furniture and home decor. You want to learn whether users will have any difficulty finding the right items, comparing prices from different stores, or selecting size and color. Is the checkout flow optimal? Would users trust your client with the delivery of glass and mirrors? Would they be comfortable providing credit card data? That’s a lot of questions, and they’re all legit, but it’s way too much for a single experiment. In this case, define the critical question and proceed with it. For e-commerce, that would likely be the search and checkout flows, because those are vital for the business to function.
Define your success metrics
Start by defining what kind of results would signify success and what its markers are. Define metrics by which you’re going to measure whether the UI performed as planned, and make sure they are valid answers to your questions. Then compare your findings against your expectations.
Example: we want to know whether the checkout flow for the dropshipping company is optimal. How would we measure success in this case? The first metric is task completion: if users can’t finish the task, our client fails to make a sale. The second is the time needed to complete it. In an experiment environment, people usually feel obligated to complete the task no matter what, so they are willing to spend as much time as needed. In real life, their attention span is much shorter: if it takes more than a few minutes to complete the purchase, they will likely drop off. Measure how long it takes you and your team to check out, and compare that with the time your testers take to do the same. If the difference is significant, look for the weak links in your design.
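The two metrics from the example — completion rate and time on task compared against your own team’s baseline — are easy to compute from raw session logs. A minimal sketch, with entirely hypothetical numbers standing in for real test sessions:

```python
from statistics import median

# Hypothetical session logs from a checkout usability test:
# (did the tester complete the purchase?, seconds until finish or give-up)
sessions = [
    (True, 95), (True, 140), (False, 300),
    (True, 110), (True, 180), (False, 240), (True, 125),
]

# Metric 1: share of testers who finished the task.
completion_rate = sum(done for done, _ in sessions) / len(sessions)

# Metric 2: time on task. Only successful attempts are meaningful here;
# abandoned sessions measure patience, not the flow's speed.
times = [t for done, t in sessions if done]
median_time = median(times)

team_baseline = 60  # seconds your own team needs for the same flow
slowdown = median_time / team_baseline

print(f"completed: {completion_rate:.0%}, "
      f"median time: {median_time}s, "
      f"{slowdown:.1f}x slower than the team baseline")
```

Using the median rather than the mean keeps one very slow (or very fast) tester from skewing the comparison, which matters with the small samples typical of hallway tests.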
The material you gather during interviews and observation might be excessive. Wrap it up tight before presenting it to your team.
Process the outcome
go through your recordings or notes once again and look for patterns. Refine the skeleton of your findings.
Make it relatable for your audience
your stakeholders should understand the current workflow of the people you’re building for. Use quotes or stories if you believe they would help your team get a better grasp of the context.
Prepare a deck
using the results of your processing. If possible, represent your findings graphically to spare your team all the reading.
Call a meeting
to present the results. You are the one who possesses valuable information about your users. Share it with your team and be prepared to answer some questions. Send out your deck after the meeting.
Keep track of the product
during the whole cycle of development. Even if the preliminary research is done well, people tend to get carried away with their own ideas of what’s best for the product, which leads them away from the needs of actual users. Make sure everyone keeps the ultimate goal of the project in mind at all times.
Tip: prepare decks and present the results of your work to your team whenever possible. You’ll stand a better chance of being heard and understood when delivering results in person. It doesn’t have to be a 40-something-slide deck and an exhausting meeting, though: a few slides and a 15-minute presentation work just fine for casual research.
Use your results
Getting results is one thing; using them is a whole other story. I’ve seen plenty of well-conducted, insight-rich experiments sit in the bottom drawer for ages, mostly because stakeholders weren’t ready to admit their first solution wasn’t perfect. Anyone else can make an error; after all, we’re only human. As for me? No, never. Something must be wrong with those users, or the prototype wasn’t ready for testing. Let’s check it out later.
Or better yet: never.
There are no universal solutions; some of yours are going to fail now and then. The faster you get over the fact that not every single one of your initial ideas is pure gold, the better off you’ll be as a professional. Keep your expectations of yourself adequate and carry on.
Tip: read popular science books to advance your research skills. Authors describe lots of experiments and the methodology researchers used to get their results. They also bust myths by explaining what went wrong in faulty tests. This will help you make sense of scientific methods and avoid biases.
UX research is an idea that is easy to pitch but tricky to implement. There are several reasons for that, all of them more or less legit: some are budget-related, some psychological, so I suggest we address them separately.
As the people accountable for the overall results, managers are concerned about a project’s running time and total cost. They are often resistant to spending resources on validating solutions they believe will perform well as is, or ones that clients have already agreed on. They do have a point: some kinds of research require lab equipment or specialized training.
The quality of the outcome does not directly depend on the time and money spent on research; in fact, you can get plenty of insights running your tests fast and dirty. A DIY experiment won’t give you a vast knowledge of your users, but it won’t make you fall behind deadline either. It can still show you where you went wrong, and fixing logical bugs before coding will save you paid hours of expensive professionals later on.
A common pitfall of testing is an emotional reaction to the results. It’s in human nature to get frustrated when something doesn’t go as planned, and denial is a natural response to that frustration. That’s one of the reasons people tend to ignore the findings or even deny the importance of the research itself.
My advice here is to assign designers and managers who work on other projects to the research. Those who do the testing should treat the other party as a client and avoid value judgments when presenting reports. The thing is, complexity grows as our technology evolves. There are no right answers anymore; the only ones left sit somewhere between “adequate” and “having potential.” It’s relatively simple to copycat Airbnb or Uber, but once you aim for something more, the adventures begin. Respect your colleagues and the work they do. Nobody is always right about everything.
It’s never easy to learn you were wrong, but it’s how our expertise and the business grow. The key to the successful implementation of these techniques is mutual respect and support.