Every usability study is different, depending on your specific goals and constraints. But one thing is common for all user research: even though the basic methods are easy enough, if you jump into a study without proper preparation, you won’t get nearly as high a return on your research investment. Below is a checklist of activities to consider when planning a usability study.

1.    Define Goals for the Study

Meet with the stakeholders to determine what they want to learn. Identify the questions, concerns, areas of interest, and purpose for the research. The goals will determine which UX research methodology to choose.

Usability studies are well suited for gathering qualitative or quantitative behavioral data and for answering design-related questions (e.g., is the content presented in a way that is easy to find and understand? Can people complete a task successfully?). If your goal is to collect attitudinal feedback, then consider alternative research methods better suited for those purposes.

Don’t commit to too many goals: for every additional question you want answered, the quality of your insights on the other research goals will drop. You only have so much time with users: make the most of it by focusing on the research goals that’ll truly move the needle on your product ROI.

2.    Determine the Format and Setting of the Study

Below are some considerations for determining which research approach is appropriate for your situation:

In lab or in field: Should you conduct the study at your facility or go to the participant’s location? For convenience, most face-to-face usability studies are conducted in-house, in a lab setting. However, if the users’ actual environment is critical or if it’s difficult to represent the users’ setup, then the travel time might be worth it.

Moderated or unmoderated: Moderated studies tend to provide you with richer design insights and opportunities to probe and ask for clarification. They also are a better source of open-ended comments from the participants. On the other hand, unmoderated studies can be cheaper, quicker, and may provide better access to hard-to-recruit participants.

In-person or remote: In general, we recommend in-person studies whenever possible. When you are in the same room as the participant, the interaction feels more personable and you are able to detect subtle cues, such as body language, much easier. However, sometimes in-person testing may not be feasible — for instance, because you have no travel budget, because you work within an Agile environment, or because users can’t come to you.

3.    Determine the Number of Users

For traditional qualitative studies, we generally recommend 5 participants for the best return on investment. If your research involves more than one target user group, then you may need to adjust the number of participants to 2–5 per group, depending on the level of experience and attitudinal overlap between the groups.

Quantitative studies and eyetracking require a larger sample size to obtain meaningful conclusions. Expect to increase the number of participants by at least 4 times; you may need at least 20–30 participants in each target user group.

4.    Recruit the Right Participants

A foundational rule in conducting user testing is to get representative participants. The greatest insights are derived from gathering feedback from real users. Identify people who match your demographics (or even better: match your personas) and then screen for behavioral traits, attitudes, and goals that match those of your users.

Asking proxy users to pretend or imagine a scenario might lead to invalid results. You may have some leeway when testing general sites, but for specialized websites, you must find people who fit your exact circumstances, especially when testing content-rich sites and B2B websites.

(For more information on participant recruitment, see our free report).

5.    Write Tasks that Match the Goals of the Study

In usability testing, researchers ask users to complete activities while using the interface. The activities (or tasks) are usually written in the form of scenarios and should match the goals of the study. The scenarios can range from general to specific, and usually come in the form of two main types:

Exploratory tasks: These open-ended tasks address broad, research-oriented goals and may or may not have a correct answer. They are meant to reveal how people discover or explore information, and they are not appropriate for quantitative testing.
Example: You are interested in booking a vacation for your family. See if the site offers anything that might suit your needs.

Specific tasks: These tasks are much more focused and usually have a correct answer or end point. They are used for both qualitative and quantitative testing.

Example: Find the Saturday opening hours for the Sunnyvale public library.

Writing solid tasks is critical in conducting a valid study. Strong tasks are concrete and free from clues that might prime people’s behavior. Vague instructions might cause users to evaluate areas that are not particularly important for this study. If clues are present, such as when the task contains the same word as on the screen, the activity is no longer a usability study, but a word-matching game.

6.    Conduct a Pilot Study

After you have written your tasks, make sure to run a pilot study to help you fine-tune the task wording, anticipate the number of tasks you can give per session, and determine the order in which to present them. Pilot studies can also help you refine your recruiting criteria, to ensure you are testing with the right participants. Better to catch problems early than during a session when all eyes are on you.

Note: Pilot studies are especially important when conducting online unmoderated studies because you are not present to give clarification or make corrections if study participants misinterpret the instructions or task descriptions.

7.    Decide on Collecting Metrics

The main purpose of qualitative usability studies is to gain design insights, and, with few users, metrics are unlikely to be representative of your whole user population. Therefore, measuring usability is usually not a high priority. In a quantitative study, or in a study with well-defined tasks and a fairly large number of users, however, this step is important. Common usability metrics are: time on task, satisfaction ratings, success rate, and error rate.

If you decide to collect subjective measurements (e.g., ease of use or satisfaction questions about the task, satisfaction about the system), decide on when you will give the questionnaires: after each task, at the end of the session, or both.

8.    Write a Test Plan

Once you’ve figured out how you’re going to conduct the research, document your approach in a test plan and share it. This document serves as a communication tool among team members and a record for future studies. The document doesn’t need to be lengthy, but should contain key information such as:

  • Name of the product or site being tested
  • Study goals
  • Logistics: time, dates, location, and format of study
  • Participant profiles
  • Tasks
  • Metrics, questionnaires
  • Description of the system (e.g., mobile, desktop, computer settings)

9.    Motivate Team Members to Observe Sessions

A great benefit of usability studies is fostering collaboration and buy-in. Nothing is more convincing than witnessing how users respond to the interface design. Having stakeholders observe moderated-testing sessions establishes common ground and reduces the amount of time required to communicate and document the findings. Teams spend less time guessing and debating, and more time designing.

Invite stakeholders and team members, and give them plenty of reasons to participate. Food is always a good motivator!

Conducting the study in a traditional usability lab or a simplified usability lab is ideal, but if your team members work in different locations, then offering remote viewing options keeps the activity inclusive.

Check out our course on Usability Testing for hands-on training on how to plan and facilitate user tests.