The field of user experience has a wide range of research methods available, from tried-and-true methods such as lab-based usability studies to those developed more recently, such as unmoderated online UX assessments.

While it's not realistic to use the full set of methods on a given project, nearly all projects would benefit from multiple research methods and from combining insights. Unfortunately, many design teams use only the one or two methods they are familiar with. The key question is what to do when. To better understand when to use which method, it is helpful to view them along a 3-dimensional framework with the following axes:

  • Attitudinal vs. Behavioral
  • Qualitative vs. Quantitative
  • Context of Use

The following chart illustrates where 20 popular methods appear along these dimensions:

[Chart: 20 user research methods, classified along 3 dimensions]

Each dimension provides a way to distinguish among studies in terms of the questions they answer and the purposes they are most suited for.

The Attitudinal vs. Behavioral Dimension

This distinction can be summed up by contrasting "what people say" versus "what people do" (very often the two are quite different). The purpose of attitudinal research is usually to understand or measure people's stated beliefs, which is why attitudinal research is used heavily in marketing departments.

While most usability studies should rely more on behavior, methods that use self-reported information can still be quite useful to designers. For example, card sorting provides insights about users' mental model of an information space, and can help determine the best information architecture for your product, application, or website. Surveys measure and categorize attitudes or collect self-reported data that can help track or discover important issues to address. Focus groups tend to be less useful for usability purposes, for a variety of reasons, but provide a top-of-mind view of what people think about a brand or product concept in a group setting.

On the other end of this dimension, methods that focus mostly on behavior seek to understand "what people do" with the product or service in question. For example, A/B testing presents changes to a site's design to random samples of site visitors, while attempting to hold all else constant, in order to see the effect of different site-design choices on behavior. Eyetracking, by contrast, seeks to understand how users visually interact with interface designs.
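
To make the mechanics of A/B testing concrete, here is a minimal sketch in Python of the assignment step. The experiment name, variant labels, and visitor ID are hypothetical; the point is that each visitor is deterministically bucketed into one design variant, so the same person always sees the same design while the population splits roughly evenly.

```python
import hashlib

VARIANTS = ["control", "new_design"]  # hypothetical design variants

def assign_variant(visitor_id: str, experiment: str) -> str:
    """Deterministically bucket a visitor into one design variant.

    Hashing a stable visitor ID together with the experiment name gives
    every visitor a consistent assignment across visits, while the
    population as a whole splits roughly evenly between variants.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Log the assignment alongside the behavior you want to measure,
# e.g., whether the visitor completed a purchase.
print(assign_variant("visitor-12345", "homepage-redesign"))
```

Bucketing on a stable ID, rather than re-randomizing on every request, is part of what "holds all else constant": any behavioral difference between the groups can then be attributed to the design change rather than to inconsistent experiences.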

Between these two extremes lie the two most popular methods we use: usability studies and field studies. They utilize a mixture of self-reported and behavioral data, and can move toward either end of this dimension, though leaning toward the behavioral side is generally recommended.

The Qualitative vs. Quantitative Dimension

The distinction here is an important one, and goes well beyond the narrow view of qualitative as “open ended,” as in an open-ended survey question. Rather, studies that are qualitative in nature generate data about behaviors or attitudes based on observing them directly, whereas in quantitative studies, the data about the behaviors or attitudes in question are gathered indirectly, through a measurement or an instrument such as a survey or an analytics tool. In field studies and usability studies, for example, the researcher directly observes how people use technology (or not) to meet their needs. This gives researchers the ability to ask questions, probe behavior, or even adjust the study protocol to better meet its objectives. Analysis of the data is usually not mathematical.

By contrast, insights in quantitative methods are typically derived from mathematical analysis, since the instrument of data collection (e.g., a survey tool or web-server log) captures large amounts of data that are easily coded numerically.
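
As an illustration of what such mathematical analysis can look like, here is a small Python sketch with made-up numbers: it turns logged task outcomes into the kind of metric a quantitative study reports, a task-success rate with a 95% confidence interval.

```python
import math

# Hypothetical task outcomes from a quantitative study: 1 = success, 0 = failure.
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1]

n = len(outcomes)
rate = sum(outcomes) / n

# Normal-approximation 95% confidence interval for the success rate.
margin = 1.96 * math.sqrt(rate * (1 - rate) / n)
print(f"Task success: {rate:.0%} ± {margin:.0%} (n={n})")
```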

Because of these differences, qualitative methods are much better suited for answering questions about why or how to fix a problem, whereas quantitative methods do a much better job of answering how many and how much types of questions. Having such numbers helps prioritize resources, for example, to focus on issues with the biggest impact. The following chart illustrates how the first two dimensions affect the types of questions that can be asked:

[Chart: the two dimensions of questions that can be answered by user research]

The Context of Product Use

The third distinction has to do with how and whether participants in the study are using the product or service in question. This can be described as:

  • Natural or near-natural use of the product
  • Scripted use of the product
  • Not using the product during the study
  • Hybrid of the above

When studying natural use of the product, the goal is to minimize interference from the study in order to understand behavior or attitudes as close to reality as possible. This provides greater validity but less control over what topics you learn about.  Many ethnographic field studies attempt to do this, though there are always some observation biases. Intercept surveys and data mining or other analytic techniques are quantitative examples of this.

A scripted study of product usage is done in order to focus the insights on specific usage aspects, such as a newly redesigned flow. The degree of scripting can vary quite a bit, depending on the study goals. For example, a benchmarking study is usually very tightly scripted and more quantitative in nature, so that it can produce reliable usability metrics.

Studies where the product is not used are conducted to examine issues that are broader than usage and usability, such as a study of the brand or larger cultural behaviors.

Hybrid methods use a creative form of product usage to meet their goals. For example, participatory-design methods allow users to interact with and rearrange design elements that could be part of a product experience, in order to discuss how their proposed solutions would better meet their needs and why they made certain choices. Concept-testing methods employ a rough approximation of a product or service that gets at the heart of what it would provide (and not at the details of the experience) in order to understand whether users would want or need such a product or service.

Most of the methods in the chart can move along one or more dimensions, and some do so even in the same study, usually to satisfy multiple goals. For example, field studies can focus on what people say (ethnographic interviews) or what they do (extended observations); desirability studies and card sorting have both qualitative and quantitative versions; and eyetracking can be scripted or unscripted.

Phases of Product Development (the Time Dimension)

Another important distinction to consider when making a choice among research methodologies is the phase of product development and its associated objectives.

  1. STRATEGIZE: In the early phase of product development, you typically consider new ideas and opportunities for the future. Research methods in this phase can vary greatly.
  2. EXECUTE: Eventually, you will reach a "go/no-go" decision point, when you transition into a period when you are continually improving the design direction that you have chosen. Research in this phase is mainly formative and helps you reduce the risk of execution.
  3. ASSESS: At some point, the product or service will be available for use by enough users so that you can begin measuring how well you are doing.  This is typically summative in nature, and might be done against the product’s own historical data or against its competitors.

The summary below lists the goal of each product-development phase, along with typical research approaches and methods:

  Strategize
    Goal: Inspire, explore, and choose new directions and opportunities
    Approach: Qualitative and quantitative
    Typical methods: Field studies, diary studies, surveys, data mining, or analytics

  Execute
    Goal: Inform and optimize designs in order to reduce risk and improve usability
    Approach: Mainly qualitative (formative)
    Typical methods: Card sorting, field studies, participatory design, paper-prototype and usability studies, desirability studies, customer emails

  Assess
    Goal: Measure product performance against itself or its competition
    Approach: Mainly quantitative (summative)
    Typical methods: Usability benchmarking, online assessments, surveys, A/B testing

Art or Science?

While many user-experience research methods have their roots in scientific practice, their aims are not purely scientific and still need to be adjusted to meet stakeholder needs. This is why the characterizations of the methods here are meant as general guidelines, rather than rigid classifications.

In the end, the success of your work will be determined by how much of an impact it has on improving the user experience of the website or product in question. These classifications are meant to help you make the best choice at the right time.

20 UX Methods in Brief

Here’s a short description of the user research methods shown in the above chart:

Usability-Lab Studies: participants are brought into a lab, one-on-one with a researcher, and given a set of scenarios that lead to tasks and usage of specific interest within a product or service.

Ethnographic Field Studies: researchers meet with and study participants in their natural environment, where they would most likely encounter the product or service in question.

Participatory Design: participants are given design elements or creative materials in order to construct their ideal experience in a concrete way that expresses what matters to them most and why.

Focus Groups: groups of 3–12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises.

Interviews: a researcher meets with participants one-on-one to discuss in depth what the participant thinks about the topic in question.

Eyetracking: an eyetracking device is configured to precisely measure where participants look as they perform tasks or interact naturally with websites, applications, physical products, or environments.

Usability Benchmarking: tightly scripted usability studies are performed with several participants, using precise and predetermined measures of performance.

Moderated Remote Usability Studies: usability studies conducted remotely with the use of tools such as screen-sharing software and remote control capabilities.

Unmoderated Remote Panel Studies: a panel of trained participants who have video recording and data collection software installed on their own personal devices uses a website or product while thinking aloud, having their experience recorded for immediate playback and analysis by the researcher or company.

Concept Testing: a researcher shares an approximation of a product or service that captures the key essence (the value proposition) of a new concept or product in order to determine if it meets the needs of the target audience; it can be done one-on-one or with larger numbers of participants, and either in person or online.

Diary/Camera Studies: participants are given a mechanism (diary or camera) to record and describe aspects of their lives that are relevant to a product or service, or simply core to the target audience; diary studies are typically longitudinal and can only be done for data that is easily recorded by participants.

Customer Feedback: open-ended and/or close-ended information provided by a self-selected sample of users, often through a feedback link, button, form, or email.

Desirability Studies: participants are offered different visual-design alternatives and are expected to associate each alternative with a set of attributes selected from a closed list; these studies can be both qualitative and quantitative.

Card Sorting: a quantitative or qualitative method that asks users to organize items into groups and assign categories to each group. This method helps create or refine the information architecture of a site by exposing users’ mental models.
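
For the quantitative flavor of card sorting, a common first analysis is a co-occurrence count: how often each pair of items lands in the same group across participants. Here is a minimal Python sketch; the sort results and item names are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical results from three participants: each sorted the same
# items into whatever groups made sense to them.
sorts = [
    [{"pricing", "plans"}, {"docs", "tutorials"}],
    [{"pricing", "plans", "docs"}, {"tutorials"}],
    [{"pricing", "plans"}, {"docs", "tutorials"}],
]

# Count how often each pair of items was placed in the same group.
co_occurrence = Counter()
for groups in sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

# Pairs grouped together by most participants hint at categories that
# match users' mental models.
for pair, count in co_occurrence.most_common():
    print(f"{pair}: grouped together by {count} of {len(sorts)} participants")
```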

Clickstream Analysis: analyzing the record of screens or pages that users click on and see as they use a site or software product; it requires the site to be instrumented properly or the application to have telemetry data collection enabled.
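
To show the flavor of this kind of analysis, here is a minimal Python sketch (the sessions and page names are invented) that counts page-to-page transitions, one of the simplest clickstream summaries:

```python
from collections import Counter

# Hypothetical clickstream log: one ordered list of pages per session.
sessions = [
    ["home", "pricing", "signup"],
    ["home", "docs", "pricing", "signup"],
    ["home", "pricing", "home"],
]

# Count page-to-page transitions to see which paths users actually take.
transitions = Counter()
for pages in sessions:
    for src, dst in zip(pages, pages[1:]):
        transitions[(src, dst)] += 1

for (src, dst), count in transitions.most_common():
    print(f"{src} -> {dst}: {count} sessions")
```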

A/B Testing (related to “multivariate testing,” “live testing,” or “bucket testing”): a method of scientifically testing different designs on a site by randomly assigning groups of users to interact with each of the different designs and measuring the effect of these assignments on user behavior.

Unmoderated UX Studies: a quantitative or qualitative, automated method that uses a specialized research tool to capture participant behaviors (through software installed on participant computers/browsers) and attitudes (through embedded survey questions), usually by giving participants goals or scenarios to accomplish with a site or prototype.

True-Intent Studies: a method that asks random site visitors what their goal or intention is upon entering the site, measures their subsequent behavior, and asks whether they were successful in achieving their goal upon exiting the site.

Intercept Surveys: a survey that is triggered during the use of a site or application.

Email Surveys: a survey in which participants are recruited from an email message.

In-Depth Course

More details about the methods and the dimensions of use are covered in the full-day training course User Research Methods: From Strategy to Requirements to Design.