User interviews are an important generative method for UX. Generative methods (like interviews and focus groups) produce knowledge. In contrast, evaluative methods (like a usability test of a draft design) test hypotheses. When interviews are done well, we learn about our users’:

  • thoughts
  • beliefs
  • mental models
  • experiences

This knowledge helps us to build good products and services and address real user needs rather than imaginary ones. Taking the time to speak to people who might use your product or service (before you start designing) can teach you things you might otherwise never uncover, even if you run regular usability tests.

However, when done poorly, interviews lead to bad design decisions that can ultimately hurt the organization and its customers. As we have said many times, a great user interface built with the wrong features will fail. This article outlines four common issues with user interviews, along with strategies for avoiding them.

Issue #1: Wrong Purpose

Some UX teams run user interviews to glean information that cannot, in fact, be learned from interviews. Examples of research questions that cannot be answered satisfactorily by interviewing people include:

  • Which color most improves the user’s impression of the brand or product?
  • Will people buy or use our product in the future?
  • Which features do people need in the product?

While all of the above are reasonable research questions, they will not be answered reliably by interviewing people, because they are about people's behaviors. Interviews do not produce reliable data about user behavior; observing people is a far better way to learn how they behave.

Let’s illustrate this with an example. Imagine a designer wants to learn what the best page-background color would be for a given design. She conducts interviews in which she shows people mockups and asks them which they prefer. This study rests on several flawed assumptions:

  1. The participant prefers a color. Asking participants which color they prefer suggests that they should think about and have an opinion about color. They may not, but simply being asked may make them form an opinion. (This is known as the query effect.)
  2. A user's response in the interview is an accurate representation of their real-life feelings about page color. True, some participants might have a preference in the moment for a blue background rather than a purple one, but that doesn’t mean that they would have the same preference when using the product in a real-life setting outside of the usability lab.
  3. The color preference affects the user's overall perception of the product. Even if users actually do like, dislike, or feel some emotions related to the color, it doesn’t mean it will affect their overall perception of the design or organization to any meaningful degree. A background color is just one of many factors (such as usability, utility, informational value, and so on) that contribute to the overall user experience.

To meet the research goal in this example, the designer should instead run a desirability study; the 5-second test is a common example of this.

Solution: Choose the Research Method Based on Your Research Question

Your research question should always dictate which research method you use. For example, if you want to know whether users would be able to use a design, then you should watch them interacting with it instead of interviewing them. On the other hand, if you want to understand people’s thoughts, impressions, or perceptions of an experience, then there’s no better method than performing a well-run interview.

Few UX teams take the time to spell out their research questions. Doing so can focus the research and help teams understand which methods are the most appropriate for answering those questions.

Issue #2: No Stakeholder Buy-in for Interviews and Their Findings

Often our stakeholders don’t fully understand the differences among various research methods. To them, it’s all research. This attitude can result in two issues:

  1. Teams are not given the time to do user interviews at the outset of a project because stakeholders think they already know enough from analytics or anecdotes from the sales department.
  2. Following an extensive discovery phase, interview-based insights are often received with skepticism and elicit dreaded responses such as “We already knew this” or “That can’t be right. My sister-in-law had a different experience.”

Solution: Educate and Involve Stakeholders in Understanding and Prioritizing Research Questions

Research is successful when stakeholders (i) understand the research goals, (ii) understand why using one research method as opposed to another is important, and (iii) are bought in from the start.

If you’ve identified stakeholders who could be opponents of your work, then a great way to convert them into supporters is to invite them to shape the research.

  • Invite your stakeholders to a workshop where you explain the research method and the importance of research questions. Then ask your stakeholders for their input in prioritizing your research questions. Stakeholders can also help by offering their own research questions.
  • Use dot-voting or a prioritization matrix to work with your team and stakeholders to rank the research questions in order of importance.

Including stakeholders in this way is beneficial for four reasons:

  1. It’s more likely that the findings will be accepted, because stakeholders have had a hand in shaping the research and therefore feel some ownership.
  2. It builds a strong relationship between the business and UX, because including stakeholders shows that you value their expertise on the business perspective.
  3. Your stakeholders can offer ideas about other research questions you hadn’t thought about.
  4. It’s another way to educate people outside your team about the value that UX brings to the organization. A prioritization workshop can be an educational experience. By describing how useful different methods are — and when it’s okay to use them — your stakeholders will begin to understand that there’s more to UX than they initially thought.

Issue #3: Poor Planning

Too many people launch straight into interviews, hoping they’ll gain clarity as they go. That’s an expensive way to waste participants’ (and your own) time and to collect unhelpful user data.

Without planning, interviews can often:

  • meander and produce findings that are superficial at best, or
  • rely on poor, leading, or closed questions that never let participants share their honest thoughts and opinions about a topic, or tell their story.

As a result, the data produced by those interviews is misleading or uninformative for decision making.

Solution: Plan and Pilot the Interview Guide

Make sure you understand what you want to find out. After you spell out your research questions, put together an interview guide that will steer the conversation during the interview. The guide should have a few well-constructed questions. Aim for a few broad and open-ended questions to facilitate exploration, as opposed to a long list of closed questions.

Common questions used in UX interviews ask users to talk about problem spaces and experiences we want to learn more about. Such questions include:

  • Describe a typical day where...
  • Tell me about a time where…
  • If there was one thing you could change, what would it be?

Prepare nonleading probing questions. Examples of good probing questions include:

  • Tell me more about that.
  • Can you give me an example of that?
  • Why do you think that?
  • Why is that important to you?

Pilot your guide. First, pilot the interview guide on yourself: even asking yourself your own questions can reveal that some of them don’t work. Second, before you run interviews with many users, recruit one user for a pilot interview. Don’t tell that person the session is a pilot; conduct it as though it weren’t. Just as with a usability test, you’ll quickly see how to improve your interview guide before the next session.

Plan a realistic schedule. Too many people schedule all their interview sessions back-to-back, with no break between them. Interviewing is tiring (much more so than moderating usability tests), and if you have no breaks in a day, it’s likely that your later interviews will be of poor quality. If you have the time, space your interviews out over a week (say 2–3 per day) instead of doing them all in one day.

Issue #4: Poor Analysis

Few teams take the time to analyze interview results properly, especially if they work in Lean or Agile environments. Unfortunately, when many interviews are conducted but not properly analyzed, the following issues are common:

  • Only the memorable insights are reported; some important insights or nuances are lost.
  • The reported findings are colored by personal biases, as it’s easy to favor or recall insights that support pre-existing beliefs about your users.

Solution: Systematically Analyze the Transcribed Results

Record sessions.
Few teams record their interviews, and even fewer transcribe the sessions. If you interview many people, recording is worth the effort: even a good notetaker will miss things, because it’s virtually impossible to capture everything said verbatim. Most participants have no problem with audio recording, but always get consent before recording audio or video.

Transcribe your interviews.
Analyzing transcripts is much better than relying on patchy notes or memory. You’ll stay closer to the data for longer, and you’ll be less likely to leap to conclusions that the data doesn’t support. There are plenty of inexpensive AI-based transcription tools that will transcribe an audio recording in a matter of minutes, provided you’re willing to do some manual work to correct transcription errors. Alternatively, you can pay a professional transcriber for more accurate transcriptions.

Include the team.
Invite your team to help you analyze the transcripts. This approach is a great way for your team members to build empathy for the user while cementing what they learned from carrying out interviews.

  1. Block a half-day workshop where you give your team the transcripts to read and highlight.
  2. Remind your team of your overarching research questions.
  3. Rotate the transcripts among your team members, ensuring everyone reads each transcript and actively engages with the text.
  4. Following this activity, work with your team (or alone) to go back through the highlighted text and cluster quotes into meaningful groups, which will eventually become your themes.

Improve Your Interview Skills

Interviewing is a skill that requires practice and reflection to refine. When you find a spare hour, review the interviews you’ve conducted: which of your questions were ineffective? Think about how you could improve the questions you asked and your overall interview style.

Invite a colleague you trust (or your mentor) to review your recordings or transcripts with you and to give you constructive feedback. This process takes time but is worth it because of the higher value you will gain from future studies.

Those who invest the time to rewatch or relisten to recordings and analyze transcripts of interviews they’ve performed will become better practitioners.

Learn more: User Interviews: Advanced Techniques to Uncover Values, Motivations, and Desires, a full-day course at the UX Conference.