Finding the right participants for a user-research study can be challenging, even if you have clearly defined your target audience. You can learn something from almost any user, but you’ll get more meaningful insights if your study participants have the same behaviors, interests, and knowledge as your actual users.

Using Screening Questions to Select Participants

To recruit study participants, you should ask screening questions that assess candidates’ background and characteristics. For example, if you’re designing a website about online games, you might need people who are interested in gaming. To assess this interest, you could simply ask them, “Do you play online games?”

But screening questions that make the purpose of the study obvious can backfire. If people can easily guess what the study is about, some will be tempted to exaggerate their responses just so they can participate (and receive the incentive, if the study offers one). Asked as part of the screening process for a research study, the question “Do you play online games?” clearly signals that the study is about gaming. Respondents will easily guess that, if they admit they don’t play online games, they’re unlikely to be invited to participate.

Therefore, screening questions have two conflicting goals:

  1. They must elicit specific information about users.
  2. They must avoid revealing the purpose of the study.

Achieving both goals is tricky, but it can be done by carefully preparing screening questions using two main techniques: open-ended questions and distractors.

Open-Ended Screening Questions

An open-ended question asks people to answer in their own words instead of choosing from a list of predefined answers. Since there are no answer choices provided, it’s difficult for people to guess which answer is ‘right.’

Open-ended questions can be used to elicit details about past experiences. Rather than asking whether someone plays online games, you can ask “What activities do you do online?” and select participants who mention games.

Open-ended questions are also good for assessing:

  • Disqualifying occupations: If there are specific categories of people you need to exclude from the study, an open-ended question will work better than a multiple-choice one whose answers list the excluded categories. For example, unless your design is for expert users, it’s best to exclude people whose job relates to the design you are evaluating, because their professional experience makes them too different from typical users. If you’re evaluating a website about travel planning, people who work in the tourism industry would likely have a completely different perspective from the average traveler. The open-ended question “What is your occupation?” is more likely to get accurate responses than a question that lists all the excluded occupations and relies on people to self-identify whether any of them describes their job.
  • Level of experience or interest: If you need to recruit people who are highly familiar with a certain topic, open-ended questions can elicit authentic details that tell you whether respondents have the relevant experience. If you ask people how often they play online games, it’s easy for someone who plays a few times a year to exaggerate and claim that she plays several times a week. But if you ask her to describe some of her favorite games and why she likes them, you will quickly be able to distinguish the hard-core gamer who can immediately list the names and details of many games from the person who can barely remember any. (If you are writing screener questions that someone else will be asking in an interview, request that respondents’ answers to these open-ended questions be written down word for word.)

However, open-ended questions alone are not enough to ensure effective participant screening, because they have some critical limitations. If you are recruiting for a very specific niche behavior, an open-ended question may fail to elicit relevant information, because some people who do engage in the behavior may not think to mention it in response to a general question. In purely practical terms, responses to open-ended questions also take longer to produce, collect, and analyze. Because the answers are free text, you need to read each respondent’s statement and evaluate its meaning. This extra step may not be possible in some contexts (such as unmoderated usability studies, which are often designed to let people proceed to the study immediately after answering the screening questions, with no time for a researcher to review the screener responses).

DON'T rely on 'yes or no' questions:

"Would you consider using a short-term scooter rental service again in the future?"

DO ask open-ended questions to elicit authentic experiences:

"Please describe the last time you rented a scooter."

Distractor Answer Choices

Multiple-choice questions can be evaluated instantly, making them suitable for unmoderated recruiting; they also let you assess specific behaviors that people might not think to mention in an open-ended response. But it’s still important to avoid revealing the purpose of the study. To do so, borrow a technique used by teachers on multiple-choice tests: include distractors among the answer choices. Distractors are plausible but incorrect answer choices that camouflage the right answer by surrounding it with equally believable alternatives. Good distractors look like they could be correct responses; they help distinguish people who truly know the correct answer from those who are just guessing and are likely to pick one of the appealing distractor answers.

You can incorporate distractors into multiple-choice screening questions by providing answer choices that include both your target response and several equally plausible alternatives. The distractor answers should be realistic both as activities that people would do and as research topics for an organization. For example, if you’re recruiting people who are interested in using a scooter-rental app, you might ask about scooters and include walking, ride sharing, and taxis as distractors. But hang gliding would not be a good distractor, because it’s obviously not a reasonable transit option in an urban area, nor a behavior common enough that companies would conduct research about it.

For some questions, it will be appropriate to allow people to select more than one answer. Make sure to exclude respondents who select every answer to a question, especially if they do so repeatedly across several questions. Selecting all choices is a warning sign that the person may be too eager to participate, and it’s safer to choose someone who selected fewer options.
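If your recruiting tool exports screener responses, this rule is easy to apply automatically. Below is a minimal sketch in Python; the question IDs, answer options, and exported data shape are assumptions for illustration, not any particular platform’s format.

```python
# A minimal sketch, assuming screener responses are available as a mapping
# from question IDs to the set of answer choices each respondent selected.
# The question IDs, options, and threshold below are hypothetical.

QUESTION_OPTIONS = {
    "transport": {"Walk", "Rent a bike", "Rent a scooter", "Take an Uber"},
    "weekend_activities": {"Watch sports", "Play online games", "Cook", "Hike"},
}

def count_select_all(responses: dict[str, set[str]]) -> int:
    """Count how many questions this respondent answered by picking every option."""
    return sum(
        1
        for question, options in QUESTION_OPTIONS.items()
        if responses.get(question, set()) == options
    )

def looks_overeager(responses: dict[str, set[str]], allowed: int = 0) -> bool:
    """Flag respondents who selected all choices on more questions than allowed.

    A threshold of 0 (flag anyone who selects everything even once) is an
    assumption; relax it if some of your questions make selecting every
    option genuinely plausible.
    """
    return count_select_all(responses) > allowed
```

Flagging these respondents rather than silently deleting them also lets a researcher glance at their other answers before making the final call.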

If you’re using a testing platform that allows only one or two screening questions, even a single, well-written screening question with good distractors can effectively identify which potential participants match your target audience.

DON'T ask easy-to-guess multiple-choice questions:

If you needed to get to a meeting on the other side of downtown, about 2 miles away, which of the following would you consider doing to get to your meeting?
A. Walk
B. Hang glide
C. Rent a scooter

DO use plausible distractor answers to conceal the subject of the study:

If you needed to get to a meeting on the other side of downtown, about 2 miles away, which of the following would you consider doing to get to your meeting?
A. Walk
B. Rent a bike
C. Rent a scooter
D. Take an Uber
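On unmoderated platforms that score screeners automatically, a question like the one above can be represented as data plus a simple qualification rule. The sketch below is a hypothetical illustration in Python: the option labels come from the DO example, but the ScreenerQuestion structure and its method names are assumptions, not any platform’s actual API.

```python
from dataclasses import dataclass

@dataclass
class ScreenerQuestion:
    """A multiple-choice screening question whose target answer is hidden among distractors."""
    prompt: str
    target: str                 # the answer that indicates a fit with the study's audience
    distractors: list[str]      # plausible alternatives that conceal the target
    allow_multiple: bool = True

    @property
    def options(self) -> list[str]:
        # In a live screener, present these in randomized order so the
        # target answer's position gives nothing away.
        return [self.target, *self.distractors]

    def qualifies(self, selected: set[str]) -> bool:
        """Qualify respondents who picked the target answer without selecting everything."""
        picked_target = self.target in selected
        picked_everything = set(self.options) <= selected
        return picked_target and not picked_everything

# The DO example above, encoded with its three distractors.
question = ScreenerQuestion(
    prompt=(
        "If you needed to get to a meeting on the other side of downtown, "
        "about 2 miles away, which of the following would you consider doing "
        "to get to your meeting?"
    ),
    target="Rent a scooter",
    distractors=["Walk", "Rent a bike", "Take an Uber"],
)

print(question.qualifies({"Rent a scooter", "Walk"}))   # True: target selected
print(question.qualifies({"Walk", "Take an Uber"}))     # False: no target answer
print(question.qualifies(set(question.options)))        # False: selected every option
```

In practice, a rule like this would be paired with the select-all check described earlier, applied across every question in the screener.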

Conclusion

Finding the right test participants is important for any user-research project, but it becomes essential when you need to identify a specific type of user from within a large pool of general consumers. Screening is especially important if you’re looking for:

  1. Potential future users, who aren’t yet customers, but could realistically become customers in the future
  2. Highly motivated users, whose interest in a particular topic or activity is so strong that their knowledge and behavior significantly differ from those of the ‘average’ person

These types of users are often desirable target audiences, but the attitudes and experiences that make these audiences valuable are difficult to assess accurately with simple, obvious questions. Carefully planned screening questions discourage exaggeration and guessing, and identify those users who truly fit your target audience.

Learn more about writing screening questions and other tips for recruiting user-research participants in our free report How to Recruit Participants for Usability Studies. This is also one of the topics covered in the full-day course on Usability Testing.