You’re more likely to encounter problem participants in a remote unmoderated study, as compared to remote moderated or in-person usability-testing studies — especially if you recruit from panels hosted by dedicated testing services.

It’s important to identify people whose behavior is not representative of your user population and to exclude their data from your analysis. (Testing representative users is one of the core principles of usability testing, and unrepresentative participants invalidate many of the findings from a study.)

In this article, we’ll discuss how to identify three types of problem participants: outliers, cheaters, and professional participants.

Outliers are participants whose behavior or performance is very different from the rest of your user population, either because they are not part of your target user group or because they are exceptional in some other way.

Cheaters are participants interested only in getting paid and moving on to the next study. They may click randomly and not even attempt to perform the tasks.

Professional participants are people who participate in too many studies too frequently. Often, these people are not representative of “regular” users because they have seen too many UX-research studies and are too attuned to researchers’ goals.

Problem participants can be professional participants, cheaters, or outliers.

Note that cheaters are often outliers — in other words, people rushing through the test without really trying usually stand out from other participants in your data in some way. 

However, not all outliers are cheaters. Some users will behave differently from the rest of your participants because they are different, not because they are trying to cheat you out of incentive money. (In past research, we found that 6% of task attempts were uncommonly slow, which we attributed to “bad luck,” since we didn’t have a better explanation for these outliers.)

The best way to identify problem participants depends on whether you’re running a qualitative or quantitative usability test.

Qualitative Studies: Watch the Recordings

Most remote unmoderated testing platforms record videos of the participants’ screens (and sometimes their webcams) while they perform the tasks. If you’re running a qualitative test with around 5–8 participants, you should plan to watch all of the videos anyway as part of your analysis.

While you watch the videos, keep an eye out for signals that you might have a problem participant.

Outlier Signals

Watch for any comments or behaviors that could tell you that the participant has a different experience level, background, or motivation from the rest of your users.

For example, if you recruit industrial engineers, but one participant sounds very confused by the terminology used in the UI, he may not actually have a background in this field. If you didn’t ask the right questions in your screener to assess knowledge, he might be a participant with good intentions who just ended up in the wrong study.

Cheater Signals

Look for participants who don’t try the tasks at all. Sometimes you’ll even see participants who receive the task instructions, don’t read them, and then go off and read Facebook or some other site for a few minutes before clicking to advance to the next task.

However, just because someone is impatient with your design does not make her a cheater. Many users are demanding and expect products to work perfectly and easily on the first try. If your site takes forever to load, they may try to do something else in the meantime. There’s a difference between an impatient, demanding user and one who does not attempt to perform the task at all.

Another signal for a cheater is that your participant ignores the task instructions or parameters. 

For example, if the task is “Visit West Elm and find a dining chair with a metal frame,” but the participant picks the first item he sees (a coffee table) and says he is done, then he may simply be trying to get out of the task as soon as possible.

Professional-Participant Signals

Professional participants are the most difficult to identify. These are people who will (in most cases) try your tasks and give you a lot of feedback. They are often extremely good at thinking out loud. They’ve done many studies, and they’ve learned what researchers want from them.

I tend to identify professional participants more easily by their comments than by their behaviors. Listen for any terminology that betrays too much knowledge they might have picked up from participating in studies too frequently (“SEO,” “kerning,” “mental model,” “menu bar,” “hamburger”).

(Note: Many people these days have learned terms like “user friendly,” “usability,” or even “user experience” from ads and popular culture, so that’s not always a warning sign.)

For example, while searching for information on a nonprofit website, one of my study participants said, “Look, the wizardry of using a search engine escapes a lot of people. Their inability to form queries makes them think that search engines are useless. In my experience, learning how to rephrase your question to get to the answers you’re looking for is critical to their use. So it’s not really easy or difficult, it’s more the user’s experience with the search function.” The amount of correctly used jargon made me suspect that this person might either be a professional participant or might have worked in a UX-related field.

Quantitative Studies: Start with Metrics

If you ran a quantitative study with more than 30 participants, watching all of the videos may not be practical. You can use metrics to help you decide which videos to check. You can also spot-check each video by watching several minutes of each one.

Most quantitative usability studies involve collecting at least two common metrics: time on task and task success. Remote unmoderated testing tools often collect these two metrics automatically as they run the test, so you probably already have access to them. Look at these metrics to identify responses outside of the normal range of your data. Check multiple metrics to help you decide if individual participants look suspicious, and then watch the videos for those participants to confirm that they are indeed nonrepresentative.

Note that metric-based methods are good for identifying outliers and cheaters, but not usually as useful for identifying professional participants. 

Time on Task

Look at frequency distributions of task times for individual tasks, as well as the total session time for all participants, to identify those moving much more quickly or more slowly than the rest of the participants.

The two participants who completed the task in less than 9 seconds were much faster than the rest. These might be cheaters. The four participants who completed the task in more than 179 seconds could be cheaters, unrepresentative participants, or just people who needed more time to complete the task. You’ll have to investigate to find out. (The histogram shows, for each time interval, the number of participants whose time on task was in that interval.)

When participants complete tasks and sessions very quickly, they might be cheaters. When they complete tasks and sessions much more slowly than the other participants, it’s possible that they’re cheaters, outliers, or neither — just people who need a little extra time or encountered an error.
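If your testing tool lets you export raw metrics, you can build these frequency distributions and flag the extremes in a few lines of code instead of eyeballing a spreadsheet. Below is a minimal sketch in Python (pandas and matplotlib); the file name and columns (participant_id, task_id, time_seconds) are hypothetical placeholders for whatever your tool exports, and the 1.5 × IQR fence is just one common statistical heuristic for “far from the rest of the data,” not the only reasonable choice.

```python
# Minimal sketch: plot a task-time distribution and flag extreme times.
# Assumes a hypothetical CSV export with columns:
# participant_id, task_id, time_seconds. Adjust names to your tool's export.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("task_metrics.csv")
task5 = df.loc[df["task_id"] == 5, "time_seconds"]

# Frequency distribution of times for one task
task5.plot(kind="hist", bins=15, title="Task 5: time on task")
plt.xlabel("Seconds")
plt.ylabel("Number of participants")
plt.show()

# Flag times outside the common 1.5 * IQR "fences" -- a heuristic, not a verdict
q1, q3 = task5.quantile([0.25, 0.75])
iqr = q3 - q1
fast = df.loc[(df["task_id"] == 5) & (df["time_seconds"] < q1 - 1.5 * iqr)]
slow = df.loc[(df["task_id"] == 5) & (df["time_seconds"] > q3 + 1.5 * iqr)]
print("Suspiciously fast:", fast["participant_id"].tolist())
print("Unusually slow:", slow["participant_id"].tolist())
```

Treat anything the fences flag as a candidate for video review, not as an automatic exclusion.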

Task Success

Similarly, you can look at task success for each individual task, as well as each participant’s success rate over the whole session. Low success rates combined with very fast task times are usually a strong indicator of a cheater.

The same two participants who completed Task 5 in less than 9 seconds have a very low success rate across all tasks. Flag these participants and follow up by watching their videos. (The histogram shows the number of participants who had success rates in the intervals shown on the x-axis.)
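One way to operationalize this cross-check, continuing the hypothetical export from the sketch above, is to compute each participant’s overall success rate and median task time, then flag anyone who is both unusually fast and unusually unsuccessful. The cutoffs below are illustrative only; set them from your own distributions.

```python
# Minimal sketch: flag the "fast but unsuccessful" cheater signature.
# Assumes the same hypothetical CSV as above, plus a 0/1 "success" column.
import pandas as pd

df = pd.read_csv("task_metrics.csv")

per_participant = df.groupby("participant_id").agg(
    success_rate=("success", "mean"),        # mean of 0/1 success = success rate
    median_time=("time_seconds", "median"),
)

fast_cutoff = per_participant["median_time"].quantile(0.10)  # fastest 10%
suspects = per_participant[
    (per_participant["success_rate"] < 0.5)  # placeholder threshold
    & (per_participant["median_time"] < fast_cutoff)
]
print(suspects)  # watch these participants' videos before excluding anyone
```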

Open-Ended Text Responses

When planning a quantitative study, it’s always a good idea to include a question with an open-ended text field, where participants have to type a response. You can quickly scan the list of responses and identify any “lazy” answers that might signal a cheater.

Let’s look at some real-life responses to the open-ended question “If you could change something about this website, what would it be?”

Bad: “Asiojdfoiwejfoiasjdfiasjdf”

These nonsense responses look like someone just slammed the keyboard to move forward with the study, without bothering to read the question and formulate a response. This is a strong indicator of a cheater.

Questionable: “Its fine”

Very short non-answers with incorrect or no punctuation, like this one, can sometimes signal a cheater, but not always. In this case, the participant may have been fatigued from a long study, or just didn’t have any strong opinions to share.

Good: “On the main page I would put more basic, compelling information about the ocean -- everyone [almost] has some connection to it, whether it be the beach as a kid, trade, kayaking, swimming, cruises, boats, etc. I would just stress if possible the amazing ocean animals/life and how important the ocean is to trade, military, fun etc.”

Detailed, thoughtful responses like this one are a strong signal that the participant is not a cheater.
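If you have dozens of responses to scan, a couple of crude string checks can sort them into a “read first” pile. The sketch below flags very short answers (like “Its fine”) and long answers containing no spaces (like the keyboard mash above). These are toy heuristics of my own, not features of any testing tool, and they only prioritize review; they don’t prove cheating.

```python
# Minimal sketch: triage open-ended answers for likely "lazy" responses.
def looks_lazy(answer: str) -> bool:
    text = answer.strip()
    if len(text) < 10:       # very short non-answers, e.g. "Its fine"
        return True
    if " " not in text:      # long strings with no spaces look like keyboard mash
        return True
    return False

# Hypothetical responses keyed by participant ID
responses = {
    "p01": "Asiojdfoiwejfoiasjdfiasjdf",
    "p02": "Its fine",
    "p03": "On the main page I would put more basic, compelling "
           "information about the ocean...",
}
for pid, answer in responses.items():
    if looks_lazy(answer):
        print(pid, "-> read this response and review the video")
```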

Next Steps

When you identify problem participants, deal with each type in slightly different ways. 

Outlier

Outliers who are very different from your target audience should be removed from your analysis entirely.

Go back to your recruitment and try to determine how this person ended up in your study. What exactly is different about this person that makes her not fit with the rest? Was there a problem with your screener that allowed this person into the study? Learn from this mistake to ensure better recruiting in the future.

However, be sure that the participant truly isn’t representative of your user population. A UX professional once asked me if he could remove one participant’s data from his analysis, because she had given a very negative response while other participants were positive. I asked, “Well, do you have any other reason to believe she’s somehow unrepresentative?” He did not. An unfavorable response to our design is not a good enough reason to remove someone from the data.

Cheater

It’s often the case that a participant might “cheat” on one or two tasks, particularly at the end of a long session, but will genuinely attempt the others. Determine whether that’s the case with your cheater by watching the participant’s full video. If she cheated on only one or two tasks, simply remove her data for those tasks. If she cheated through the whole session, remove her from the analysis entirely.

Most remote testing tools are aware that cheaters are a problem. If you recruited the participants through the tool’s panel, many services will replace a cheater for free if you request it.

If the recruiting service has a way for you to provide feedback about a participant’s performance, please ensure that you do so, as a courtesy to other researchers.

Professional Participant

Professional participants are sometimes trickier to deal with. In most cases, they haven’t done anything wrong — they showed up and participated, so they should be compensated and not reported or negatively reviewed.

The best thing is to avoid letting these people into your study in the first place. I always include a screener question that asks how recently the respondent participated in a study, and I exclude those who participated too recently (within the past 3 or 6 months, depending on the study). However, there’s nothing to stop participants from lying in response to that question. Some testing tools allow you to filter out frequent participants automatically, but these professionals are often performing tests on many different platforms (and there are a lot out there).

If you find a professional participant, look at the video and data to decide whether to throw out the session. Sometimes you’ll find that the participant made professional-sounding comments, but the actual behaviors and data look very similar to the rest of your participants. In those cases, you can keep the data. Just make sure you flag that participant in your qualitative analysis and weigh that fact as you draw conclusions from that participant’s comments and feedback.

Conclusion

Keep an eye out for outliers, cheaters, and professional participants in your studies. Investigate multiple sources of information to help you sleuth out the situation and decide what to do about it. If you frequently find problem participants in your studies, reevaluate how and where you recruit.

For more help planning, conducting, and analyzing remote studies, check out our full-day seminar, Remote Usability Testing.