Most design teams spend appallingly little time observing their users' actual behavior. Of course, most companies never perform user testing at all, but even those that do log very few hours of behavioral observation.
The ideal is to designate one day per week (say, Wednesdays) as "user day" and bring in four users. New users, of course: using fresh users for each test is one of the standard guidelines for recruiting test participants (actually seven guidelines, because the general rule has a few exceptions where it's OK to reuse participants).
With a steady stream of new users coming through, you can observe people using all of your design features, have them test out wild ideas in paper prototype form, and see how they use your competitors' designs. With weekly testing and careful observation, you'll make very few design decisions that aren't grounded in users' real needs and behavior.
Although weekly testing is best, it's almost never done. I know of only a handful of projects that have kept up this pace, and some of those are my own :-(
In most companies, it's a rare and wonderful experience to have actual customers show up to use the design. You should obviously make the most of this opportunity, but companies often waste too much of this precious time with users.
Top Time Wasters
The typical user test is 60–90 minutes. After that, users get tired, and it's difficult to run usability sessions that last more than two hours. (It's possible to run multi-day studies, but doing so requires a different technique and happens rarely in my experience.)
So, what should you do with the 60–90 minutes of user time? Focus on having the users behave naturally while interacting with your interface. That's the unique contribution of user testing: we get to see what people actually do, as opposed to what they say.
Sadly, many companies waste minute after minute trying to get test users to voice opinions rather than perform tasks. There are many great methods besides user testing for collecting commentary and opinions — focus groups and surveys are two of the more common ones. Most likely, your company already collects lots of opinion data from many more customers than you'll ever have in one-on-one user testing situations. Use each method for what it's good at.
Common time wasters include:
- Starting the session with an extensive demographic survey. It's better to collect this data through an online survey. For test participants, make do with the data collected during up-front screening and refrain from asking additional questions during the test.
- Asking users for subjective satisfaction ratings after each task. Subjective ratings are weak data in the first place. Research projects aside, overly fine-grained ratings are rarely worth the time required to collect them.
- Using a satisfaction questionnaire with dozens of questions instead of a single overall satisfaction score. It's stupid to ask users to rate, for example, how much they like the graphics. If people have strong feelings about how something looks — whether it be pleasing, ugly, or inappropriate — they'll voice those feelings during the task. The one thing a questionnaire should ask users about is overall satisfaction. Detailed issues are much more valid if you assess them based on users' behavior while performing tasks, rather than asking for a retrospective rating.
- Ending the session with a long discussion about how users might feel about potential new product developments. Again, focus groups are better for this. Also, users' reactions to prototype designs while performing tasks are much more valid than people's hypothetical speculations about what they might like. Spend your time collecting valid data rather than speculative data, even if doing so requires you to mock up a few more pages.
Each of these wasteful steps takes only a few minutes. Together, however, they can amount to users spending as much as 30 minutes filling in questionnaires rather than performing tasks.
Maximizing Insights
Some overhead is inevitable in any test situation: you've got to welcome users, have them read and sign a consent form (ideally, a short one), and then debrief them and hand out incentives when the test is over. Try to keep such overhead to five minutes, then budget maybe another five minutes for collecting the most necessary subjective satisfaction ratings. By doing so, you'll waste no more than 11% of a 90-minute session.
If you spend 30 minutes on focus-group-style data and 10 minutes on general overhead (courtesy of endless bureaucratic forms or the like), you'll waste 44% of a 90-minute session.
Make a budget for your user testing time and ensure that you devote the bulk of it to observing users as they perform interactive tasks with your design. This is your project's best use of testing resources: the most valid and actionable usability insights come from observing behavior. It's also better for your own career development: to build expertise in the usability field, you must observe the widest possible range of user behavior.
How many hours per year do you spend observing actual users performing tasks with your designs? If it's less than 20 hours, you need to do more user testing. You should probably also reprioritize the time budget for the test sessions you do run so that they'll generate more behavioral data.