Writing good digital content requires a deep understanding of who your users are, how they think, and what they know. Testing your product’s content with users can help you to determine whether:

  • Your users can easily understand and process the information
  • The content has the tone of voice you predefined
  • There are jargon terms that need to be explained

You can evaluate your content using a variety of methods (including eyetracking and cloze tests), but our favorite way is through usability testing. A content-focused usability test can work much like any other such test, but there are some nuances to consider when the primary goal is evaluating digital copy.
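A cloze test, for readers unfamiliar with the method, removes every nth word from a passage and asks readers to fill in the blanks; the share of blanks filled correctly gives a rough comprehension score. The short sketch below is only an illustration of those mechanics, not part of the usability-testing approach this article recommends; the sample passage, the every-fifth-word rule, and the function names are invented for the example.

    # Illustrative cloze-test generator and scorer (assumed example, not a described tool)
    import re

    def make_cloze(text, n=5):
        """Blank out every nth word; return the gapped text and an answer key."""
        words = text.split()
        answers = {}
        for i in range(n - 1, len(words), n):
            answers[i] = words[i]
            words[i] = "_____"
        return " ".join(words), answers

    def score(responses, answers):
        """Fraction of blanks filled with the missing word (ignoring case and punctuation)."""
        norm = lambda w: re.sub(r"\W", "", w).lower()
        correct = sum(1 for i, word in answers.items()
                      if norm(responses.get(i, "")) == norm(word))
        return correct / len(answers) if answers else 0.0

    passage = ("Preferred stock usually pays a fixed dividend and has priority "
               "over common stock when a company distributes its assets.")
    gapped, key = make_cloze(passage)
    print(gapped)            # the passage with every 5th word replaced by a blank
    print(score(key, key))   # 1.0 -- sanity check with a perfect set of answers

In practice, you would hand the gapped passage to representative readers and score their answers; the code simply makes the mechanics concrete.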

Test Structure & Facilitation

Learn About the Topic and Content

As the researcher or facilitator, you should be extremely familiar with the content you will test and with the domain it belongs to. This is particularly important for people working for agencies, since they may be new to the content area. 

For example, let’s imagine we were hired to test the content on Investopedia.com, a site that provides investment news as well as explanations of complex financial concepts written for different experience levels. We’d need to start by spending hours just exploring the site: learning the types of content offered, the target audience(s), and as much as possible about the subject matter. This last type of knowledge would be particularly important if we weren’t already very familiar with finance or investment.

We’d also want to spend time with the content creators, as well as with subject-matter experts. As a researcher, you don’t need to become an expert in the topic (investment), but you do need to have a rough idea of what your participants are reading.

Use Moderated Instead of Unmoderated Studies

In remote unmoderated studies, participants work on their own, with no facilitator present. Even though this variation of usability testing is cheaper, we recommend that you do not use it for content studies. When trying to discover how people research a topic, compare offerings, and make decisions, the best approach is to conduct a moderated study, where a facilitator is present (physically or remotely).

Facilitators can ensure that participants process the content naturalistically instead of approaching the task superficially. 

Content studies tend to have long stretches of time when the user is simply scanning page after page—in silence. When left alone (such as in a remote unmoderated test), participants may feel awkward and wonder whether they’re being helpful. Without proper feedback and reassurance, participants may rush through the test and approach the task in a superficial manner. This behavior is often reinforced by the shorter session times common in unmoderated testing (typically 20–30 minutes).

Additionally, having a facilitator enables specific, personalized follow-up and clarification questions, such as “I noticed you hesitated on this paragraph, can you tell me what you were thinking?” 

Below are more examples of valuable follow-up questions for content studies, grouped by the goal they serve.

To encourage participants to share any problems or issues they noticed with the content:

  • What did you think about that information?
  • If you could change anything about that information, what would it be?
  • What was easy or difficult to understand, and why?

To determine whether a technical term makes sense to participants or whether it’s jargon that needs better explanation:

  • What does the word [X] mean to you?

To evaluate whether participants understood what they just read:

  • If you had to explain this to a child, what would you say?
  • Can you please summarize that information in your own words?

(If they can easily and correctly summarize the content in their own words, it’s a good indication that they understand it. But if they have to look back at the text and read verbatim from it, they probably don’t have a clear understanding.)

To subtly prompt the participant to describe the tone of voice of the content:

  • Imagine a person said these words to you. Who would that person be?
  • What would they look or act like?
  • What job would they have?

Note: Don’t use the word “content” when speaking with study participants — users don’t typically have the same associations with that word as content professionals.

Be Comfortable with Silence

Being comfortable with silence is important for any kind of facilitation, but it’s especially necessary for content testing.

Expect long stretches of quiet time while the participant focuses on processing the information. Don’t appear impatient. Avoid being interruptive or fidgety. Injecting too many questions while users work breaks their concentration and alters their behavior. 

If you need to ask a question mid-task, keep it neutral, such as “What are you thinking?” or “What are you looking for?” Once users answer, let them continue. Resist the temptation to blast questions. Wait until the participant has finished reading and is ready to provide feedback. 

Participants & Tasks

Recruit the Right Participants

You should always aim to test your designs with representative users. However, when testing content, you should take extra care to recruit the right participants.

The people evaluating your content should truly be representative of your user population: they should have the same mindset, situation, and user goals — especially when your tasks are content-rich, research-intensive activities.

In other words, the scenario that you give people should match a problem they need to solve in real life. Unlike UI-focused studies, content-focused studies should not ask test participants to “pretend” or “imagine” being in a situation. The risk of invalidating the study by using the wrong participants is higher for content studies because the participants’ motivation and background knowledge are much more important for obtaining accurate insights.

Consider the National Cancer Institute’s site, a medical reference that describes various forms of cancer and their treatment. Some of the content is intended for patients and some for healthcare professionals.

People who have been diagnosed with a serious medical condition are more likely to relate to the content accurately than someone asked to pretend to be interested in a disease. Beyond their different level of emotional involvement, patients may know more about the disease from speaking with their doctor or doing their own research. In this situation, it may be acceptable to also recruit the primary caregivers for someone who was diagnosed (for example, a person whose partner had the disease), as long as that person was highly involved in the care.

To test an article about adult non-Hodgkin lymphoma on the National Cancer Institute’s site, we’d need to recruit people who had been diagnosed with the disease or were the primary caregivers for someone who had it.

It is impossible for proxy users to instantly acquire knowledge or know the situation well enough to assess the value of the content — especially when the content is scientific or technical.

In our Investopedia example, to evaluate most of its content, we would need to recruit people genuinely interested in learning about investment. The odds are very high that a random person will not have the background knowledge to understand or enjoy an article titled “Understanding Preferred vs. Common Stock.” Even if the content works very well for a specific audience, it isn’t likely to be valuable for everyone in the world.

The challenge of content testing is precisely that — whether content works well depends so heavily on who it’s written for.  

Tailor Tasks to Individual Participants

In most traditional usability studies, researchers follow a prepared script and give study participants preestablished tasks to perform. Content testing often requires flexibility to ensure that each individual gets the right task.

It’s OK to prepare some generic tasks prior to the study, but be willing to modify or craft new ones on the spot as you learn more about the participant’s situation and as the session unfolds. You want to give participants the freedom to research a topic as they please, so you uncover what’s important. Don’t force an unrealistic task. The more pertinent the content tasks, the more natural people’s behaviors are when completing them.

The best results occur when study participants forget about the testing environment and immerse themselves in the activity. Participants can sometimes “fake” their way through simple pass-or-fail activities (e.g., “Find the contact name for Press Relations”), but that is not the case for exploratory tasks, where having a scenario that precisely matches the person’s current situation and emotional state is critical.

In the Investopedia example, the site’s huge collection of articles targets people with different levels of financial expertise, from complete investment beginners (“Investing Essentials”) to intermediate or advanced users (“Guide to Technical Analysis”). Some of the specialized topics may interest only a subset of its audience: someone who is interested in learning how to invest in index funds may not be interested in learning how to trade options.

In a test of the site, we might spend time at the beginning of each session (or schedule a prestudy interview) to discuss the participant’s situation and make sure the task scenarios match each individual’s exact circumstances. For example, we might ask:

  • How long have you been investing in the stock market?
  • What types of investments are you familiar with? (Stocks, mutual funds, options, etc.)
  • What types of investments are you interested in learning more about?

We might even gently quiz participants, asking them to define various concepts, to assess their domain knowledge. 

These questions would give us a sense of the participants’ experience levels and interests. In addition, they could help us avoid presenting participants with tasks that are uninteresting, irrelevant to them, or beyond their individual ability to understand (because they were written for a different experience level).

Content Testing Often Requires Open-Ended Tasks

To test content properly, write open-ended information-seeking tasks around the content of interest.

Unlike specific, directed tasks (e.g., “Find the opening hours for the Fremont public library”), open-ended tasks don’t have a definitive answer, but are meant to assess content quality and relevance. Use them to learn how people explore and research, what questions they have, how they expect information to be communicated, and whether your site meets their needs.

Open-ended tasks have vague end points, often making participants wonder how to best spend their time. At the beginning of each session, tell people to work at their own pace, as if they were by themselves, and not to worry about the time.

For example, to evaluate the Investopedia article “Smart Strategies for a Bear Market,” we should not take participants to that page and just ask what they think about it. That isn’t how people arrive at a content page. We must give them an information need, such as:

“The stock market has been declining recently and seems like it will continue to do so. You’re looking for advice on how to invest during a market downturn. See if you can find advice on Investopedia.com.”  

To test the Investopedia article “Smart Strategies for a Bear Market,” we have to write an open-ended task that might lead participants to that page.

Consider competitive testing. Sometimes you can get insights into your users’ needs by allowing them to search freely on the web or by letting them visit competitors’ sites rather than restricting them to your own site. Don’t worry that you’re wasting precious testing time: if users are truly representative, the insights will often be revelatory. And you can always limit the free exploration to a small part of your session.

Conclusion

When the focus of your study is to evaluate content, you may need to adjust traditional research techniques to learn how to improve your content and to ensure valid results. Take extra care to give participants tasks that are realistic and match their current situations, interests, or needs.

We have run hundreds of content studies with the methods discussed here, as well as eyetracking research. Check out our course, Writing Compelling Digital Copy, to learn what we found.