One of the biggest causes of user failure is that users simply can’t find what they’re looking for on the website. The first law of e-commerce design states, “if the user can’t find the product, the user can’t buy the product.” So these design flaws are not just usability problems; they’re often a site’s biggest profitability problems as well.

Findability and Discoverability Issues

When site visitors routinely resort to search for content that should be easy to locate by browsing, or when mission-critical pages receive insufficient within-site traffic, the site may suffer from low findability and discoverability.

Findability: Users can easily find content or functionality that they assume is present in a website.

Discoverability: Users encounter new content or functionality that they were not aware of previously.

High findability and discoverability are the results of a well-defined information architecture and a well-designed navigation system. The challenge with findability and discoverability issues is determining the root cause: is it the information architecture or is it the navigation design? Here are 2 examples to illustrate the difference between IA and navigation/UI issues:

Problem Example 1: Site visitors are not visiting two important sections of the site.

Potential issues that can cause the problem:

IA-issue: Users do not understand or are not attracted to the names of the sections.

UI-issue: Users do not notice the links to the sections.

Problem Example 2: Site visitors never use a Related Links navigation component on content pages.

Potential issues that can cause the problem:

IA-issue: The content links included under Related Links are not relevant to what users need (a classification issue).

UI-issue: Users do not notice the Related Links navigation component (because perhaps it’s too low down the page or is mistaken for an advertisement).

The cost of guessing the cause of issues can be very high. It would be a shame to spend money redesigning an entire user interface only to discover that the underlying IA was the real issue, or vice versa. With limited resources and time, knowing the root cause is priceless. All of the methods that we recommend can be executed quickly, remotely, and without moderation (unless desired). There are no excuses for not testing.

Employing Multiple Methods to Determine Cause

The key to identifying the true cause of a problem is to combine multiple testing methods. By running separate studies to measure (a) the information architecture (IA) and (b) the user interface (UI), we increase the likelihood of correctly identifying the cause of website failures.

The 4 methods described below answer different questions (IA- or UI-focused) and provide results that are either qualitative or quantitative (or both).

Note: Usability testing is usually qualitative, but it can be quantitative with some extra effort or by using online tools for unmoderated quantitative testing.

1. Tree Testing

A tree test is an IA-focused technique conducted to determine whether mission-critical information can be found in the site’s IA. It does not show the user interface to test participants; they navigate using only the names of links.

Questions it answers:

  • Are the names of categories understandable?
  • Do the category names accurately convey content?
  • Is content categorized in a user-centered manner?
  • Are content titles distinguishable from one another?
  • Is information difficult to find because the structure is too broad or too deep?

Setup and testing:

To set up a tree test, you create an information-architecture “tree” that delineates the groupings and hierarchy of pages (you can create this in a spreadsheet and paste it into the system that you use for testing). You then create tasks that involve finding particular destinations (called “end nodes”) in the information architecture (e.g., “Find a health insurance plan that covers a family of four and costs less than $500 per month”). Study participants conduct the tasks using the tree.
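For illustration only, here is a minimal Python sketch of how such a tree and its tasks might be represented before being pasted into a testing tool. All category names, the structure, and the task below are invented; every tool has its own import format (most accept an indented spreadsheet):

    # Hypothetical IA tree as nested dicts: keys are category labels,
    # leaves (None values) are "end nodes" that tasks can target.
    ia_tree = {
        "Insurance Plans": {
            "Individual Plans": None,
            "Family Plans": None,   # intended destination for the task below
            "Medicare Plans": None,
        },
        "Find a Doctor": {
            "Search by Specialty": None,
            "Search by Location": None,
        },
        "Member Services": {
            "Claims": None,
            "Billing": None,
        },
    }

    # Each task pairs an instruction with the path to its correct end node.
    tasks = [{
        "instruction": ("Find a health insurance plan that covers a family "
                        "of four and costs less than $500 per month"),
        "correct_path": ["Insurance Plans", "Family Plans"],
    }]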

Results:

The results are quantitative and include, but are not limited to:

Direct success rate: How many participants found the right answer without having to go back up and down the tree?

Indirect success rate: How many participants got the right answer, but had to navigate back up and down the tree to find it?

First-click data: Which tier-1 categories did users click first? First clicks are a good indicator of the strength of category names.
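The testing tools compute these metrics for you, but to make them concrete, here is a rough Python sketch that tallies them from invented participant paths (a "BACK" entry marks a return up the tree):

    from collections import Counter

    # Invented recorded paths for one task.
    paths = [
        ["Insurance Plans", "Family Plans"],                             # direct
        ["Member Services", "BACK", "Insurance Plans", "Family Plans"],  # indirect
        ["Find a Doctor", "Search by Location"],                         # failure
    ]
    correct = "Family Plans"

    direct = sum(p[-1] == correct and "BACK" not in p for p in paths)
    indirect = sum(p[-1] == correct and "BACK" in p for p in paths)
    first_clicks = Counter(p[0] for p in paths)  # gauges tier-1 label strength

    print(f"Direct success:   {direct}/{len(paths)}")
    print(f"Indirect success: {indirect}/{len(paths)}")
    print(f"First clicks:     {dict(first_clicks)}")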

Tools: Treejack, UserZoom Tree Testing, Userlytics Tree Testing

Treejack tree-test interface for participants: A task is summarized at the top of the screen and participants must navigate the label names illustrated in the tree to find the desired information.


Treejack tree-test summary results for one task indicate direct success, indirect success, and time taken.

2. Closed Card Sorting

Closed card sorting is an IA-focused method conducted to evaluate the strength of category names.

Questions it answers:

  • Are the names of categories understandable?
  • Do the category names accurately convey content?
  • Is content categorized in a user-centered manner?
  • Are content titles distinguishable from one another?

Setup and testing:

To conduct this type of test, you provide participants with “cards,” each bearing the name or a description of a piece of content or functionality. Participants must then assign these cards to your predefined categories. (Such closed card sorts are the opposite of traditional open card sorting, where users get the same stack of cards but have to create the categories themselves.)

Results:

The results are both quantitative and qualitative, and include:

Similarity: Number of times the same content was grouped together

Standardization Grid: How many times each card was sorted into each of your categories (and, in particular, into the category you intended)

Logic of assignment: With card sorting, it’s recommended to moderate a portion of your tests, either in person or remotely via teleconference. This allows you to interview users to understand why they grouped certain content together, why they assigned items to particular categories, and how they interpret the category names.
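Conceptually, the standardization grid is just a cross-tabulation of cards against categories. The following Python sketch builds one from invented sort data:

    from collections import defaultdict

    # Invented results: each dict maps a card to the category a participant chose.
    sorts = [
        {"Dental Coverage": "Insurance Plans", "Pay My Bill": "Member Services"},
        {"Dental Coverage": "Insurance Plans", "Pay My Bill": "Insurance Plans"},
        {"Dental Coverage": "Find a Doctor",   "Pay My Bill": "Member Services"},
    ]

    # grid[card][category] = number of participants who placed the card there
    grid = defaultdict(lambda: defaultdict(int))
    for participant in sorts:
        for card, category in participant.items():
            grid[card][category] += 1

    for card, counts in grid.items():
        # A card whose most-chosen category differs from the intended one
        # signals a classification or naming problem in the IA.
        print(card, dict(counts))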

Tools: OptimalSort, Usabilitest Card Sorting, UserZoom Card Sorting, Userlytics

OptimalSort’s interface for closed card sort: The “cards” are on the left, the categories are in the body of the page. Participants drag cards onto the categories to conduct the sort.


OptimalSort: Standardization Grid for closed card sort illustrates how many users assigned the cards to each category. If most users picked a different category than what you had intended for a given card, then it’s time to reconsider your IA structure.

3. Click Testing

Click tests are UI-focused; they are conducted to determine where users click on the interface to find specific information or functionality. One drawback of click tests is that they are not interactive: participants are shown static images of a site and indicate where they would click to perform a task. Once they click, the task is considered complete and they move on to the next one. To test interactive elements, you should conduct usability testing instead.

Questions it answers:

  • Which navigation components are utilized?
  • Which navigation components go unnoticed?
  • Which navigation components are avoided?

Setup and testing:

To conduct this type of test, you upload a screenshot, wireframe, or sketch of a page into a click-testing tool and then create tasks. Participants click on the image to indicate where they would go to conduct each task.

Results:

The result is a heatmap illustrating where users clicked. The heatmap helps you determine whether the navigation design is noticeable or whether competing elements create too much visual noise.
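Click-testing tools render the heatmap for you; under the hood, it amounts to binning click coordinates over the screenshot. A rough Python sketch, with invented coordinates:

    import numpy as np

    # Invented click coordinates (x, y), in pixels, on a 1024x768 screenshot.
    clicks = np.array([[512, 90], [508, 95], [120, 600], [515, 88]])

    # Bin clicks into a coarse grid; "hot" cells are where clicks cluster.
    heatmap, _, _ = np.histogram2d(
        clicks[:, 0], clicks[:, 1],
        bins=(16, 12), range=[[0, 1024], [0, 768]],
    )
    print(heatmap.T)  # transposed so rows run down the screen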

Tools: Usabilla First Click Tests, Chalkmark, UserZoom Screenshot Click Testing

Chalkmark’s click-test interface for participants: A task is summarized at the top of the screen and participants must click on the image to indicate where they would go to conduct the task.


Chalkmark’s click-test heatmap illustrates where users click to conduct each task in the study.

4. Usability Testing

Usability testing is conducted to determine how and why users navigate a website (or a website prototype) to conduct tasks.

Questions it answers:

  • How do users find information?
  • Which navigation components are utilized?
  • Which navigation components go unnoticed?
  • Which navigation components are avoided?

Setup and testing:

To conduct this type of test, you can use prototypes (paper or interactive) or a live site. You then create tasks and ask participants to carry them out. You observe users conducting the tasks and note when they interact with navigation components, how they interact, and whether they avoid or overlook navigation. You can conduct usability testing in person or remotely. Remote usability testing can be moderated live (via teleconference) or unmoderated using a variety of online services, from established ones like UserZoom and WhatUsersDo to startups like YouEye.

Standard user testing requires no equipment beyond a live user and a computer (or a piece of paper, in the case of paper prototyping). However, if budget allows, you can run the study with an eyetracker, which may help if you’re particularly concerned about whether users ever even see the desired navigation components.

Results:

The results include task success rates and difficulty ratings, identification of the interface elements that cause friction, and a better understanding of users’ mental models of the site.
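If you do go quantitative (see the note earlier in this article), the headline metrics reduce to simple aggregation over sessions. A minimal sketch with invented data, using a success flag plus a 1–7 difficulty rating (7 = hardest):

    # Invented per-participant results for one task.
    sessions = [
        {"success": True,  "difficulty": 2},
        {"success": False, "difficulty": 6},
        {"success": True,  "difficulty": 3},
        {"success": True,  "difficulty": 4},
    ]

    success_rate = sum(s["success"] for s in sessions) / len(sessions)
    mean_difficulty = sum(s["difficulty"] for s in sessions) / len(sessions)
    print(f"Task success rate: {success_rate:.0%}")        # -> 75%
    print(f"Mean difficulty:   {mean_difficulty:.1f} / 7")  # -> 3.8 / 7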

Tools: In-person testing, remote moderated tests with services like GoToMeeting or WebEx, remote unmoderated testing tools

Identifying the Cause Is Key to Successful Remediation

When running multiple types of studies, you may find positive results from one test and negative results from another. Such inconsistent results precisely illustrate the value of running multiple tests to cover both IA and UI. For example, you may run a closed card sort and find that users have no trouble assigning subcategory content to your global categories. However, a click test (of the same environment) may yield painfully low success rates, with users clicking in all the wrong places when asked where they would go to conduct mission-critical tasks. Put together, the 2 tests indicate that your category names are fine but your layout is problematic, and that your best investment is in designing new layout configurations.

Low findability and discoverability are panic-inducing problems that can lead to knee-jerk, uninformed, and unsuccessful fixes. It’s not uncommon for teams to have limited time to conduct studies, which is why these four methods are so useful: they can be set up quickly, run simultaneously, and conducted remotely, without any moderation if so desired. Therefore, combining two or more of these tests is a reasonable investment of money and time, as it can help you home in on the cause of your problems and will mitigate the risk of investing in an expensive nonsolution.

Please note:

We have listed a variety of tools for the research methods discussed in this article. While we recommend the methods, we don’t specifically recommend any individual tools. As a vendor-neutral organization, Nielsen Norman Group doesn’t endorse such products, and the one to use on any given project would depend on your specific needs and budget. We are happy to provide consulting on such questions, but the answer would be different in each case.