When people read the information on a webpage, they don’t distribute their attention uniformly: some content gets more attention than other content. Indeed, if you ever run or look at results from an eyetracking study, you will see that certain areas of the screen tend to receive more gazes than others.

Is it good or bad if people glance at one piece of content more than at others? Are they confused by it, or are they engaged with it? In eyetracking research, we can tell the difference. We observe that people spend more time on a certain page segment in three circumstances:

1. Exhaustive review. During exhaustive review, the eyes are drawn to the same area of the page repeatedly. In other words, the person looks at an area, looks elsewhere (maybe on the same page or even on another page), and then returns to the original area. In more extreme cases, this back-and-forth is repeated multiple times.

In our 2009 “Eyetracking Web Usability” book we described exhaustive review as “unconstructive combing of pages and menus.” Exhaustive review happens when people are confused because the page content or the page UI violates users’ expectations. Only when users think that the site is the best place to do their task do we see exhaustive review; otherwise, if they have doubts about the site and feel confused, they simply leave. 

In these situations, users’ confusion reflects an “I can’t believe it’s not there” feeling. People are so convinced that the content should be in an area that they return to it multiple times, believing they must have made an error and missed it.

2. Desired exploration. Sometimes people look at the same content multiple times not because they are confused, but because they enjoy it or it has many nuances that are relevant to them. Desired exploration occurs when users refer to the same piece of content multiple times because they are interested and highly engaged with it.

For example, imagine a person using a website to change the filter in her lawnmower. She will move attention back and forth between the physical lawnmower and the instructions containing a diagram of the equipment, to make sure that she is on the right track as she is doing the task. Looking at the instructions multiple times is necessary to complete the task.

Or, imagine a traveler who is going to Hungary for the first time and planning to sightsee. He finds a filmstrip of images of Budapest and scrolls through them slowly. Then he revisits some of the images again, because they intrigue him, and he wants to absorb them better. He is enthusiastic, not confused.

3. Necessary review. We know that web users often scan content to locate information that is relevant to their goals. When something looks promising, they tend to read it carefully to determine whether it answers their need. Or they may spend a few extra fixations on a particular sentence that summarizes an important concept. That is necessary review. It occurs when the eyes return to something once, or very few times; the extra fixations serve to cement the understanding of the text and to get a sense of the underlying meaning.

The existence of these three repeated-looking patterns shows that one should not rely on a simplistic analysis of eyetracking data that counts fixations and declares success if a page element gathers lots of looks. We need to dig deeper and find out what those extra looks really mean.
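The difference between “many fixations” and “many separate returns” can be made concrete in analysis. As an illustrative sketch (the data format is hypothetical, not tied to any eyetracking tool), the snippet below counts how many separate visits (runs of consecutive fixations) land on each area of interest; one long visit to an area reads very differently from many returns to it:

```python
from itertools import groupby

def count_visits(fixation_aois):
    """Count separate visits (runs of consecutive fixations) per area of interest.

    A raw fixation count would treat three fixations in a row the same as
    three returns from elsewhere; counting runs distinguishes the two.
    """
    visits = {}
    for aoi, _run in groupby(fixation_aois):
        visits[aoi] = visits.get(aoi, 0) + 1
    return visits

# A hypothetical fixation sequence: each entry is the area the eye landed on.
sequence = ["grey", "grey", "blue", "grey", "yellow", "table", "blue", "grey"]
print(count_visits(sequence))  # {'grey': 3, 'blue': 2, 'yellow': 1, 'table': 1}
```

Here the grey area collects four fixations across three separate visits, a pattern that would prompt a closer look at the session recordings: is it exhaustive review, desired exploration, or necessary review?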

In this article we focus on exhaustive review.

Why Exhaustive Review Occurs

Why do people turn back to a certain screen region? What makes them believe that they must have missed something and why do they expect that the content they’re looking for should be there? There are two main reasons:

  1. Like information is expected to appear together. People assume that similar content and UI elements will be placed together on the page.
  2. Common web-design patterns (that is, how most other websites are designed) set user expectations for where UI elements and content will appear.

Like Information

I recall witnessing the first profound example of exhaustive review in 2005. The user was trying to find which sport and position George Brett played. (I had a crush on the American League slugger from about 1982 to 1992. As an homage, he sometimes makes an appearance in my usability-test tasks.)

The user did a Google search and chose one of the first hits, a page about George Brett on a site called Baseball Reference. She quickly determined that George Brett played baseball. Next, she had to find his position.

She first looked in the main content area, a grey rectangle with text, positioned in the upper left, just below the navigation. It listed his full name, that he batted right and threw left, and that he weighed 200 pounds and stood 6 feet tall. It included important dates, like his birthday, his major-league debut, and when he was inducted into the Baseball Hall of Fame. But this grey rectangle did not include his field position.

The user read everything in the grey section, then looked to the blue section on the right. When she learned that Brett’s field position was not there either, she looked at the grey section again, even though she had already read it all. No luck this time either, so she moved to the yellow section below. When she didn’t see the position there, she looked at the Batting table below, then up to the blue section again, and then once more to the left, to the grey section. There, she read quite a lot — again.

Baseball-reference.com: The blue dots in the gaze plot display one participant’s fixations, numbered in chronological order. Even after carefully reading all the information in the grey area, she returned to it a few times, looking for the answer that she expected to find there and wasn’t.

When we first noticed this pattern, we thought it was probably just an anomaly. But then several users attempted the same task and their eyes followed the same path, gravitating again and again to the grey area on the page. Why?

  1. Context. The users, whether baseball fans or not, considered that a player’s position would be one of the most basic pieces of information about the player. All basic information about the player was consolidated in that grey rectangle, so the field position should have been there too.
  2. Additionally, that rectangle was in the top left of the page’s content area, which is often a priority spot in a webpage layout that people notice first and attend to carefully.

You can almost hear users saying, “It must be here,” when they return to the original section during exhaustive review. 

Fast forward to our most recent eyetracking study, where we looked at both desktop and mobile websites. One of the mobile tasks was to buy a Vespa that goes at least 50 miles per hour and costs $5K or less.

The participant was comparing 2 scooter models on the Vespa website. He read all the Fuel and Speed details of each model — fuel consumption, top speed, engine, tank capacity. He then moved to the Dimensions area, scanned it, and scrolled down further. The list of links seemed to indicate that he was no longer looking at details about these 2 Vespa models. He scrolled to the end and, when he saw the social icons, immediately scrolled up to the top of the page and said, “Hmm, where’s the price?”

There, he tried tapping one of the models, presumably to get more details about it, but nothing happened. He scrolled and scanned the titles of the same sections of the page again, and read the menu below more closely; then he scrolled to the end of the page, shook his head “no,” and scrolled to the top of the page yet again. Once more he scanned the details, even though he had already looked at these sections fully, and said, “I am expecting to have prices too.”

The top half (left) and the bottom half (right) of a page on the mobile version of Vespa.com that compares two Vespa models.

 

Watch the video of this user interaction in our eyetracking study:

This video was recorded through special eyetracking glasses worn by the user. The red circle represents where he is looking. When he shakes his head “no,” this appears as a vigorous panning of the video image in the recording.  (In most browsers, hover over the video to display the controls if they're not already visible.)

Common Web-Design Patterns

There are no true, written web-design standards, but there are some common locations and designs for certain elements. Let’s consider one of these now with a little experiment: Find Nielsen Norman Group’s global navigation.

Did you immediately look to the top of the page? It would not be surprising if you did because the main navigation on desktop traditionally appears as a horizontal list of links across the top of the page. On mobile, too, the main navigation usually appears at the top, although it is often collapsed under a button (with an icon, text, or both).

We conducted a version of this experiment with more than 200 users and eyetracking technology. We tested more than 40 desktop websites and asked people to locate various UI elements. We then launched each site and observed where people looked for them. To prevent bias, we gave each user a random subset of tasks, and also randomized the order in which people had to do these tasks.

The position of the company logo was one of the elements we studied. A logo is present on most web pages, and is usually located in the upper left corner in desktop designs. Consider 4 of the sites we studied: National Education Association, Dell, the Florida Keys & Key West, and Kohler. The first two of these displayed the logo in the upper left, and the others showed it centered at the top.

Both www.nea.org (left) and www.dell.com (right) positioned the logo in the upper left corner of pages.

 

www.fla-keys.com (left) positioned the logo, or site title, in the center of pages, as did www.kohler.com (right). Both sites had elements that might be mistaken for a logo in the upper left.

Users who were asked to find the logo on the NEA and Dell sites found it immediately. They first looked to the far upper left, the common position for logos, and located it in a fraction of a second. Once they found the logo position, they spent more fixations on it to parse it and confirm that it was the answer they needed. This is an example of necessary review.

On the NEA site, the red heat concentrated on the logo shows that people found the logo without looking in multiple places. 30 users located it within just a fraction of a second.

 

On the Dell site, the red heat concentrated on the logo shows that people found the logo without looking in multiple places. 30 users located it within just a fraction of a second.

But on the sites where the logo was positioned in the center, users took at least a second and sometimes a few seconds to find the logo, wasting time and fixations.

On the Florida Keys site, several people looked first to the far upper left, at the map icon. Then their eyes moved to the temperature — the closest bit of content, which still appeared in the upper-left area of the page. Next, they went back to the map label. That is an example of exhaustive review: even though users had already determined that the item in the far upper left was not the logo, when the second-best candidate also turned out not to be the logo, they looked at the first element again.

Next, participants moved their eyes to the small grid of social icons to the right of the temperature, and, when they determined that it wasn’t the logo, some went back to the temperature once more, in the hopes that it was the logo even though they had already seen that it said 81 degrees. Finally, participants looked at the centered text, Florida Keys and Key West, and stopped there when they determined that they had found the logo.

Unlike for NEA and Dell, where most gazes were concentrated on the logo, the heatmap on the Florida Keys site showed fixations not only on the logo, but also in the upper left corner (where the logo was expected to be). This type of heatmap pattern indicates that people looked in multiple places before they found the logo. 30 users took at least a second to find it.

On the Kohler site, participants’ eyes gravitated towards the upper-left area containing the KOHLER Worldwide icon and dropdown, and the BATHROOM menu. Some alternated between these items a few times (also an example of exhaustive review) before eventually moving their gazes to the right and settling on the tagline The Bold Look of KOHLER in the center. After examining this element, unsure whether it could be considered a “logo,” a few participants went back to some of the items on the left again.

Like the Florida Keys site's heatmap, the Kohler site's heatmap showed fixations not only on the logo, but also in the upper left corner (where the logo was expected to be). People looked in multiple places before they found the logo. 30 users took at least a second to find it.

Whether the company names or taglines are logos is a separate discussion. The point of this example is: If people expect an item to be in a certain place, they will look there first, and may return to looking at that place even after they have seen that the item they want is not there.

How to Avoid Exhaustive Review

Avoiding exhaustive review means knowing what users want and where they expect to find it. Following generally accepted UX design and research practices is the best way to combat exhaustive review.

In particular:

  • If you’re considering a design that deviates from web conventions, test it thoroughly and use it only if all your usability research indicates that it is dramatically better than the norm. (If your proposed design is just a little better, don’t use it, because the penalty will be higher than the gain.)

Summary

Let’s be clear: exhaustive review is a user behavior to avoid. Looking exhaustively is exhausting.

You can see exhaustive review in action in eyetracking studies. And, if your analytics data or usability tests show that people spend a long time on your page, it may be because they are engaged with the content or because they are puzzled by it. To help determine if exhaustive review is the culprit, ask yourself these questions:

  • Have we positioned something in a non-standard place?
  • Have we omitted information users might expect to find?
  • Have we placed highly related content in different places?
  • Does the content placement reflect content importance?
  • Do users say (in usability tests, feedback forms, or support correspondence) that they expect or are looking for something they cannot find?

If the answers are inconclusive, consider running moderated, think-aloud usability tests (remote or in person), or embark on your own eyetracking usability study.

 

For more about the behaviors discussed in this article, consult our “Eyetracking Web Usability” book and the “How People Read on the Web: The Eyetracking Evidence” report.