How gullible are Web users? Sadly, the answer seems to be "very."

Professor Thorsten Joachims and colleagues at Cornell University conducted a study of search engines. Among other things, their study examined the links users followed on the SERP (search engine results page). They found that 42% of users clicked the top search hit, and 8% of users clicked the second hit. So far, no news. Many previous studies, including my own, have shown that the top few entries in search listings get the preponderance of clicks and that the number one hit gets vastly more clicks than anything else.

What is interesting is the researchers' second test, wherein they secretly fed the search results through a script before displaying them to users. This script swapped the order of the top two search hits. In other words, what was originally the number two entry in the search engine's prioritization ended up on top, and the top entry was relegated to second place.

In this swapped condition, users still clicked on the top entry 34% of the time and on the second hit 12% of the time.

The Top Hit's Attraction

This study goes a long way toward explaining why users tend to click the top hit. There are two plausible explanations:

  • Search engines are so good at judging relevancy that they almost always place the best hit on top.
  • Users click the top hit not because it's any better, but simply because it's first. This might be due to sheer laziness (after all, you start from the top) or because users assume the search engine places the best hit on top, whether that's actually true or not.

As the study shows, the answer is clearly a little of both.

If users always clicked the best link, then swapping the order of the two links should also have swapped the percentages, but this didn't happen. The top hit still got the most clicks.

On the other hand, if users trusted the search engine implicitly and clicked the top link simply because it's first, then swapping the links shouldn't have changed the percentages. This didn't happen either. The top link's click rate dropped from 42% to 34%. In other words, 8 percentage points' worth of clicks went elsewhere: 4 points to the second hit (which was originally ranked first) and 4 points to other results.

The researchers also assessed whether the search engine in fact placed the best hit on top. Obviously, this can only be determined by having humans judge which websites are best, and there is no objective criterion for "best website to answer a given question." Still, the researchers derived relevancy ratings by averaging the judgments of five people, and that's probably as good as it gets in terms of estimating information relevance.

On average, the top hit in the original search listings was judged as being the most relevant 36% of the time; the second hit was most relevant 24% of the time; and the two hits were equally relevant 40% of the time. In other words, the search engine tended to be right, but was wrong about one-fourth of the time. (If the two hits are equally relevant, it doesn't matter which one is placed on top, so I counted these cases as being right.)

Given how often the search engine was wrong, users clicked the top hit far too frequently. And when the two top hits were swapped, too few users changed their behavior. In other words, we can conclude that there is a strong bias in favor of clicking the top link, though not so strong that link quality is irrelevant.

The actionable consequences for search engine marketing are not particularly surprising: It's extremely important to be listed first, to the extent that you can achieve this. But it's also important to have good microcontent to increase the likelihood that users will perceive your site as relevant to their needs. Good page titles and article summaries are a must.

Unfortunately, summaries are difficult to control in most search engines. This study was conducted on Google, which is notorious for displaying bad snippets that are difficult to read and not particularly descriptive of page content. At least you can do better in the internal search engine on your own website or intranet — assuming you can persuade your content contributors to produce good abstracts.

Default Values Beyond Search

Users rely on defaults in many other areas of user interface design. For example, they rarely use fancy customization features, so it's important to optimize the default user experience, because that's what most users stick with.

In forms and applications, pre-populate fields with the most common value if you can determine it in advance. For example, on the registration form for our UX Conference, if people register for an event in New York, the country field will say "United States" by default, but if they register for London, it will say "United Kingdom." Obviously, many people come from other countries, and they'll have to change this entry to specify their own country — but they'd have to specify it anyway if we'd left it blank. By choosing the most common country as the default, we save many users that bit of work.
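Here is a minimal sketch of that idea in TypeScript, assuming a registration page with a country <select> whose id is "country" and a small lookup table from event city to its most common registrant country; the element id, the table, and the function name are hypothetical, not the actual registration code:

  // Sketch only: pre-populate the country field with the most likely value,
  // based on the event's host city, so most registrants never touch it.
  const defaultCountryByEventCity: Record<string, string> = {
    "New York": "United States",
    "London": "United Kingdom",
  };

  function presetCountryDefault(eventCity: string): void {
    const select = document.querySelector<HTMLSelectElement>("#country");
    if (!select) return;

    const defaultCountry = defaultCountryByEventCity[eventCity];
    if (!defaultCountry) return; // no sensible default known: leave the field blank

    // Select the matching option; attendees from other countries can still change it.
    const match = Array.from(select.options).find((o) => o.text === defaultCountry);
    if (match) select.value = match.value;
  }

  presetCountryDefault("London"); // a London event defaults to "United Kingdom"

When no sensible default is known, the sketch leaves the field blank rather than guessing, in keeping with the idea that a default only saves work when it matches what most users would enter anyway.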

Defaults make two essential contributions to usability:

  • By showing a representative value, they serve as just-in-time instructions to help users understand how to complete a field.
  • By showing a frequent value, they help users understand the commonly expected response, as opposed to more atypical ones. You can use this knowledge for sales purposes — for example, by pre-selecting the one-year option in a subscription interface that also offers monthly payments. But, if you consistently pick the most expensive option as the default, you'll lose credibility, so don't overdo it. (A small sketch of this pre-selection idea follows this list.)
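
As a sketch of that second point (again with hypothetical element names and values, not an actual subscription form), the snippet below pre-selects the one-year billing option among a set of radio buttons, but only if the user hasn't already made a choice:

  // Sketch only: make the annual plan the pre-selected default,
  // leaving the monthly option one click away.
  function presetBillingDefault(): void {
    const annual = document.querySelector<HTMLInputElement>(
      'input[name="billing"][value="one-year"]'
    );
    const alreadyChosen = document.querySelector<HTMLInputElement>(
      'input[name="billing"]:checked'
    );
    // Respect any existing selection; only apply the default to an untouched form.
    if (annual && !alreadyChosen) annual.checked = true;
  }

  presetBillingDefault();

The same effect could be achieved declaratively with a checked attribute in the markup; the point is simply that the reasonable, common choice is already selected when the form loads.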

By educating and guiding users, default values help reduce errors. It's therefore important to select helpful defaults, rather than defaults based on alphabetical order or on whatever option happened to come first in your original list.

Reference

Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke, and Geri Gay, "Accurately Interpreting Clickthrough Data as Implicit Feedback," Proceedings of the Conference on Research and Development in Information Retrieval (SIGIR), 2005. (Warning 1: link leads to a PDF file. Warning 2: it's an academic paper.)