I recently found myself making two purchases after clicking through advertisements on Google. Given that I've been saying for four years that advertising on the Web doesn't work, this might seem like a contradiction.

However, I have always maintained that search engines are the exception to the Web advertising rule. There are two reasons for this:

  • Search engines are the only type of site that users visit with the explicit intention of finding somewhere else to go as quickly as possible.
  • Because they know what users are looking for, search engines can target ads to a user's current navigation goals. Displaying an ad for something that the user immediately wants is much more powerful than targeting ads based on general user profiling and demographics.

Google's ads are even better: Because the ads are text-only, there's a chance that users will actually see them, rather than ignore them due to banner blindness. Also, because the text is short, Google ads force a focus on crisp and plain content that might actually communicate something in the second or two users typically allocate to reading ads.

I've recently been running ads for one of our own projects, and the click-through statistics provide interesting insights into advertising design. A few simple changes improved our ads' performance by 55% to 310%, which is within the range we typically see for usability-driven redesigns.

Keyword Targeting

Search engine advertising is keyword driven: You buy keywords, and when users enter those words, your ad appears. Buying the right keywords is thus a crucial step; it's easy to blow a budget rapidly on poorly targeted words.

For example, I bought the keywords "requirements specification" to run an advertisement for our field studies tutorial. Even after I tried several different wordings on the ad itself (two appear below), the click-through rate was exactly zero in the United States, although we did get a few clicks from other countries.

First wording:

  Practical Field Studies
  Full-day course: fast method to get
  user requirements from site visits.
  www.NNgroup.com

  Click-through rate (USA): 0.00%

Second wording:

  Practical Field Studies
  Base your design on the features
  users really need. Full-day course.
  www.NNgroup.com

  Click-through rate (USA): 0.00%

I finally had to acknowledge the real problem: However much I believe that field studies of real users in their natural habitat are a great way to gather requirements specifications, if the people searching for that term disagree, I'm not going to convince them otherwise in three lines.

All media have their strengths, and the Web is not a great platform for changing people's minds. Users typically click so quickly on the things they want that they don't consider alternatives.

It's thus important to understand the context of your product. People typically have strong opinions about requirements engineering. When I have them safely in their seats and can spend an hour or more wowing them with data and striking examples, I'm more likely to convince them to base feature designs on user data. But in three lines, I'm not going to change the way people run projects. It's foolish to try.

Tune the Text

Selecting the right wording for your ad is crucial to click-through. The following examples show the initial and revised wordings of an ad we ran in Europe, along with the click-through rates (we ran a different ad for the American conference). The results? Users were 55% more likely to click on the revised ad.

Original ad:

  Usability Conference
  The leading international experts
  in user experience visit London
  www.NNgroup.com/events

  Click-through rate (Europe): 0.85%

Revised ad:

  Jakob Nielsen in Europe
  4-day usability conference
  London, Nov. 4-7, 2001
  www.NNgroup.com/events

  Click-through rate (Europe): 1.32%

Our original ad (shown first) was fairly generic and did not use trust-enhancing design. Most conferences boast "leading international experts" (says who?) and, in any case, Web users are too busy to bother with general statements about how good you are. The revised ad is more specific: it names one of the speakers and provides concrete facts about the conference.
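For readers who want to check the arithmetic, the 55% figure is simply the relative improvement between the two click-through rates. A minimal sketch in Python, using the rates reported above:

    # Relative improvement between the original and revised ad.
    old_ctr = 0.0085   # original Europe ad: 0.85%
    new_ctr = 0.0132   # revised Europe ad: 1.32%

    improvement = (new_ctr - old_ctr) / old_ctr
    print(f"Improvement: {improvement:.0%}")   # -> Improvement: 55%

The same arithmetic yields the 310% figure for the accessibility ad discussed in the next section (0.40% to 1.64%).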

Geographic Targeting

We know from many studies of international usability that people use the Web differently in different countries. Thus, it comes as no big surprise that the click-through rates can differ dramatically between regions.

For example, my initial ad for the conference's accessibility day (the first shown below) generated a horribly low click-through rate in Europe, though it did reasonably well in the U.S. When I revised it to target Europeans (the second ad), the European click-through rate more than quadrupled. Well worth a few minutes of additional writing.

Original ad:

  Usability beyond ALT text
  Full-day seminar in US & Europe:
  Design for users with disabilities
  www.NNgroup.com/events

  Click-through rate (Europe): 0.40%
  Click-through rate (USA): 0.96%

Revised ad:

  Usability beyond ALT text
  Full-day seminar London Nov. 7:
  Design for users with disabilities.
  www.NNgroup.com/events

  Click-through rate (Europe): 1.64%

Why didn't Europeans click on the first ad? I didn't run it through user testing, so I don't know for sure. My speculation is that the initial ad seemed too American. Although our accessibility seminar will present research from international studies of users with disabilities, the ad certainly doesn't say so. And while there's no room in the ad for that level of detail (and if there were, people wouldn't read it), retargeting the ad specifically to regional users definitely improved click-through.

Rapid Experimentation

Online ads let you experiment rapidly. I pulled the field studies ad, which appeared for "requirements specification" keywords, after only 355 impressions. If the ad were going to attract a 1% click-through rate in the long run, there would be less than a 3% probability of seeing 355 exposures without a single click. That's more than enough evidence. Of course, there is a 3% chance of canning an ad that might ultimately work, but so what? Better to try something else than continue an experiment that would be a waste of money in 97% of cases.
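The arithmetic behind that estimate is a simple binomial calculation: if every impression is an independent chance of a click, the probability of zero clicks in n impressions is (1 − CTR) to the power n. A minimal sketch:

    # Probability of zero clicks in n impressions, assuming each
    # impression is an independent chance of a click.
    true_ctr = 0.01      # suppose the ad would really earn 1% click-through
    impressions = 355

    p_no_clicks = (1 - true_ctr) ** impressions
    print(f"{p_no_clicks:.1%}")   # -> 2.8%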

I'm usually opposed to the strategy of simply throwing a design at the wall to see if it sticks. For interaction design projects, it is much better to test a prototype design with a few users before implementing and launching the full design.

Ads are different for several reasons:

  • There is a clear-cut metric for whether an ad works: the click-through rate. If it's below your target (say, 1%), kill the ad. It doesn't matter why the ad doesn't work; it's toast. In contrast, a full user interface, such as a website, has substantial complexity. To improve its design, you need more detailed insights than "the site is too hard to use."
  • It is very cheap to generate alternative ads, especially for text-only ads that you can write in a few minutes. It's better to write more versions and see which one works than to agonize forever over a single ad.
  • It's also cheap to run a few hundred exposures to test an ad's click-through rate. The 355 exposures of the field study ad cost $5, which is cheaper than having a staff member even think about whether to run the ad. And, if you're willing to suffer a 10% probability of being wrong, you can differentiate between a 1% and a 0% click-through rate in just 229 exposures (the short calculation after this list shows where that number comes from). With a few thousand exposures (costing less than $50), you can make much more accurate assessments. (Update 2004: I wrote this column back when search engines still charged for exposure. Now that you pay for performance, you pay almost nothing for ads that turn out not to work, making experimentation even more cost-effective.)
  • No harm is done if you run a few ads that don't work. Nobody clicked, but also nobody got burned by your site or got a bad impression of your company. (This obviously assumes that you are not running offensive ads; just ads that people ignored.)
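As promised above, here's where the 229-exposure figure comes from: invert the zero-click formula and solve (1 − CTR) to the power n = error probability for n. A minimal sketch:

    import math

    def exposures_needed(true_ctr, error_prob):
        # Impressions after which zero clicks lets you reject an ad that
        # would really earn true_ctr, accepting error_prob odds of being
        # wrong: solve (1 - true_ctr) ** n = error_prob for n.
        return math.log(error_prob) / math.log(1 - true_ctr)

    print(round(exposures_needed(0.01, 0.10)))   # -> 229
    print(round(exposures_needed(0.01, 0.03)))   # -> 349 (the field study ad ran 355)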

Finally, it's not literally true that click-through is all that matters. You also have to account for the conversion rate of visitors into paying customers. Some ads might attract lots of clicks from a type of user unwilling to pay for your product. Ideally, you should track clicks through to the point of sale.
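To make the point concrete, here's a sketch of scoring ads on revenue per impression instead of raw click-through; all numbers are invented for illustration:

    # Hypothetical numbers: an ad with a lower click-through rate can
    # still be the better ad once conversion and revenue are counted.
    ads = {
        "ad_a": {"impressions": 5000, "clicks": 80, "sales": 2, "revenue": 900.0},
        "ad_b": {"impressions": 5000, "clicks": 40, "sales": 4, "revenue": 1800.0},
    }

    for name, a in ads.items():
        ctr = a["clicks"] / a["impressions"]
        conversion = a["sales"] / a["clicks"]
        rpi = a["revenue"] / a["impressions"]   # revenue per impression
        print(f"{name}: CTR {ctr:.2%}, conversion {conversion:.1%}, ${rpi:.2f}/impression")

    # ad_a wins on click-through (1.60% vs. 0.80%), but ad_b earns twice
    # as much per impression ($0.36 vs. $0.18).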

Whether you track click-through or sales, the point remains: You can try out many different ads and many different keyword purchases, and simply keep the ones that work.

This same method applies to a few other areas where there is a similar binary distinction between designs that work and designs that fail. One example is having site visitors sign up for a newsletter: either they do it, or they don't. The outcome is clearly defined and measurable at the time of use. As with ads, you can easily try two alternative designs and keep the one that attracts the most subscribers.
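If you want more than eyeballing, one standard way to decide whether the difference between two signup rates is real is a two-proportion z-test. A minimal sketch, with hypothetical counts:

    import math

    def two_proportion_z(hits_a, n_a, hits_b, n_b):
        # z statistic and two-sided p-value for the difference between
        # two conversion rates (standard two-proportion z-test).
        p_a, p_b = hits_a / n_a, hits_b / n_b
        pooled = (hits_a + hits_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical: design A got 120 signups from 4,000 visitors,
    # design B got 80 signups from 4,000 visitors.
    z, p = two_proportion_z(120, 4000, 80, 4000)
    print(f"z = {z:.2f}, p = {p:.4f}")   # -> z = 2.86, p = 0.0042: keep design A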

Although finding what works and what doesn't can be easy with these types of design, guessing why users don't behave the way you want is less straightforward. This complicates the choice of alternative designs, because you don't get the qualitative insights that come from usability methods that go beyond the simple numbers. For more complex design problems, simply looking at clicks is not enough.

Find out more about designing search ads from this good book on how to use Google AdWords (the same principles apply to Yahoo/Microsoft ads and other text-based search advertising): buy it from Amazon.com or, if you are based in Europe, from Amazon.co.uk.