You can achieve a high-quality user interface by combining 3 design process models:

  1. Competitive testing
  2. Parallel design
  3. Iterative design

Although you should apply the methods in the sequence listed above, I'll discuss them in the opposite order here.

All 3 methods share one basic idea: there's no one perfect user interface design, and you can't get good usability by simply shipping your one best idea. You have to try (and test) multiple design ideas. Competitive testing, parallel design, and iterative design are simply 3 different ways of considering design alternatives. By combining them, you get wide design diversity at a lower cost than sticking to any single approach.

Iterative Design

I start with iterative design here because it's the

  • simplest process model (a linear progression);
  • oldest foundation for user-centered design (UCD);
  • cheapest (you often can iterate in a few hours); and
  • strongest, because you can keep going for as many iterations as your budget allows (competitive testing and parallel design are usually one-shot components of a design project).

I hardly need to define iterative design: I wrote a long paper about it 18 years ago, and the concept hasn't changed:

Iterative user interface design: conceptual process model

Iteration simply means stepping through one design version after another. For each version, you conduct a usability evaluation (such as user testing or heuristic evaluation) and revise the next version based on those usability findings.

How Many Iterations?

I recommend at least 2 iterations. These 2 iterations correspond to 3 versions: the first draft design (which we know is never good enough), followed by 2 redesigns. But my preference is 5–10 iterations or more, particularly when iterating weekly (or even more often).

Of course, one iteration (that is, a single redesign, for a total of 2 design versions) is still better than shipping your best guess without usability-derived improvements. But experience shows that the first redesign will have many remaining usability problems, which is why it's best to plan for at least 2 iterations.

More iterations are better: I've never seen anyone iterate so much that there were no usability improvements to be had from the last iterations. In my research 18 years ago, measured usability improved by 38% per iteration. These metrics came from traditional application development; if we look at websites, improvements are typically bigger. In a newer case study, the targeted KPI improved by 233% across 6 iterations (7 design versions = 6 iterations between versions), corresponding to 22% per iteration. The key lesson from this latter case study is that it's best to keep iterating, because you can keep piling on the gains.
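
To see how the per-iteration figure follows from the total, note that iteration gains compound multiplicatively, so the per-iteration rate is the 6th root of the overall improvement factor:

$$(1 + 2.33)^{1/6} = 3.33^{1/6} \approx 1.22$$

That is the 22% per iteration quoted above; running it forward, $1.22^6 \approx 3.3$ recovers the 233% total gain.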

To get many iterations within a limited budget and timeline, you can use discount usability methods: create paper prototypes for the early design versions, planning about 1 day per iteration. In later versions, you can gradually proceed to higher-fidelity renderings of the user interface, but there's no reason to worry about fine details of the graphics in early stages, when you're likely to rip the entire workflow apart between versions.

Simple user testing (5 users or fewer) will suffice, because you'll conduct additional testing in later iterations.
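
As a rough model of why so few users suffice, the commonly used problem-discovery formula estimates the share of usability problems found by $n$ test users as

$$P(n) = 1 - (1 - L)^n$$

where $L$ is the probability that a single user exposes any given problem. With the typical value $L \approx 0.31$, 5 users already uncover $1 - 0.69^5 \approx 85\%$ of the problems, and later iterations catch most of the rest.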

Limitations of Iterative Design?

A classic critique of iterative design is that it might encourage hill-climbing toward a local maximum rather than discovering a superior solution in a completely different area of the design space.

My main comeback? "So what." The vast majority of user interface design projects are pedestrian attempts to build an e-commerce site, an employee directory, or some similar design for which a massive number of best practices are already well documented.

Yes, some design problems are radically new — say, the original design of the iPad or the Kinect. And some are semi-radical, such as designing a first-generation iPad app or Kinect game (after the platform itself had already been designed, but before the corresponding usability guidelines had been documented). But most design problems belong to categories with numerous existing designs.

Of course, superior solutions that exceed current best practice are possible; after all, we haven't seen the perfect user interface yet. But most designers would be happy to nearly double their business metrics. Simply polishing a design's usability through iterative design has extremely high ROI and is often preferable to the larger investment needed for higher gains.

Parallel Design

Although I remain a strong fan of iterative design, it's true that it limits us to improving a single solution. If you start out in the wrong part of the design space, you might not end up where you'd really like to go.

To avoid this problem, I prefer to start with a parallel design step before proceeding with iterative design, as this diagram shows:

Parallel user interface design: conceptual process model

In a parallel design process, you create multiple alternative designs at the same time. You can do this either by encouraging a single designer to really push their creativity or by assigning different design directions to different designers, each of whom makes one draft design.

In any case, to stay within a reasonable budget, all parallel versions should be created quickly and cheaply. They don't need to embody a complete design of all features and pages. Instead, for a website or intranet, you can design maybe 10 key pages and, for an application, you can design just the top features. Ideally, you should spend just a few days designing each version and refine them only to the level of rough wireframes.

You should create a minimum of 3 different design alternatives, but it's rarely worth the effort to design many more; 5 is probably the maximum.

Once you have all the parallel versions, subject them to user testing. Each test participant can test 2 or 3 versions. Any more and users get jaded and can't articulate the differences. Of course, you should alternate which version they test first, because users are only fresh on their first attempt. When they try the second or third UI that solves the same problem, people inevitably transfer their experience from using the previous version(s). Still, it's worth having users try a few versions so that they can do a compare-and-contrast at the end of the session.
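
As a concrete illustration of the ordering logic, here's a minimal sketch in Python (the function name and session setup are my own invention, purely for illustration, not part of any standard testing tool). It rotates which version each participant sees first, so every design gets an equal share of "fresh" first exposures:

```python
def counterbalanced_orders(versions, num_participants):
    """Assign each participant a rotated presentation order so that
    every version appears in every position (first, second, ...)
    equally often. A simple rotation scheme; for a fully balanced
    study, make num_participants a multiple of len(versions)."""
    n = len(versions)
    orders = []
    for p in range(num_participants):
        start = p % n  # rotate the starting version for each participant
        orders.append([versions[(start + i) % n] for i in range(n)])
    return orders

# Example: 3 parallel designs, 6 participants
for participant, order in enumerate(counterbalanced_orders(["A", "B", "C"], 6), start=1):
    print(f"Participant {participant}: test order {order}")
```

With 3 versions and a participant count that's a multiple of 3, each design is seen first by exactly a third of the participants.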

After user testing, create a single merged design, taking the best ideas from each of the parallel versions. The usability study is not a competition to identify "a winner" from the parallel designs. Each design always has some good parts and some that don't hold up to the harsh light of user testing.

Finally, proceed with iterative design (as above) to further refine the merged design.

15 years ago, I conducted a research study of parallel design in which we tried and evaluated 3 different process alternatives:

  • Out of 4 parallel versions, simply pick the best one and iterate on it. This approach resulted in measured usability 56% higher than the average of the original 4 designs.
  • Follow the recommended process and use a merged design instead of picking a winner. Here, measured usability was 70% higher, an additional 14 percentage points gained by including the best ideas from the "losing" designs.
  • Continue iterating from the merged design. After one iteration, measured usability was 152% higher than the average of the original designs. (So the extra iteration added 48% usability to the merged design, calculated as 2.52/1.70; see the arithmetic spelled out below. This is within the expected range of gains from iterative design.)

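Spelling out that last calculation: usability scores in this study are expressed as multiples of the original designs' average, so the gain contributed by the final iteration is the ratio of the two improvement factors:

$$\frac{1 + 1.52}{1 + 0.70} = \frac{2.52}{1.70} \approx 1.48$$

That is, the extra iteration improved the merged design by about 48%.
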
Of course, there was no reason to stop with one iteration after the merged design; I simply ran out of budget. As always, I'd recommend at least 2–3 iterations.

My study was in the domain of traditional application development. In a recent study, Steven P. Dow and his colleagues from Stanford University took this approach to the domain of Internet advertising. For the Stanford study, a group of designers created banner advertisements for a social media site, aiming to optimize the click-through rate (CTR). Ads created through a parallel design process achieved 0.055% CTR, whereas ads created without parallel design achieved 0.033% CTR. So, parallel design performed 67% better. They recorded these scores over the first 5 days of the advertising campaign.

Over the full 15-day campaign, parallel-design ads scored 0.045% CTR compared with 0.040% CTR for nonparallel-design ads. Over this longer campaign, parallel design was only 12% better.
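
For reference, both percentages follow directly from the CTR ratios:

$$\frac{0.055\%}{0.033\%} \approx 1.67 \qquad\qquad \frac{0.045\%}{0.040\%} = 1.125$$

A 67% advantage over the first 5 days shrinks to roughly 12% over the full campaign.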

We've long known that people tend to screen out Web ads. This might imply that it's best to constantly launch new ads and run very short campaigns with each, though I'd like to see more data before forming a firm conclusion on this point.

So, even though the conclusions are less strong for ads than for apps, the bottom line is the same: parallel design generates better outcomes.

(Learn more about parallel design in our full-day Effective Ideation Techniques for UX Design course.)

Competitive Testing

In a competitive usability study, you test your own design and 3–4 other companies' designs. The process model looks the same as for parallel design, except that the original design alternatives are pre-existing sites or apps as opposed to wireframes you create specifically for the study.

The benefit of competitive testing is also the same as for parallel design: you gain insight into user behaviors with a broad range of design options before you commit to a design that you'll refine through iterative design.

Competitive testing is also advantageous in that you don't spend resources creating early design alternatives: you simply pick from among the ones available on the Web (assuming you're doing a website; competitive testing doesn't work for intranets and other domains where you can't easily get your hands on other companies' designs).

Just as with parallel design, a competitive test shouldn't simply be a benchmark to anoint a "winner." Sure, it can get the competitive juices flowing in most companies to learn that a hated competitor scores, say, 45% higher on key usability metrics. Such numbers can spur executive action. But as always, quantitative measurements provide weaker insights than qualitative research. A more profitable goal for competitive studies is to understand why and how users behave in certain ways; learn what features they like or find confusing across a range of currently popular designs; and discover opportunities to serve unmet needs.

Many design teams skip competitive testing because of the added expense of testing several sites. (For example, Nielsen Norman Group currently charges $45,000 for most competitive testing and only $22,000 to test a single website. Of course, you can get cheaper tests from Tier-2 or Tier-3 usability firms, but they'll still charge more for bigger studies.) But this step is well worth the cost because it's the best way to gain deep insights into users' needs before you attempt to design something to address these needs.

Competitive testing is particularly important if you're using an Agile development methodology because you often won't have time for deeper explorations during individual sprints. You can do the competitive study before starting your development project because you're testing existing sites instead of new designs. You can later reach back to the insights when you need to make fast decisions during a sprint. Insights from pre-project competitive testing thus serve as money in the bank that you can withdraw when you're in a pinch.

Exploring Design Diversity Produces Better UX

All 3 methods — iterative, parallel, and competitive — work for the same reason: Instead of being limited to your one best idea, you try a range of designs and see which ones actually work with your customers in user testing.

The methods offer different ways of helping you explore diverse designs and pushing you to move in different directions. This is important because there are so many dimensions in interaction design that the resulting design space is humongous.

In the ideal process, you'd first conduct competitive testing to get deep insights into user needs and behaviors with the class of functionality you're designing. Next, you'd proceed to parallel design to explore a wide range of solutions to this design problem. Finally, you'd go through many rounds of iterative design to polish your chosen solution to a high level of user experience quality. And, at each step, you should be sure to judge the designs based on empirical observations of real user behavior instead of your own preferences. (Repeat after me: "I am not the Audience.")

Combining these 3 methods prevents you from being stuck with your best idea and maximizes your chances of hitting on something better.