In your journeys across the web, you have probably encountered the now ubiquitous question “How likely are you to recommend this website to a friend?” It is the question behind the net promoter score, a customer-loyalty metric derived from customers’ self-reported likelihood of recommending a service, product, or experience to friends or family, and often used as a gauge of the user experience.

Definition: The net promoter score (NPS) is a metric that quantifies how many more people are likely to strongly recommend your site or product compared to those likely to criticize it.

Calculation of NPS

NPS is computed by asking people to answer, on a 0–10 scale, the question: “How likely are you to recommend this website/product/service to a friend or relative?” The answers are then grouped into three categories:

  • Promoters: responses of 9 or 10, which indicate high satisfaction and strong likelihood of recommendation
  • Detractors: responses of 0 to 6, which indicate dissatisfaction and likely criticism
  • Passives: responses of 7 or 8, which indicate moderate satisfaction, but low likelihood of recommendation

The NPS is then calculated by subtracting the percentage of detractors from the percentage of promoters:

NPS = % promoters − % detractors

Note that the passives are included in the total number of respondents, but do not otherwise contribute to the score. The rationale is that these users may feel that their needs are fulfilled, but will not actively promote the product or service to family or friends.
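To make the arithmetic concrete, here is a minimal Python sketch of the calculation; the function name and the sample ratings are hypothetical, chosen purely for illustration:

```python
def nps(ratings):
    """Compute the net promoter score from a list of 0–10 ratings."""
    promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)  # ratings of 0 to 6
    # Passives (7 or 8) count toward the total but not toward the score.
    return 100 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives, and 2 detractors out of 10 respondents:
ratings = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(nps(ratings))  # 30.0, i.e., 50% promoters minus 20% detractors
```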

It may seem harsh to limit promoters to scores of 9 or better and to count a 6 as a detractor, even though it’s above the mathematical midpoint of the scale (5). However, these cutoff points are actually pretty reasonable because raters tend to be generous and give fairly high scores. For example, the following chart shows the average user satisfaction scores for 42 websites that we tested in 2016:

[Chart: Web user satisfaction scores for the 42 websites tested in 2016]

You can see that the vast majority of websites were rated 7 or 8: this indicates that 7 is the expected level of satisfaction on the web today. Most sites are well designed, but that is not good enough: users become active promoters when the site not only meets their expectations, but also exceeds them.

From a mathematical standpoint, you might expect the average website rating to be 5. However, in our study, the actual average score was 6.97 for the 42 sites we tested. (These sites came from most major fields of business and had various levels of usability.) In other words, given users’ tendency to be generous with their ratings, 7 is the perceived midpoint on a 0–10 scale.

Interpretation of NPS

NPS can range from -100% (only detractors) to +100% (only promoters). A positive score indicates that promoters outnumber detractors, while a negative score shows poor customer loyalty, with detractors outnumbering promoters. While that is the full range of possible outcomes, in practice the range tends to be more restricted: in a study that looked at 20 different software products, Jeff Sauro found that the NPS ranged from -26% to 40%, with an average of 15%.

History

Frederick F. Reichheld, a business strategist and author of the bestseller The Loyalty Effect, first introduced the concept of NPS in Harvard Business Review in 2003. In that article, he described a 4,000-respondent survey in which he asked several questions and tracked how well the answers correlated with a number of different measurements, including repeat purchases and recommendations to friends or family. He found that the question that best predicted customer behavior was the likelihood-to-recommend question on which NPS is based. NPS also correlated strongly with company growth over time.

Reichheld argued that the net promoter score is relevant because customer recommendations and word-of-mouth referrals are a direct driver of revenue growth in many businesses. As Reichheld put it, “When customers act as references, they do more than indicate that they’ve received good economic value from a company; they put their own reputations on the line. And they will risk their reputations only if they feel intense loyalty.” This intense loyalty ultimately saves money on marketing expenses and also raises profits over time.

How NPS Can Benefit a User-Experience Assessment

NPS is well known and liked by upper management for its strong correlation with profits and for the sheer fact that it is a quantifiable measurement of something as nebulous as customer loyalty.

NPS is also fairly easy to collect: unlike more complex instruments, it is based on a single question, and users are more likely to answer that one question than a lengthy survey.

As a result, it’s become customary to include the NPS in user interviews, surveys, or even usability-testing sessions. UX practitioners often use it as a tool to promote buy-in from their company’s senior leadership. Numbers that demonstrate increased customer loyalty (and, consequently, future sales and profits) after investing in the UX process are more likely to sway skeptical managers and executives than qualitative data. Thus, NPS can be part of a series of metrics used to measure how a redesign affected loyalty. By quantifying a site’s usability and loyalty before and after a redesign, companies can assess whether the redesign was worth the effort and whether it delivered enough return on investment (ROI).

It’s been shown that NPS is closely related to the perception of user experience. In particular, scores on more elaborate satisfaction questionnaires, such as the System Usability Scale (SUS), correlate well with NPS. Thus, if most of your customers report loyalty so strong that they would put their own reputations on the line to recommend your site, chances are that your site is also usable.

Limitations of NPS as a Usability Metric

1. NPS does not capture the full picture when used in isolation.

Usability is never entirely captured by subjective scores. We’ve seen many users struggling to complete a task, yet rating the site as highly as someone who had no difficulty whatsoever. To get a complete picture of the user experience, we recommend that you also collect performance metrics such as task success rates and task times.

NPS, like all quantitative metrics, tells you how your site is doing but not why. Asking customers to report the reason for their rating might help, but self-reporting is rarely reliable, and users might not bother to invest the time to explain their rating at all.

2. NPS is only relevant with a large enough sample size.

NPS (like any metric) is rarely meaningful with a small sample size. Running a qualitative user test with 5 users and asking for the NPS at the end is unlikely to provide you with any valid data; yet many practitioners insist on reporting these measures from small samples and base design decisions on them, disregarding the lack of statistical significance.
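One way to see the problem is to put a confidence interval around the score. The sketch below, a simple percentile bootstrap with made-up ratings (our own illustration, not part of the NPS method itself), shows how uncertain the score is with 5 respondents compared with 200 respondents in the same proportions:

```python
import random

def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

def bootstrap_ci(ratings, resamples=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the NPS."""
    scores = sorted(
        nps(random.choices(ratings, k=len(ratings)))
        for _ in range(resamples)
    )
    return (scores[int(resamples * alpha / 2)],
            scores[int(resamples * (1 - alpha / 2)) - 1])

random.seed(1)
small = [9, 7, 3, 10, 8]         # 5 respondents, point estimate +20
print(bootstrap_ci(small))       # very wide: spans on the order of 100 points
print(bootstrap_ci(small * 40))  # 200 respondents: far narrower interval
```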

3. By “binning” responses, NPS oversimplifies the data and ignores the strength of respondents’ convictions.

While other satisfaction metrics share the disadvantages mentioned above, NPS has an additional major problem: its calculation method discards much of the information in the ratings by grouping them into three bins (promoters, passives, and detractors) and then ignoring the passives. As a result, researchers must drastically increase their sample sizes to obtain any statistically reliable information, and even then they lose sight of the intensity of a respondent’s score.

For example, if we could change a design from getting mainly scores of 2 (truly hated) to getting mainly scores of 5 (somewhat disliked), we would have made a major UX improvement, but all of those users would still be counted as detractors in NPS terms, even though they had changed from being rabid detractors to being modest detractors.
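A short sketch with hypothetical ratings makes this loss of information visible: both designs below earn the worst possible NPS, even though the average rating has more than doubled:

```python
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

before = [2] * 10  # users truly hate the design
after = [5] * 10   # after a major UX improvement, users merely dislike it

print(nps(before), nps(after))            # -100.0 -100.0: NPS sees no change
print(sum(before) / 10, sum(after) / 10)  # mean rating moves from 2.0 to 5.0
```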

Customer Satisfaction vs. User Satisfaction

NPS is best used to assess overall customer satisfaction with an entire company or service, or at least with an entire product. It makes less sense to utilize NPS to assess user satisfaction with lower-level details of UI design, such as a website’s checkout process, a product page, or a specific dialog box or icon. Yet these local design elements are often what we can actually change in everyday design projects. And these details sum to form the overall impression a customer has of a company.

There are many examples of how UX design decisions impact brand perception: for instance, the specific location of a logo on the page affects whether users remember the brand at all, and the tone of voice employed on a website changes users’ inclination to recommend the company behind it.

That said, there are many more aspects to customer satisfaction and brand recommendations than the design of a website. Pricing is one obvious variable: if people feel that something is overpriced for what’s delivered, they are unlikely to recommend it, no matter how much they like the website’s design.

Let’s say that you improve the writing on some web pages or that you improve the usability of the icons in a mobile app. Don’t get too disappointed if the company’s overall NPS remains flat. It’s hard for a single localized design decision to move the needle much on overall customer loyalty. That’s why we need to supplement global NPS scores with lower-level measures of user satisfaction and task performance.

Conclusion

NPS is a strong indicator of customer loyalty and predicts revenue and company growth. It is simple to administer and understand, and is already well known in the business community. NPS correlates strongly with standard measures of user satisfaction such as SUS and can secure buy-in for usability from upper management. However, when used by itself, NPS, like any subjective metric, is fairly limited and far from being a good description of the overall user experience. But when combined with other UX metrics, NPS can help you track the usability of your site over time.


Reference

Frederick F. Reichheld. 2003. “The One Number You Need to Grow.” Harvard Business Review.