“Do not solve the problem that’s asked of you. It’s almost always the wrong problem.” – Don Norman

Designers have often been told to focus on outcomes, not features, so that they solve the right problem instead of building the wrong thing. While this rule has been accepted and practiced within UX design and product planning, it is too often forgotten when it comes to digital analytics. Just as building a feature to address the wrong problem will surely fail, tracking the wrong metric will prove meaningless.

Always Ask “Why?”

When determining what to measure on an ongoing basis, it is imperative to fully understand the ultimate goal of the design to be sure you choose an appropriate metric. If you don’t clearly define the goal, it is impossible to determine the best method to track performance or user experience.

Just as we should ask “Why?” when someone suggests a new design feature, we need to question and probe more deeply before deciding to track or report on a certain metric. What do we really want to know? What action could we take if we knew that number or rate? So much time is wasted on merely reporting metrics. There are countless things to track, and, without homing in on the true goal of measurement, no metric is likely to provide a meaningful or actionable insight.

For example, if a video has been added to a page, a natural event that we may think of tracking is each time the video is played. Before simply reporting this stat, we should ask: Why do we want to track when the video is played? Because we want to be sure it is noticeable on the page. Why? Because we want people to watch and learn from the video content. Why? Because we think it will convince people to contact us for a quote. Aha! (Those of you familiar with ideation techniques will recognize this as the “toddler” strategy — a useful method for digging down to root causes.) So now, rather than only tracking the number of video plays, we know that the useful metrics are:

  • the percentage of unique plays given the number of unique page views (to determine whether a satisfactory proportion of page visitors discover and play the video, disregarding any repeat page views or repeated playing of the video)
  • the percentage of video watchers who then go on to complete the contact form, and how it compares with the conversion rate of those who don’t watch the video (to determine whether the video content is persuasive and effective)
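As a rough sketch of how these two metrics might be computed, suppose you can export raw event data from your analytics tool as one record per hit. The field names (`user`, `event`) and event labels here are assumptions for illustration, not any real tool’s schema:

```python
# Hypothetical exported analytics events: one dict per hit.
# Field and event names are illustrative, not a real tool's schema.
events = [
    {"user": "u1", "event": "page_view"},
    {"user": "u1", "event": "video_play"},
    {"user": "u1", "event": "contact_form_submit"},
    {"user": "u2", "event": "page_view"},
    {"user": "u3", "event": "page_view"},
    {"user": "u3", "event": "video_play"},
]

def users_with(event_name, events):
    """Unique users who triggered a given event at least once."""
    return {e["user"] for e in events if e["event"] == event_name}

viewers = users_with("page_view", events)
players = users_with("video_play", events)
converters = users_with("contact_form_submit", events)

# Metric 1: unique plays as a share of unique page viewers
# (repeat views and repeat plays are ignored by using sets)
play_rate = len(players) / len(viewers)

# Metric 2: conversion rate of watchers vs. non-watchers
watcher_conversion = len(players & converters) / len(players)
non_watchers = viewers - players
non_watcher_conversion = len(non_watchers & converters) / len(non_watchers)
```

Working with sets of unique users, rather than raw hit counts, is what makes these rates answer the “Why?” questions above instead of merely tallying plays.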

Analyzing this information is a much more meaningful evaluation of the video’s effectiveness than simply counting the number of plays, and is something that would easily be overlooked if no one had stopped to ask why.

Define Goals, Then Decide on Metrics

A common mistake when deciding what to measure is to define the goal in terms of a metric. The real answer to why is never to simply see the data. Never track metrics merely for the sake of measuring something. Analytics data by itself is not noteworthy, but rather is the means of getting insight into the current user experience.

While determining what to measure for a task or feature, don’t get bogged down by the knowledge of which metrics exist in your analytics tool or what you think is possible to measure. Think top-down, from goals to metrics, instead of bottom-up, from metrics to insights. A good way to approach this process is to go sequentially through the following steps:

  1. Ask: what is the goal of the feature or design element in question? What do you want to ensure about the user experience of that feature or functionality? Only once this big-picture goal is defined should you look for concrete metrics that relate to that goal.

    For example, it is not a good goal to aim to reduce the bounce rate for a page (or set of pages). Why do you care about the bounce rate? What experience do you aim to provide for users landing on that page? The true goal is probably to ensure that the content is engaging and helpful, and that the right people reach the right pages on your site from search engines.

  2. Once this true goal has been identified, determine user behaviors that act as signals for that goal: for example, whether people engage with the content, or whether they reach an appropriate page and match your target audience. These signals need not specify a method of measurement, but should instead be behavioral cues that would tell an observer whether the goal has been reached.

    For our bounce-rate example, if you could observe a user during a usability test, what would you look for to tell whether she engages with the content? Perhaps she interacted with a widget within the page, or zoomed in on a photo, or read to the bottom of the page before deciding to click back to Google.

    Or, if you’re interested in building loyalty by providing people with fast answers in a one-page visit, the behavioral cue might be whether users who bounce ever come back to your site for more answers in the future.

  3. Depending on the number of signals you identify, you may narrow the list to only the strongest signal(s) to simplify your measurement plan and avoid splitting your focus between too many alternate ways of identifying the same behavior.

  4. Define ways of measuring the signals in step 3. For example:

    • use a scroll heatmap to gauge how far down the page most people scroll and presumably read
    • use event tracking to count interactions with in-page widgets or photo elements
    • look at the referral sources or search keywords that brought people to that landing page — are those appropriate given the content and purpose of that page?
    • track the rate of return visits, even specifically returns from those who have bounced, or look at the overall frequency and recency of your audience to assess loyalty
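The last measurement method, the return rate among users who bounced, could be sketched from exported session data as follows. The session tuples and the one-page-session definition of a bounce are simplifying assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical session export: (user, visit_index, pages_in_session).
# A session with a single page view is treated as a bounce (a simplification).
sessions = [
    ("u1", 1, 1), ("u1", 2, 3),  # bounced once, then returned
    ("u2", 1, 1),                # bounced and never came back
    ("u3", 1, 4), ("u3", 2, 2),  # never bounced
]

visits = defaultdict(list)
for user, idx, pages in sessions:
    visits[user].append((idx, pages))

# Users who bounced at least once...
bouncers = {u for u, v in visits.items()
            if any(pages == 1 for _, pages in v)}
# ...and, of those, who came back for another session
returned = {u for u in bouncers if len(visits[u]) > 1}

bouncer_return_rate = len(returned) / len(bouncers)
```

A high return rate among bouncers would support the loyalty interpretation from step 2: those one-page visits were fast answers, not failures.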

These more specific metrics will get you much closer to evaluating the true performance of the UI than a surface-level metric chosen without fully reflecting on the deeper goal.

Track the User Experience

Don’t lose sight of the target of many redesigns and new features: to make a task easier, or help save people’s time. These user-experience goals should remain at the forefront when choosing how to measure the success of a design.

If a new timesaving feature is added to an interface, such as a bulk edit or bulk-upload function within a form or process, then the true success of the feature is tied to whether it actually saves people’s time! So, rather than simply counting how often the new feature is used or how many users have adopted it, focus on whether there is any timesaving benefit for users. Has the time on task decreased? Have the completion rates increased, signaling that the form is easier to submit? Do people make fewer errors compared to when they input items individually? Perhaps this redesign has increased engagement and overall usage because the task is no longer as tedious as it once was!
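A minimal sketch of this before-and-after comparison, assuming you can log task duration and completion per attempt (the data and field layout here are invented for illustration):

```python
# Hypothetical task logs: (seconds_to_complete, completed?).
# "before" is the old item-by-item flow; "after" is the new bulk-edit flow.
before = [(95, True), (120, True), (140, False), (110, True)]
after  = [(60, True), (75, True), (80, True), (70, True)]

def summarize(tasks):
    """Completion rate, plus time on task for completed attempts only."""
    times = sorted(t for t, done in tasks if done)
    return {
        "completion_rate": len(times) / len(tasks),
        # Upper median, kept simple for the sketch
        "median_time": times[len(times) // 2],
    }

old, new = summarize(before), summarize(after)
time_saved = old["median_time"] - new["median_time"]
```

Note that time on task is computed only over completed attempts; mixing in abandoned attempts would conflate the two questions (is it faster? is it easier to finish?) that the text treats separately.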

When calculating returns to report on the usability ROI of a new design, these performance-focused metrics will provide insight into how the design benefits users. Improving on these measures will, in turn, build more loyalty and drive more business.

Conclusion

Focusing on the high-level, user-experience goals rather than jumping directly to what may be simple to track is critical to meaningful measurement. If these deeper goals are not considered during metric selection and data reporting, significant statistics will be overlooked. Good design is not about getting more clicks or more page views, so why would we measure success that way?

Learn more about identifying meaningful metrics to track user-experience goals in our full-day training course on Analytics and User Experience.