Prioritizing work into a roadmap can be daunting for UX practitioners. Prioritization methods base these important decisions on objective, relevant criteria instead of subjective opinions.

This article outlines 5 methods for prioritizing work into a UX roadmap:

  1. Impact–effort matrix 
  2. Feasibility, desirability, and viability scorecard
  3. RICE method
  4. MoSCoW analysis 
  5. Kano model

These prioritization methods can be used to prioritize a variety of “items,” ranging from research questions, user segments, and features to ideas and tasks. This article focuses on using these methods within the context of roadmapping—prioritizing problems that need to be solved into a strategic timeline. 

1. Impact–Effort Matrix

1.A. Overview

An impact–effort matrix is a 2D visual that plots relative user value against implementation complexity. Variations of this matrix are used across many product-development approaches, including Six Sigma, design thinking, and Agile.

An impact–effort matrix assigns items to one of four quadrants: quick wins, big bets, fill-ins, and money pits.

The resulting matrix captures the relative effort necessary to implement candidate features and their impact on the users. It can be subdivided into four quadrants: 

  1. Quick wins include low-effort, high-impact items that are worth pursuing. 
  2. Big bets include high-effort, high-impact items; they should be carefully planned and prototyped, and, if executed, are likely to be differentiators against competitors. 
  3. Money pits include low-impact, high-effort items that are not worth the business investment; there are better places to spend time and resources. 
  4. Fill-ins comprise low-effort, low-impact items that may be easy to implement but may not be worth the effort as their value is minimal. 

A comparative matrix is a malleable tool. While we discuss impact–effort matrices in this article, you can easily replace each axis with other criteria or use multiple matrices to assess more than two criteria. When setting up multiple matrices, position the Quick Wins quadrant (or whatever the equivalent best-outcome quadrant is) in the same spot on every matrix (for example, always in the bottom left), so you can easily compare the matrices and identify the items that consistently fall in the best-outcome quadrant. 

1.B. Criteria

This prioritization method uses two primary criteria to rank features that are considered for implementation: the impact that the feature will have on the end user and the effort required to implement that feature. 

  • Impact is the value the item will bring to the end user. The level of impact an item will have on end users depends on the users’ needs, their alternatives, and the severity of the pain point the item solves.
  • Effort is the amount of labor and resources required to solve the problem. The more technically complex the item, the higher effort it will require.

1.C. Process

Items are gathered on a whiteboard and their relative scores on the impact and effort dimensions are established through voting. Team members are given colored dots (one color per dimension) to vote for those items that they consider to rate highly on one or both dimensions.  

A general rule of thumb is that the number of votes per person is half the number of items being prioritized. It’s also possible that certain team members vote on a single dimension, according to their expertise — for example, UX professionals may rank impact, while developers may rank implementation effort.

The result of each team member’s votes is a heat map: team members assign a vote to the items they believe rank highest within their domain of expertise. For example, developers may have yellow dots and rank effort, while designers may have orange dots that represent impact on the user.

After team members have silently voted on items, the items can be placed collaboratively on an effort–impact matrix (the x-axis represents effort, while the y-axis represents impact) according to the number of impact and effort votes received. 
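To make the placement step concrete, here is a minimal sketch in Python. The item names, vote counts, and the median split used to separate “high” from “low” are all illustrative assumptions, not part of the method; in practice, placement is negotiated on the whiteboard rather than computed.

```python
from statistics import median

# Hypothetical vote tallies: item -> (impact votes, effort votes)
votes = {
    "Bulk export": (8, 7),
    "Dark mode": (3, 2),
    "SSO integration": (9, 3),
    "Animated onboarding": (2, 8),
}

# Illustrative assumption: split "high" vs. "low" at the median vote count
impact_cut = median(v[0] for v in votes.values())
effort_cut = median(v[1] for v in votes.values())

def quadrant(impact: float, effort: float) -> str:
    """Map an item's impact and effort votes to one of the four quadrants."""
    if impact >= impact_cut:
        return "Quick win" if effort < effort_cut else "Big bet"
    return "Fill-in" if effort < effort_cut else "Money pit"

for item, (impact, effort) in votes.items():
    print(f"{item}: {quadrant(impact, effort)}")
```

The thresholds simply make the quadrant logic explicit; the team discussion that follows matters more than the exact cutoffs.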

Once everything is placed onto the chart, discuss the results and compare items, prioritizing those in the quick-wins and big-bets quadrants. Feel free to use the artifact as a platform for negotiation — throughout discussion with the team, it’s okay to collaboratively move items. However, at the end, there should be agreement on the final placement and the artifact should be documented and saved so it can easily be referenced in the future. 

1.D. Best for Quick, Collaborative Prioritization

An impact–effort matrix is best suited for quick, collaborative prioritizations. The method has a few advantages:

  • The output is a shared visual that aligns mental models and builds common ground.
  • It is democratic — each person can express their own opinion through a vote.
  • It can be done relatively quickly due to its simplicity. 

2. Feasibility, Desirability, and Viability Scorecard 

2.A. Overview

This method was developed by IDEO in the early 2000s. It ranks items based on a sum of individual scores across three criteria: feasibility, desirability, and viability. 

Create a table with items in each row and the criteria in each column, where items’ individual scores can be documented and added for a total score. Total scores are then compared, discussed, and reorganized to determine the final prioritization. The items with the highest overall scores best satisfy the prioritization criteria (in this case, desirability, feasibility, and viability). 

2.B. Criteria 

This prioritization method uses three criteria to rank items (i.e., features to be implemented):

  • Feasibility: the degree to which the item can be technically built. Does the skillset and expertise exist to create this solution?
  • Desirability: how much users want the item. What unique value proposition does it provide? Is the solution fundamentally needed, or are users otherwise able to accomplish their goals? 
  • Viability: whether the item is worthwhile for the business. Does pursuing the item benefit the business? What are the costs to the business, and is the solution sustainable over time? 

2.C. Process

Create a table, with one row for each possible item, and columns for the 3 criteria — feasibility, desirability, and viability. Then, determine a numeric scoring scale for each criterion. In the example above, we used a numeric scale from 1 to 10, with 1 being a low score. 

Next, give each item a score across each criterion. Scoring should be as informed as possible — aim to include team members who have complementary expertise. Once each item is scored across each criterion, calculate its total score and force a rank. Sort the table from highest to lowest total score, then discuss the results with your team. 
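As a rough illustration of the arithmetic, the sketch below tabulates and ranks a handful of made-up items on the 1–10 scale; the item names and scores are hypothetical.

```python
# Hypothetical items scored 1-10 on each criterion (1 = low)
scores = {
    "In-app chat":    {"feasibility": 6, "desirability": 9, "viability": 7},
    "Offline mode":   {"feasibility": 3, "desirability": 8, "viability": 5},
    "Custom reports": {"feasibility": 8, "desirability": 5, "viability": 8},
}

# Total each item's scores, then force a rank from highest to lowest total
totals = {item: sum(criteria.values()) for item, criteria in scores.items()}
for item, total in sorted(totals.items(), key=lambda pair: pair[1], reverse=True):
    print(f"{item}: {total}")
```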

2.D. Best for Customized Criteria 

This scorecard format is highly customizable. You can add columns to reflect criteria specific to your organization’s context and goals. You can also replace the criteria with others relevant to you. For example, the NUF Test, created by Dave Gray, uses the same scorecard format, but with New, Useful, Feasible as the criteria set. 

Another common modification is assigning weights to the different criteria, so that the most important ones count more heavily in the final score. 
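A weighted variant of the same calculation could look like the following sketch; the weights are arbitrary assumptions that your team would agree on in advance.

```python
# Hypothetical weights: desirability counts double in this team's context
weights = {"feasibility": 1.0, "desirability": 2.0, "viability": 1.0}

def weighted_total(criteria: dict) -> float:
    """Multiply each criterion score by its weight, then sum the results."""
    return sum(score * weights[name] for name, score in criteria.items())

print(weighted_total({"feasibility": 6, "desirability": 9, "viability": 7}))  # 31.0
```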

3. RICE Method

3.A. Overview

RICE is a prioritization framework developed by Intercom. It prioritizes which features to implement by taking four factors into account: reach, impact, confidence, and effort.

The RICE method ranks items by multiplying Reach (the number of users the item affects) by Impact (the result the item has on users) and Confidence (how much validation you have for your estimates). The result is divided by Effort (the amount of work it will take to implement the item) to obtain the item’s final score. 

3.B. Criteria 

The RICE method is based on scoring each item on 4 different dimensions:

  • Reach: the number of users the item affects within a given time period 
  • Impact: the value added to users 
  • Confidence: how confident you are in your estimates of the other criteria (for example, highly confident if multiple data sources support your evaluation) 
  • Effort: the amount of work necessary to implement the item 

3.C. Process

Using the RICE method is straightforward. Separate scores are assigned for each criterion, then an overall score is calculated. 

  • A reach score is often estimated by looking at the number of users per time period (e.g., week, year); ideally, this number is pulled from digital analytics or frequency metrics.
  • The impact score should reflect how much the item will increase delight or alleviate friction; because it is hard to calculate precisely, it is usually assigned through a vote (like in the previous methods), often on a scale from 0.25 (low) to 3 (high).  
  • The confidence score is a percentage that represents how much you and your team trust the reach and impact scores. 100% represents high confidence, while 25% represents wild guesses. 
  • The effort score is calculated in “person-months”: the amount of time it will take all team members to complete the item. For example, an item is 6 person-months if it would require 3 months of work from a designer and 1 month from each of 3 separate developers.  

Once you have each of the 4 criterion scores, use the formula to calculate the final score for each item: multiply the reach, impact, and confidence scores and divide the result by the effort score. Then compare, discuss, and reevaluate all the items’ scores with your team.  
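For example, here is a minimal sketch of the calculation in Python; the item and its scores are hypothetical, and the scales follow the descriptions above.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (reach * impact * confidence) / effort."""
    return (reach * impact * confidence) / effort

# Hypothetical item: reaches 800 users per quarter, impact 2 on the 0.25-3 scale,
# 80% confidence in those estimates, and 4 person-months of effort
print(rice_score(reach=800, impact=2, confidence=0.8, effort=4))  # 320.0
```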

3.D. Best for Technically Oriented Teams

The RICE method works well for organizations that are more technical in nature (for example, when stakeholders are comfortable with equations or spreadsheets). The RICE method also works well when there are many items that need to be prioritized. Consider including peers with diverse domains of expertise in the RICE process and assign them the task of calculating the score for the criterion that relates to their expertise. 

4. MoSCoW Analysis

4.A. Overview

MoSCoW analysis is a method for clustering items into four primary groups: Must Have, Should Have, Could Have, and Will Not Have. It was created by Dai Clegg and is used in many Agile frameworks. 

MoSCoW analysis groups items into four categories: Must have (items that are essential to the project), Should have (items that are very important, but not essential), Could have (items that are nice to have), and Will not have (items that aren’t needed). 

4.B. Criteria

This prioritization approach groups items into four buckets: 

  • Must have: items that are vital to the product or project. Think of these as required for anything else to happen. If these items aren’t delivered, there is no point in delivering the solution at all: the product won’t work, a law will be broken, or the project will be useless. 
  • Should have: items that are important to the project or context, but not absolutely mandatory. These items support core functionality (that will be painful to leave out), but the project or product will still work without them. 
  • Could have: items that are not essential, but wanted and nice to have. They have a small impact if left out. 
  • Will not have: items that are not needed. They don’t present enough value and can be deprioritized or dropped. 

4.C. Process

MoSCoW analysis can be applied to an entire project (start to finish) or to a project increment (a sprint or specific time horizon). 

Begin by identifying the scope you are prioritizing items for. If your goal is to create a UX roadmap, you’ll usually have to prioritize across three time horizons: now (work occurring in the next 2 months), next (work occurring in the next 6 months), and future (work occurring in the next year). 

Compile the items being prioritized and give each team member 3 weighted voting dots (one dot with a 1 on it, one with a 2, and one with a 3). Ask team members to assign their dots to the items they believe are most important, with the 3 dot carrying the most weight.

Each team member places weighted votes on the items they believe are Must Haves for the roadmap time horizon, resulting in a score for each item. In the example above, each team member is given three voting dots (a 1, a 2, and a 3) and places them on the three items they believe should have the highest priority, with 3 marking the highest priority among them. 

Add up each item’s score based on the ranked votes (3 = 3 points and so forth). Identify the items with the highest scores and make sure that everybody in the group agrees on their importance. 
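A minimal sketch of the tally, with hypothetical team members, items, and votes: each dot contributes its weight to the item it was placed on.

```python
from collections import defaultdict

# Hypothetical weighted votes: each member places dots worth 1, 2, and 3 points
votes = [
    ("alex",  {"Password reset": 3, "Audit log": 2, "CSV import": 1}),
    ("brook", {"Password reset": 3, "CSV import": 2, "Audit log": 1}),
    ("casey", {"Audit log": 3, "Password reset": 2, "CSV import": 1}),
]

totals = defaultdict(int)
for _member, dots in votes:
    for item, weight in dots.items():
        totals[item] += weight

# Items with the highest totals are candidates for the Must Have group
for item, total in sorted(totals.items(), key=lambda pair: pair[1], reverse=True):
    print(f"{item}: {total}")
```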

As each item is discussed and agreed upon as a Must Have, move it to a new dedicated space. Repeat this process for lower-priority items and assign them to the Should Have, Could Have, and Will Not Have groups based on their scores.

Once you have assigned each item to one of the four groups, establish the resources and bandwidth required for each group, starting with the Must Haves. Keep track of the total bandwidth and resources at your disposal, distributing and allocating your total amount across Must Haves (which should get the most resources), Should Haves (with the second most resources), and finally Could Haves (with few resources).  

There is not a clear threshold for how many items should be in each group. To determine this number, return to the goal of the prioritization activity. For example, if you are prioritizing items in a backlog, there is only time for so many tasks to be achieved in one sprint. In this scenario, all Must Haves should be easily achievable within one sprint; this constraint will limit how many items can be placed within this group.  

Once team members have voted, calculate a score for each item and group the items into Must Haves, Should Haves, Could Haves, and Will Not Haves. Make sure to discuss as a group whether the resulting hierarchy makes sense to all involved. 

4.D. Best for Teams with Clear Time Boxes

MoSCoW is a good prioritization method for teams looking for a simplified approach (given the relatively vague prioritization criteria) and with a clear time box identified for the work. Without a clearly scoped timeline for completing the work, teams run the risk of overloading the Must Haves (of course, everything will feel like a Must Have if the timeline is the next two years!). 

5. Kano Model

5.A. Overview

The Kano model was published by Dr. Noriaki Kano in 1984 and is a primary prioritization method in the Six Sigma framework. Items are grouped into four categories according to user satisfaction and functionality and plotted on a 2D graph. 

The Kano model is a graph with 4 trajectories based on functionality and customer satisfaction. Items are scored on these two dimensions and clustered into 4 groups: Attractive, Performance, Indifferent, and Must be.

5.B. Criteria 

This prioritization method uses two primary criteria to rank items: functionality and satisfaction. 

  • Functionality represents the degree to which the item can be implemented by the company. It can have 5 possible values ranging from -2 to 2:
    • None (-2): the solution cannot be implemented
    • Some (-1): the solution can be partly implemented
    • Basic (0): the solution’s primary functions can be implemented, but nothing more 
    • Good (1): the solution can be implemented to an acceptable degree
    • Best (2): the solution can be implemented to its full potential 
  • Customer satisfaction for each item is also assessed on a spectrum from -2 to 2:
    • Frustrated (-2): the solution causes additional hardship for the user
    • Dissatisfied (-1): the solution does not meet users’ expectations
    • Neutral (0) 
    • Satisfied (1): the solution meets users’ expectations
    • Delighted (2): the solution exceeds users’ expectations

5.C. Process

Each item is first assigned a satisfaction score and a functionality score. The satisfaction score should be based on user data — for example, on existing user research or on a top-task user survey asking users to rate the importance of each feature; the functionality score can be rooted in the collective expertise of the team.  

These scores are then used to plot items onto a 2D graph, with the x-axis corresponding to functionality and the y-axis to satisfaction. Each axis goes from -2 to 2. 

Each score combination maps back to one of the 4 categories in the Kano model, as shown in the assignment diagram above.

Based on their scores and placement on the graph, items fall into one of four categories: 

  1. The Attractive category (often called Excitement) contains items that are likely to bring a considerable increase in user delight. A characteristic of this category is the disproportionate increase in satisfaction relative to functionality. Your users may not even notice these items’ absence (because they weren’t expected in the first place), but with good-enough implementation, user excitement can grow exponentially. The items in the Attractive category are those with a satisfaction score of 0 or better; they appear above the blue Attractive line in the Kano illustration above.
  2. The Performance category contains items that are utilitarian. Unlike in the other categories, satisfaction grows proportionately with functionality: the more you invest in items within this category, the more customer satisfaction they are likely to prompt. The items in the Performance category have equal satisfaction and functionality scores and fall on the green line in the Kano illustration above.  
  3. The Indifferent category contains items that users feel neutral towards — satisfaction does not significantly increase or decrease with their functionality and is always 0. Regardless of the amount of investment put into these items, users won’t care. These items are all placed on the dark blue Indifference line (which overlaps with the x-axis). 
  4. The Must-be category contains basic items that are expected by users. Users assume these capabilities exist. They are unlikely to make customers more satisfied, but without them, customers will be disproportionately dissatisfied. Items fall into the Must-be category when their satisfaction score is 0 or worse. These are the items in the purple area of the Kano diagram, below the purple Must Be line.

Once items are assigned to groups, make sure that everybody in the team agrees with the assignment. Items with scores of (0,0), (-2,0), and (+2,0) may initially belong to two groups. In these cases, discuss the item and ask yourself whether user value will grow proportionately with your team’s investment. If the answer is yes, group the item with Performance; if not, group it with Indifferent.
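To make the grouping concrete, here is a minimal sketch that assigns a category from the two scores. It assumes one way of resolving the overlapping boundaries described above (points on the diagonal are Performance, points on the x-axis are Indifferent, and the three ambiguous combinations are flagged for team discussion); the function name and this exact rule set are illustrative, not part of the Kano model itself.

```python
AMBIGUOUS = {(0, 0), (-2, 0), (2, 0)}  # combinations that may belong to two groups

def kano_category(functionality: int, satisfaction: int) -> str:
    """Assign a Kano category from functionality and satisfaction scores (-2 to 2)."""
    if (functionality, satisfaction) in AMBIGUOUS:
        return "Discuss"      # resolve by asking whether value grows with investment
    if satisfaction == functionality:
        return "Performance"  # satisfaction grows proportionately with functionality
    if satisfaction == 0:
        return "Indifferent"  # users don't care either way
    if satisfaction > 0:
        return "Attractive"   # disproportionate increase in delight
    return "Must be"          # expected; absence disproportionately dissatisfies

print(kano_category(functionality=1, satisfaction=2))   # Attractive
print(kano_category(functionality=2, satisfaction=-1))  # Must be
```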

Move items as needed, then prioritize items into your roadmap. Items in the Performance category should have the highest priority, followed by Must be, Attractive, then Indifferent.

5.D. Best for Forcing a User-Centric Prioritization 

The Kano model is a good approach for teams who have a hard time prioritizing based on the user — often due to politics or a traditional development-driven culture. The Kano model introduces user research directly into the prioritization process and mandates discussion around user expectations.  

Conclusion

There are many more prioritization methods, aside from the five mentioned in this article. (It’s also easy to imagine variations on these 5.) One method is not better than another. Consider your project’s context, team culture, and success criteria when choosing a prioritization approach. 

Once you find an approach that works, don’t be afraid to iterate: adjust and adapt it to fit your needs or appeal to your team. Involve others in this process. The best prioritization methods are the ones that everyone on your team, including stakeholders, buys into. 

 

References

McBride, S. (2018). RICE: Simple prioritization for product managers. Intercom. https://www.intercom.com/blog/rice-simple-prioritization-for-product-managers/

ProductPlan. (n.d.). What is the Kano model? https://www.productplan.com/glossary/kano-model/