Prioritizing work into a roadmap can be daunting for UX practitioners. Prioritization methods base these important decisions on objective, relevant criteria instead of subjective opinions.
This article outlines 5 methods for prioritizing work into a UX roadmap:
- Impact–effort matrix
- Feasibility, desirability, and viability scorecard
- RICE method
- MoSCoW analysis
- Kano model
These prioritization methods can be used to prioritize a variety of “items,” ranging from research questions, user segments, and features to ideas and tasks. This article focuses on using these methods within the context of roadmapping—prioritizing problems that need to be solved into a strategic timeline.
1. Impact–Effort Matrix
1.A. Overview
An impact–effort matrix is a 2D visual that plots relative user value against implementation complexity. Variations of this matrix are used across many product-development approaches, including Six Sigma, design thinking, and Agile.
The resulting matrix captures the relative effort necessary to implement candidate features and their impact on the users. It can be subdivided into four quadrants:
- Quick wins include low-effort, high-impact items that are worth pursuing.
- Big bets include high-effort, high-impact items; they should be carefully planned and prototyped, and, if executed, are likely to be differentiators against competitors.
- Money pit includes low-impact, high-effort items that are not worth the business investment; there are better places to spend time and resources.
- Fill-ins comprise low-effort, low-impact items that may be easy to implement but may not be worth the effort as their value is minimal.
A comparative matrix is a malleable tool. While we discuss impact–effort matrices in this article, you can easily replace each axis with other criteria or use multiple matrices to assess more than two criteria. When setting up multiple matrices, position the Quick Wins quadrant (or whatever the equivalent best-outcome quadrant is) in the same spot in each matrix (for example, always in the bottom-left position), so you can easily compare several matrices and identify the items that consistently fall in the best-outcome quadrant.
1.B. Criteria
This prioritization method uses two primary criteria to rank features that are considered for implementation: the impact that the feature will have on the end user and the effort required to implement that feature.
- Impact is the value the item will bring to the end user. The level of impact an item will have on end users depends on users’ needs, their alternatives, and the severity of the pain point the item solves.
- Effort is the amount of labor and resources required to solve the problem. The more technically complex the item, the more effort it will require.
1.C. Process
Items are gathered on a whiteboard and their relative scores on the impact and effort dimensions are established through voting. Team members are given colored dots (one color per dimension) to vote for those items that they consider to rate highly on one or both dimensions.
A general rule of thumb is that the number of votes per person is half the number of items being prioritized. It’s also possible that certain team members vote on a single dimension, according to their expertise — for example, UX professionals may rank impact, while developers may rank implementation effort.
After team members have silently voted on items, the items can be placed collaboratively on an impact–effort matrix (the x-axis represents effort, while the y-axis represents impact) according to the number of impact and effort votes received.
Once everything is placed onto the chart, discuss the results and compare items, prioritizing those in the quick-wins and big-bets quadrants. Feel free to use the artifact as a platform for negotiation — throughout discussion with the team, it’s okay to collaboratively move items. However, at the end, there should be agreement on the final placement and the artifact should be documented and saved so it can easily be referenced in the future.
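To make the tallying concrete, here is a minimal sketch in Python. The items, vote counts, and the median split used to separate “high” from “low” on each dimension are all illustrative assumptions; in practice, placement is usually negotiated on the whiteboard rather than computed.

```python
# Minimal sketch: place items into impact-effort quadrants from dot-vote tallies.
# Items, vote counts, and the median split are illustrative assumptions.
from statistics import median

# (item, impact votes, effort votes) collected from the team
votes = [
    ("Redesign onboarding", 7, 6),
    ("Fix broken search filters", 6, 2),
    ("Animated splash screen", 1, 1),
    ("Rebuild reporting engine", 5, 8),
]

impact_cut = median(v[1] for v in votes)
effort_cut = median(v[2] for v in votes)

def quadrant(impact: float, effort: float) -> str:
    high_impact = impact >= impact_cut
    high_effort = effort >= effort_cut
    if high_impact and not high_effort:
        return "Quick win"
    if high_impact and high_effort:
        return "Big bet"
    if high_effort:
        return "Money pit"
    return "Fill-in"

for item, impact, effort in votes:
    print(f"{item}: {quadrant(impact, effort)}")
```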
1.D. Best for Quick, Collaborative Prioritization
An impact–effort matrix is best suited for quick, collaborative prioritizations. The method has a few advantages:
- The output is a shared visual that aligns mental models and builds common ground.
- It is democratic — each person can express their own opinion through a vote.
- It can be done relatively quickly due to its simplicity.
2. Feasibility, Desirability, and Viability Scorecard
2.A. Overview
This method was developed by IDEO in the early 2000s. It ranks items based on a sum of individual scores across three criteria: feasibility, desirability, and viability.
2.B. Criteria
This prioritization method uses three criteria to rank items (i.e., features to be implemented):
- Feasibility: the degree to which the item can be technically built. Do the skills and expertise exist to create this solution?
- Desirability: how much users want the item. What unique value proposition does it provide? Is the solution fundamentally needed, or are users otherwise able to accomplish their goals?
- Viability: whether the item is worthwhile for the business to pursue. Does the item benefit the business? What are the costs to the business, and is the solution sustainable over time?
2.C. Process
Create a table with one row for each candidate item and columns for the three criteria — feasibility, desirability, and viability. Then, determine a numeric scoring scale for each criterion (for example, a scale from 1 to 10, with 1 being a low score).
Next, give each item a score across each criterion. Scoring should be as informed as possible — aim to include team members who have complementary expertise. Once each item is scored across each criterion, calculate its total score. Rank the items by sorting the table from highest to lowest total score, then discuss the results with your team.
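A minimal sketch of the scoring arithmetic; the item names and scores are hypothetical, and each criterion uses the 1–10 scale mentioned above.

```python
# Minimal sketch: total and rank feasibility / desirability / viability scores.
# Item names and scores are hypothetical; each criterion uses a 1-10 scale.
items = {
    "Saved searches":  {"feasibility": 8, "desirability": 6, "viability": 7},
    "Offline mode":    {"feasibility": 3, "desirability": 9, "viability": 5},
    "Team workspaces": {"feasibility": 6, "desirability": 7, "viability": 9},
}

ranked = sorted(items.items(), key=lambda kv: sum(kv[1].values()), reverse=True)

for rank, (name, scores) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: total {sum(scores.values())}")
```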
2.D. Best for Customized Criteria
This scorecard format is highly customizable. You can add columns to reflect criteria specific to your organization’s context and goals. You can also replace the criteria with others relevant to you. For example, the NUF Test, created by Dave Gray, uses the same scorecard format, but with New, Useful, Feasible as the criteria set.
Another common modification is assigning weights to the different criteria — with those that are very important weighing more heavily in the final score.
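As a sketch of that modification, the total becomes a weighted sum rather than a plain sum; the weights below are illustrative assumptions, not recommendations.

```python
# Weighted variant of the scorecard total; the weights are illustrative only.
weights = {"feasibility": 1.0, "desirability": 2.0, "viability": 1.5}

def weighted_total(scores: dict) -> float:
    return sum(weights[criterion] * value for criterion, value in scores.items())

print(weighted_total({"feasibility": 3, "desirability": 9, "viability": 5}))  # 28.5
```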
3. RICE Method
3.A. Overview
RICE is a prioritization framework developed by Intercom. It uses four factors (reach, impact, confidence, and effort) to prioritize which features to implement.
3.B. Criteria
The RICE method is based on scoring each item on four dimensions:
- Reach: the number of users the item affects within a given time period
- Impact: the value added to users
- Confidence: how confident you are in your estimates of the other criteria (for example, highly confident if multiple data sources support your evaluation)
- Effort: the amount of work necessary to implement the item
3.C. Process
Using the RICE method is straightforward. Separate scores are assigned for each criterion, then an overall score is calculated.
- A reach score is often estimated by looking at the number of users per time period (e.g., week, year); ideally, this number is pulled from digital analytics or frequency metrics.
- The impact score should reflect how much the item will increase delight or alleviate friction; it is hard to precisely calculate, and, thus, it’s usually assigned a score (for example, through voting, like in the previous methods), often on a scale from 0.25 (low) to 3 (high).
- The confidence score is a percentage that represents how much you and your team trust the reach and impact scores. 100% represents high confidence, while 25% represents wild guesses.
- The effort score is calculated as “person-months” — the amount of time it will take all team members to complete the item. For example, an item is 6 person-months if it would require 3 months of work from a designer and 1 month from 3 separate developers.
Once you have each of the 4 criterion scores, use the formula to calculate the final score for each item: multiply the reach, impact, and confidence scores and divide the result by the effort score. Then compare, discuss, and reevaluate all the items’ scores with your team.
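The arithmetic is easy to check in a few lines. Here is a minimal sketch with hypothetical items and scores (reach per quarter, impact on the 0.25–3 scale, confidence as a fraction, effort in person-months).

```python
# Minimal sketch of the RICE calculation: (reach * impact * confidence) / effort.
# All items and numbers are hypothetical.
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    return (reach * impact * confidence) / effort

items = [
    # (name, reach per quarter, impact on 0.25-3 scale, confidence 0-1, effort in person-months)
    ("Redesign onboarding", 4000, 2.0, 0.8, 6),
    ("Fix broken search filters", 1500, 1.0, 1.0, 1),
    ("Animated splash screen", 6000, 0.25, 0.5, 2),
]

for name, reach, impact, confidence, effort in items:
    print(f"{name}: {rice_score(reach, impact, confidence, effort):.0f}")
```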
3.D. Best for Technically Oriented Teams
The RICE method works well for organizations that are more technical in nature (for example, when stakeholders are comfortable with equations or spreadsheets). The RICE method also works well when there are many items that need to be prioritized. Consider including peers with diverse domains of expertise in the RICE process and assign them the task of calculating the score for the criterion that relates to their expertise.
4. MoSCoW Analysis
4.A. Overview
MoSCoW analysis is a method for clustering items into four primary groups: Must Have, Should Have, Could Have, and Will Not Have. It was created by Dai Clegg and is used in many Agile frameworks.
4.B. Criteria
This prioritization approach groups items into four buckets:
- Must have: items that are vital to the product or project. Think of these as required for anything else to happen. If these items aren’t delivered, there is no point in delivering the solution at all. Without them, the product won’t work, a law will be broken, or the project will be useless.
- Should have: items that are important to the project or context, but not absolutely mandatory. These items support core functionality (that will be painful to leave out), but the project or product will still work without them.
- Could have: items that are not essential, but wanted and nice to have. They have a small impact if left out.
- Will not have: items that are not needed. They don’t present enough value and can be deprioritized or dropped.
4.C. Process
MoSCoW analysis can be applied to an entire project (start to finish) or to a project increment (a sprint or specific time horizon).
Begin by identifying the scope you are prioritizing items for. If your goal is to create a UX roadmap, you’ll usually have to prioritize for the first three time horizons: now (work occurring in the next 2 months), next (work occurring in the next 6 months), and future (work occurring in the next year).
Compile the items being prioritized and give each team member 3 weighted voting dots (one dot with a 1 on it, one with a 2, and one with a 3). Ask team members to assign their dots to the items they believe are most important, with the 3 dot weighted most heavily.
Add up each item’s score based on the weighted votes (a 3 dot = 3 points, and so on). Identify the items with the highest scores and make sure that everybody in the group agrees on their importance.
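A minimal sketch of the tally, assuming hypothetical items where each list holds the dot values (1, 2, or 3) an item received.

```python
# Minimal sketch: tally weighted dot votes; each team member places dots worth 1, 2, and 3.
# Items and the dots they received are hypothetical.
votes = {
    "Password reset flow": [3, 3, 2],
    "Export to CSV": [2, 1],
    "Animated avatars": [1],
}

for item, dots in sorted(votes.items(), key=lambda kv: sum(kv[1]), reverse=True):
    print(f"{item}: {sum(dots)} points from {len(dots)} dots")
```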
As each item is discussed and agreed upon as a Must Have, move it to a new dedicated space. Repeat this process for lower-priority items and assign them to the Should Have, Could Have, and Will Not Have groups based on their scores.
Once you have assigned each item to one of the four groups, establish the resources and bandwidth required for each group, starting with the Must Haves. Keep track of the total bandwidth and resources at your disposal, distributing and allocating your total amount across Must Haves (which should get the most resources), Should Haves (with the second most resources), and finally Could Haves (with few resources).
There is not a clear threshold for how many items should be in each group. To determine this number, return to the goal of the prioritization activity. For example, if you are prioritizing items in a backlog, there is only time for so many tasks to be achieved in one sprint. In this scenario, all Must Haves should be easily achievable within one sprint; this constraint will limit how many items can be placed in this group.
4.D. Best for Teams with Clear Time Boxes
MoSCoW is a good prioritization method for teams looking for a simplified approach (given the relatively vague prioritization criteria set) and with a clear time box identified for the work. Without a clearly scoped timeline for completing the work, teams run the risk of overloading the Must Haves (of course, everything will feel like a Must Have if the timeline is the next two years!).
5. Kano Model
5.A. Overview
The Kano model was published by Dr. Noriaki Kano in 1984 and is a primary prioritization method in the Six Sigma framework. Items are grouped into four categories according to user satisfaction and functionality and plotted on a 2D graph.
5.B. Criteria
This prioritization method uses two primary criteria to rank items: functionality and satisfaction.
- Functionality represents the degree to which the item can be implemented by the company. It can have 5 possible values ranging from -2 to 2:
- None (-2): the solution cannot be implemented
- Some (-1): the solution can be partly implemented
- Basic (0): the solution’s primary functions can be implemented, but nothing more
- Good (1): the solution can be implemented to an acceptable degree
- Best (2): the solution can be implemented to its full potential
- Customer satisfaction for each item is also assessed on a spectrum from -2 to 2:
- Frustrated (-2): the solution causes additional hardship for the user
- Dissatisfied (-1): the solution does not meet users’ expectations
- Neutral (0): the solution neither satisfies nor dissatisfies users
- Satisfied (1): the solution meets users’ expectations
- Delighted (2): the solution exceeds users’ expectations
5.C. Process
Each item is first assigned a satisfaction score and a functionality score. The satisfaction score should be based on user data — for example, on existing user research or on a top-task user survey asking users to rate the importance of each feature; the functionality score can be rooted in the collective expertise of the team.
These scores are then used to plot items onto a 2D graph, with the x-axis corresponding to functionality and the y-axis to satisfaction. Each axis goes from -2 to 2.
Based on their placement on the graph, items fall into one of four categories:
- The Attractive category (often called Excitement) contains items that are likely to bring a considerable increase in user delight. A characteristic of this category is the disproportionate increase in satisfaction relative to functionality. Your users may not even notice the absence of these items (because they weren’t expected in the first place), but with good-enough implementation, user excitement can grow exponentially. The items in the Attractive category are those with a satisfaction score of 0 or better. These items appear above the blue Attractive line in the Kano illustration above.
- The Performance category contains items that are utilitarian. Unlike in other categories, satisfaction here grows proportionately with functionality: the more you invest in items within this category, the more customer satisfaction they are likely to prompt. The items in the Performance category have equal satisfaction and functionality scores and fall on the green line in the Kano illustration above.
- The Indifferent category contains items that users feel neutral towards — satisfaction does not significantly increase or decrease with their functionality and is always 0. Regardless of the amount of investment put into these items, users won’t care. These items are all placed on the dark blue Indifference line (which overlaps with the x-axis).
- The Must-be category contains basic items that users expect. Users assume these capabilities exist. They are unlikely to make customers more satisfied, but without them, customers will be disproportionately dissatisfied. Items fall into the Must-be category when their satisfaction score is 0 or worse. These are the items in the purple area of the Kano diagram, below the purple Must Be line.
Once items are assigned to groups, make sure that everybody on the team agrees with the assignment. Items with scores of (0,0), (-2,0), and (+2,0) may initially belong to two groups. In these cases, discuss the item and ask whether user value will grow proportionately with your team’s investment. If the answer is yes, group the item with Performance. If not, group the item with Indifferent.
Move items as needed, then prioritize items into your roadmap. Items in the Performance category should have the highest priority, followed by Must-be, Attractive, and then Indifferent.
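To make the grouping concrete, here is a minimal sketch. It encodes one simplified reading of the rules above (Performance on the diagonal, Indifferent on the satisfaction-equals-0 line, Attractive above the diagonal, Must-be below it) and returns multiple candidate categories for ambiguous scores so the team can resolve them in discussion. The items and scores are hypothetical.

```python
# Minimal sketch: candidate Kano categories from functionality and satisfaction
# scores on the -2..2 scales described above. Ambiguous scores return more than
# one category and should be settled in team discussion.
def kano_categories(functionality: int, satisfaction: int) -> set:
    categories = set()
    if satisfaction == 0:
        categories.add("Indifferent")      # on the Indifference line
    if satisfaction == functionality:
        categories.add("Performance")      # on the diagonal
    elif satisfaction > functionality:
        categories.add("Attractive")       # above the diagonal
    else:
        categories.add("Must-be")          # below the diagonal
    return categories

# Hypothetical items: (name, functionality score, satisfaction score)
items = [
    ("Two-factor authentication", 2, 0),  # ambiguous: Indifferent or Must-be
    ("Bulk export", 1, 1),                # Performance
    ("Smart suggestions", 0, 2),          # Attractive
]

# Priority order described above: Performance, Must-be, Attractive, Indifferent
priority = ["Performance", "Must-be", "Attractive", "Indifferent"]
for name, functionality, satisfaction in items:
    candidates = sorted(kano_categories(functionality, satisfaction), key=priority.index)
    print(f"{name}: {candidates}")
```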
5.D. Best for Forcing a User-Centric Prioritization
The Kano model is a good approach for teams who have a hard time prioritizing based on the user — often due to politics or a traditional development-driven culture. The Kano model introduces user research directly into the prioritization process and mandates discussion around user expectations.
Conclusion
There are many more prioritization methods, aside from the five mentioned in this article. (It’s also easy to imagine variations on these five.) One method is not better than another. Consider your project’s context, team culture, and success criteria when choosing a prioritization approach.
Once you find an approach that works, don’t be afraid to iterate — adjust and adapt it to fit your needs or appeal to your team. Involve others in this process. The best prioritization methods are the ones that everyone on your team, including stakeholders, buys into.
References
McBride, S. (2018). RICE: Simple prioritization for product managers. Intercom. https://www.intercom.com/blog/rice-simple-prioritization-for-product-managers/
ProductPlan. (n.d.). What is the Kano model? https://www.productplan.com/glossary/kano-model/