Design critiques are essential to an iterative creative process. Soliciting feedback from stakeholders and team members throughout a project allows designers to change the design before it’s final. Regular critiques during projects can result in:

  • High-quality designs
  • Alignment on project goals and design objectives
  • Direct answers to outstanding design questions
  • Early collaboration with team members and stakeholders

In a perfect world, design critiques run smoothly and result in immediately actionable feedback. But what happens when reality throws us a curve ball?

Dealing with Derailed Critiques

Even with a perfectly planned agenda, design critiques may get derailed. Here are 3 common contributors to derailed design critiques:

  • Hypothetical scenarios: Situations that are not supported by data, or arguments presented for the sake of arguing (“If I could play devil’s advocate…”)
  • Unhelpful and unactionable feedback: Personal opinions about the design or about elements outside the designer’s control
  • Off-topic or premature questions: Questions that aren’t related to the design or are asked too early in the design process

When a design critique starts to go off track, don’t fret! There are several ways to regain control of the room and still receive valuable feedback.

When Team Members Play Devil’s Advocate

Picture this: you’re presenting a prototype to your development team. The prototype was based on requirements received from stakeholders, but now a lot of new scenarios are being thrown around and you’re worried that the design won’t be able to solve those problems. There’s also one person in the room who starts every sentence with “Not to play devil’s advocate, but…” What do you do?

A devil’s advocate is a person who presents a counter, opposing, or unpopular argument to test the strength of a position, without actually being committed to that argument. This scenario is common in teams of all sizes. A study on defending design decisions from Carnegie Mellon University discusses how this type of argument relies on pseudoevidence: a script depicting how a scenario might occur.

Let’s look at an example of pseudoevidence:

A developer being introduced to a design in a critique says, “We should include a button for this feature in the main navigation because people won’t want to hunt for it.”

The statement uses evidence by illustration: the scenario is hypothetical and hasn’t been tested with actual users. Saying “because” doesn’t make it data. Bringing up such scenarios isn’t bad in itself, but it’s easy to get overwhelmed when several untested scenarios suddenly need to be incorporated into your design.

It’s important to understand why these scenarios are brought up during critiques:

  • Team members are trying to make sure the design accounts for many possible situations. Because rework carries negative connotations, developers and product managers want all scenarios out in the open before work begins.
  • Personas and objectives are not well defined. When there isn’t a consensus on the targeted audience, it’s easy to think that anyone and everyone could interact with the design. Well-defined personas and project goals help narrow the focus.
  • People want their ideas heard. They want to be involved in the process and make sure that all relevant ideas are captured.

Tactic to Try: Gather All Scenarios

When several hypothetical scenarios are suggested, pause the critique and try a quick hands-on activity to document them:

  1. Give each person attending the critique 5 minutes to individually capture all the scenarios they can think of on sticky notes or in a digital whiteboard.
  2. Allow each person to quickly share their list of scenarios. Frame this step as a speed round: your goal is to get them all documented, not to discuss the implications of each one.
  3. Assign a rating to each scenario based on criteria such as:
    1. How likely it is to occur
    2. The user or persona targeted
    3. Business objectives

The last step could be done as a group, during the critique, or solo, by the designer. This activity will result in a prioritized list of possible scenarios. Use the list to address the highest-rated scenarios in the next iteration of the design. Don’t try to design for every scenario; focus on 1–3.

A digital whiteboard showing a black-and-white wireframe design with several sticky notes describing hypothetical scenarios and questions from team members. 3 sticky notes are identified as high priority with stars.
Posting the design in a digital whiteboard allows reviewers to document their questions and hypothetical scenarios in one place. Then, the team can discuss and prioritize the top points to be included in the next iteration.
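The rating step above can be sketched as a tiny script: score each scenario against the criteria and sort by total. The scenario names, criteria labels, and 1–5 ratings below are invented examples, not from any real critique.

```python
# Hypothetical sketch of the rating step: score each scenario on a few
# criteria (1-5 scale), then sort by total to surface the 1-3 scenarios
# worth designing for. All names and numbers are made-up examples.

scenarios = {
    "User loses connection mid-task": {"likelihood": 4, "persona_fit": 5, "business_value": 3},
    "Admin bulk-edits 1,000 records": {"likelihood": 1, "persona_fit": 2, "business_value": 4},
    "First-time user skips onboarding": {"likelihood": 5, "persona_fit": 4, "business_value": 4},
}

def prioritize(scenarios, top_n=3):
    """Return (name, ratings) pairs sorted by total rating, highest first."""
    ranked = sorted(
        scenarios.items(),
        key=lambda item: sum(item[1].values()),
        reverse=True,
    )
    return ranked[:top_n]

for name, ratings in prioritize(scenarios):
    print(f"{sum(ratings.values()):>2}  {name}")
```

A simple sum treats every criterion equally; a team that cares more about, say, likelihood than business value could weight the criteria before summing.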

Tactic to Try: Log Questions in a Knowledge Board

Create a knowledge board to track the questions, assumptions, and hypothetical scenarios that come up during design critiques. Unlike parking lots, which usually live for the duration of a meeting, knowledge boards are meant to document knowledge gaps throughout the entire project.

A Trello board displaying four columns of information: Questions, Assumptions, Research, and Facts. Each column has one or more cards with specific information about the column it's referencing.
Keep track of hypothetical scenarios and questions about users in a knowledge board. Use a tool that your team uses often, so that the team feels comfortable adding to it.

A knowledge board will ensure that the hypothetical scenarios brought up in critiques are not lost in conversation. It can also track tested assumptions and proven facts versus broad questions or unverified hypotheses.

When the Feedback Is Subjective

“I can’t put my finger on it, but something feels off.” Have you ever gotten this kind of design feedback from a well-meaning team member or stakeholder?

The most common reason for this type of unactionable and subjective feedback is a lack of proper feedback guidelines. When stakeholders and team members don’t know where to focus, they’ll give any and all feedback, forcing you to work hard to steer the conversation back to your original goals for the critique.

Another reason for unclear feedback is that nondesigners sometimes use different vocabulary than designers to express why a design doesn’t look good. What designers might pinpoint as inconsistent kerning and line height, stakeholders may describe as cluttered or needing more white space. This vocabulary isn’t wrong, but it may take additional conversation to nail down the cause of the issue.

Tactic to Try: Create Feedback Guidelines

When you start any design critique, set expectations about the kind of feedback you need; for example:

  • How the design meets or doesn’t meet user goals
  • How it applies brand guidelines and tone of voice
  • Specific content questions
  • Overall look-and-feel of the visual design

These feedback guidelines will enable your audience to give you actionable feedback based on your needs.

Tactic to Try: Organize Feedback

At the end of a critique or design review, you’ll likely end up with a long list of additions, changes, questions, and next steps. This list can feel overwhelming.

It’s helpful to sort your list into 3 categories:

  1. To do: This is actionable feedback that you can tackle immediately: suggestions you agree with and can integrate into the design right away. Check these off your list first.
  2. To persuade: Items in this category are suggestions you don’t agree with or that contradict the goals of the project. You’ll need to make a case for each of these items and continue the conversation until you reach a compromise.
  3. To clarify: The last category is made up of items that you need additional clarification on before you can move forward. When reviewing your list of feedback post-critique, you may have forgotten the original context of the suggestion or need to follow up with someone else before you can make any changes. Once you find the answers to your questions for an item in this list, you can move it to one of the other two categories.
An open card from a Trello board displaying details of an item, including a description and next steps.
Organize feedback by prioritizing important suggestions and start building your case for each item in the To Persuade column.

When the Conversation Goes off Topic

Sometimes, despite having a clearly defined agenda and feedback guidelines, a conversation may end up going down a rabbit hole for a number of reasons:

  • The team lacks alignment on project goals and objectives.
  • There are no clear feedback guidelines.
  • The wrong people — or dominating personalities — are in the room.
  • Designers are siloed away from the rest of the team or have an opaque design process.

These rabbit holes, commonly feasibility concerns and implementation details, will become important at some point in the design process, but they may not be relevant at this particular moment.

Tactics to Try:

  • Introduce a parking lot. A parking lot keeps the meeting on track while providing a reminder of the topics that were brought up. Make sure to follow up on these topics!
  • Involve team members and stakeholders in the design process. Bringing others into your design process will not only garner more and better ideas, but it will also align people on the specific vocabulary, process, and artifacts involved.


Don’t let the frustration of dealing with flawed feedback keep you from running regular design critiques. Keep these tactics in your designer toolbox and pull them out when you notice that your team members and stakeholders are getting in the weeds.


Erin Friess. 2008. Defending design decisions with usability evidence: a case study. CHI EA '08: CHI '08 Extended Abstracts on Human Factors in Computing Systems (April 2008). DOI: