Qualitative surveys ask open-ended questions to find out more about a topic, sometimes in preparation for quantitative surveys. Test surveys iteratively to eliminate problems.

Sooner or later, most UX professionals will need to conduct a survey. Survey science from the quantitative side can be intimidating: it’s a specialized realm full of statistics, random sampling, and scary stories of people confidently getting it wrong. Don’t be afraid of doing qualitative surveys, though. Sure, it’s important to learn from survey experts, but you don’t have to be a survey specialist to get actionable data. You do have to find and fix the bugs in your questions first, however.

Quantitative vs. Qualitative Surveys

Quantitative surveys count results: how many people do this vs. that (or rather, how many say that they do). Use quant surveys when you need to ask questions that can be answered by checkbox or radio button, and when you want to be sure your data applies broadly to a large number of people. Quantitative surveys follow standard methods for randomly selecting a large number of participants from a target group and use statistical analysis to ensure that the results are statistically significant and representative of the whole population.

Qualitative surveys ask open-ended questions. Use them when you need to generate useful information via a conversation rather than a vote, such as when you’re not sure what the right set of answers might include. Qualitative surveys ask for comments, feedback, suggestions, and other kinds of responses that aren’t as easily classified and tallied as numbers are. You can survey fewer people than in a quantitative survey and still get rich data.

It’s possible to mix the two kinds of surveys, and it’s especially useful to do small, primarily qualitative surveys first to help you generate good answers to count later in a bigger survey. This one-two-punch strategy is much preferable to going straight to a closed-ended question with response categories you and your colleagues thought up in your conference room. (Yes, you could add an “other” option, but don’t count on valid statistics for options left to a catch-all bucket.)

Tips for Qualitative Surveys

  1. Test your survey. Here’s the procedure that we recommend:
    1. Draft questions and get feedback from colleagues.
    2. Draft the survey and have colleagues attempt to answer the questions. Ask for comments after each question to help you revise the questions for clarity and usefulness.
    3. Revise the survey and test iteratively on paper, revising between rounds. We typically do 4 rounds of testing, with 1 respondent per round. At this stage, don’t rely on colleagues; recruit participants from the target audience. Run these tests as think-aloud studies: do not send out the survey and rely on written comments, which will never be the same as a real-time stream of commentary.
    4. Randomize some sections and questions of the survey to help ensure that (1) people quitting partway through don’t affect the overall balance of data being collected, and (2) the question or section ordering doesn’t bias people’s responses.
    5. Test the survey in the survey platform’s format with a small set of testers from the target audience, again collecting comments on each page.
    6. Examine the output from the test survey to ensure the data gathered is in an analyzable, useful format.
    7. Revise the survey one more time.
  2. Don’t make your own tool for surveys if you can avoid it. Many solid survey platforms exist, and they can save you lots of time and money.
  3. Decide up front what the survey learning goals are. What do you want to report about? What kind of graphs and tables will you want to deliver?
  4. Write neutral questions that don’t imply particular answers or give away your expectations.
  5. Open vs. closed answers: Asking open-ended questions is the best approach, but it’s easy to get into the weeds in data analysis when every answer is a paragraph or two of prose. Plus, users quickly tire of answering many open-ended questions, which usually require a lot of typing and explanation. That being said, it’s best to ask open-ended questions during survey testing. The variability of the answers to these questions during the testing phase can help you decide whether the question should be open-ended in the final survey or could be replaced with a closed-ended question that would be easier to answer and analyze.
  6. Carefully consider how you will analyze and act on the data. The types of questions you ask determine the kinds of analysis you can do: multiple answers, single answers, open or closed sets, optional and required questions, ratings, rankings, and free-form answer fields are some of the choices open to you when deciding what kinds of answers to accept. (If you won’t act on the data, don’t ask the question; see tip #12.)
  7. Multiple vs. single answers: Multiple-answer questions are often better than single-answer ones, because people usually want to be accurate, and often several answers apply to them. Survey testing on paper can help you identify questions that need to allow multiple answers: people will mark several answers even when you ask them to mark only one (and they will complain about it). If you are counting answers, consider not only how many responses each answer got, but also how many choices people made.
  8. Front-load the most important questions, because people will quit partway through. Ensure that partial responses will be recorded anyway.
  9. Provide responses such as “Not applicable” and “Don’t use” to prevent people from skipping questions or giving fake answers. People get angry when asked questions they can’t answer honestly, and answering anyway skews your data.
  10. People have trouble understanding required and optional signals on survey questions and forms. It’s common practice to mark required fields with a red asterisk (*), but in our testing that didn’t work well enough, even in a survey of UX professionals, many of whom likely design such forms: people complained that required fields were not marked. Pages stating at the top that all questions were required (or all optional) didn’t help either, because many people ignore instruction text. Use “(Optional)” and/or “(Required)” after each question to be sure people understand.
  11. When the marking is not clear enough, many people feel obligated to answer even optional questions. Practically speaking, that means you don’t have to require every question, but you should be careful not to include so many questions that people quit the survey partway through.
  12. Keep it short. Every extra question reduces your response rate, decreases validity, and makes all your results suspect. It’s better to administer 2 short surveys to 2 different subsamples of your audience than to lump everything you want to know into one long survey that the average customer won’t complete. Twenty questions are too many unless you have a highly motivated set of participants; people are much more likely to participate in 1-question surveys. Be sensitive to what your pilot testers tell you, and realistically estimate the time needed to complete the survey. The more open-ended questions and complex rankings you ask people to complete, the more respondents you’ll lose.
  13. People often overlook examples and instructions that are on the right, after questions. Move instructions and examples to the left margin instead (or the opposite side, for languages that read right to left), to put them in the scannability zone and place them closer to the person’s focus of attention, which is on the answer area.
  14. Use one-line directions if you can; less is more. Just as in our original writing-for-the-web studies, people read more text when there is a lot less of it. People complain about not getting enough information, but when it’s there, they don’t read it because it’s too long.
  15. People tend not to read paragraphs or introductions. If you must use a paragraph, bold important ideas to help ensure that most people, who scan instead of reading, glean that information.
  16. Think carefully about using subjective terms, such as “essential,” “useful,” or “frequent.” Terms that require people to make a judgment call may get at how they feel, but such questions can be confusing to evaluate logically; rating scales are more flexible. If you do need to know how participants perceive a certain aspect, indicate that perception is what you want them to base their answer on (for example, instead of asking “Is X essential for Y?” ask “Do you feel that X is essential for Y?”).
  17. Define terms as needed in order to start from a shared meaning. People might quibble about the definition, but it’s better than getting misleading answers because of a misinterpretation.
  18. Don’t ask about things that your analytics can already tell you; instead, ask “why” and “how” questions.
  19. Include a survey professional in your test group. Your survey method may be criticized after the fact, so get expert advice before you conduct your survey.
  20. Answer ordering and first words matter, especially in long lists. Logical groupings, randomized lists, and short lists work better than long, alphabetical lists. Ordering issues can skew your data, so test alternative list orderings when you test your survey. When selecting from a list, many people choose the first thing that sounds like it might be right and move on to the next question. (For one way to randomize option order per respondent, see the first sketch after this list.)
    • Items at the top and bottom of a list may attract more attention than items in the middle of a long list.
    • Because people scan instead of reading, the first words of list items can cause them to overlook the right choice, especially in alphabetical lists.
    • Unordered lists can be more time-consuming to scan than lists with an obvious ordering principle, but they seem to yield better answers, especially if you can sort the list differently for different respondents.
  21. Test where best to place page breaks. Sometimes it’s important for people to see all of a topic’s questions before they answer any of them; otherwise they volunteer answers to questions they haven’t yet seen and later write “see previous answer,” which adds extra interpretation steps to data analysis. To find questions with these kinds of problems, test the survey with each question on its own page first, and then collocate the questions that need to be shown together on one page in the next test version. In some cases, simply forcing one question to come before another fixes the problem.
  22. If possible, don’t annoy people by asking questions that don’t apply to them. When respondents choose a particular answer, show them one or two more questions about that topic that apply in that case. Choose a survey platform that supports conditional questions, so you can avoid presenting nonapplicable questions and keep the list of questions as short as possible for each respondent. If most of your questions are conditional, you might put a key conditional question early in the list and then branch to different versions of the survey for the remaining questions. (A minimal branching sketch appears after this list.)
  23. Take your data with a grain of salt. Unlike quantitative survey metrics, qualitative survey metrics are rarely representative of the whole target audience; they represent only the opinions of the respondents. You can still present descriptive statistics (such as how many people selected a specific response to a multiple-choice question) to summarize the results, but, unless you use sound statistical methods, you cannot say whether those results reflect noise or sample selection rather than the true attitudes of your whole user population.
  24. Count whatever you can count. Researchers often refer to coding and normalizing data during analysis. Coding is the process of turning text answers into something you can count, so you can extract the bigger trends and report them in a way that makes sense to your report’s audience. You can capture rich textual data for understanding and quoting, and also code some types of responses, for example as 0, 1, or 2 (no, partially, yes; or none, some, all), or treat many different phrases as meaning the same thing (as when people use synonyms to express the same idea). This coding can be done after the data is collected, in a spreadsheet; see the coding sketch after this list.
  25. Show, don’t tell. Use lots of graphs, charts, and tables, with an executive summary of key takeaways.
  26. Consider graphs before you decide on a spreadsheet layout. Unfortunately, some spreadsheet applications won’t make reasonable graphs until you switch columns to rows or rows to columns, and it’s easiest to plan for this necessity before you analyze your data. You can also copy the chart data onto its own spreadsheet page and reorder it there to make the charts; just be careful not to introduce data-transfer errors.
  27. Beware of disappearing chart data. Some spreadsheet applications silently drop data from charts when you change the font size or the chart size.
  28. Don’t embed data if you can screenshot it. Screenshots (PNG format is recommended) are lovely and robust over time, unlike embedded data, which tends to corrupt documents, become unlinked, or get changed by mistake.
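
Most survey platforms can randomize answer order for you, so check there first. If you do need to roll your own randomization (tip #20), here is a minimal Python sketch, assuming respondents have stable IDs; the function name and the idea of pinning “Not applicable” to the end are our illustrative choices, not any platform’s API.

```python
import random

def randomized_options(options, respondent_id, pinned=("Not applicable",)):
    """Return answer options in a per-respondent random order.

    Options listed in `pinned` (such as "Not applicable") stay at the
    end of the list, where respondents expect to find them.
    """
    shuffled = [opt for opt in options if opt not in pinned]
    # Seeding with the respondent ID keeps each person's order stable
    # across page reloads while still varying it across respondents.
    random.Random(respondent_id).shuffle(shuffled)
    return shuffled + [opt for opt in options if opt in pinned]

# Two respondents see the same options in different orders.
options = ["Email", "Phone", "Live chat", "Help center", "Not applicable"]
print(randomized_options(options, respondent_id=101))
print(randomized_options(options, respondent_id=102))
```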
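
Similarly, conditional questions (tip #22) are built into most survey platforms as skip logic. The sketch below only illustrates the underlying mechanics; the question IDs and the BRANCHES table are hypothetical, not any real tool’s format.

```python
# Hypothetical questions, keyed by ID.
QUESTIONS = {
    "uses_app": "Do you use our mobile app? (yes/no)",
    "app_tasks": "Which tasks do you usually do in the app?",
    "why_not": "What keeps you from using the app?",
    "wrap_up": "Anything else you'd like to tell us?",
}

# (current question, answer) -> next question. Unlisted answers fall
# through to the default, so nobody sees questions that don't apply.
BRANCHES = {
    ("uses_app", "yes"): "app_tasks",
    ("uses_app", "no"): "why_not",
}

def next_question(current_id, answer, default="wrap_up"):
    """Route each respondent past questions that don't apply to them."""
    return BRANCHES.get((current_id, answer.strip().lower()), default)

print(next_question("uses_app", "No"))    # -> "why_not"
print(next_question("app_tasks", "two"))  # -> "wrap_up"
```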
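
Finally, for coding open-ended answers (tip #24), a spreadsheet is usually all you need; this sketch shows the same idea in Python. The coding scheme here is illustrative, and in practice you would grow it as you read through real responses.

```python
from collections import Counter

# Illustrative coding scheme: map raw phrases (lowercased) onto a small
# set of codes you can count (0 = no/none, 1 = partially/some, 2 = yes/all).
CODES = {
    "yes": 2, "always": 2, "every time": 2,
    "sometimes": 1, "occasionally": 1, "partially": 1,
    "no": 0, "never": 0,
}

def code_answer(text):
    """Return the numeric code for a free-text answer, or None if uncoded."""
    return CODES.get(text.strip().lower())

answers = ["Yes", "occasionally", "Never", "it depends", "Always"]
coded = [code_answer(a) for a in answers]

print(Counter(c for c in coded if c is not None))        # tally coded answers
print([a for a, c in zip(answers, coded) if c is None])  # review these by hand
```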

Conclusion

Qualitative surveys are tools for gathering rich feedback. They can also help you discover which questions you need to ask, and the best way to ask them, for a later quantitative survey. Improve surveys through iterative testing with open-ended feedback. Test surveys on paper first to avoid time-consuming rework in the survey platform; then test online to see the effects of page order and question randomization and to gauge how useful the automated results data will be.