People have different needs depending on what type of tasks they’re performing. Finding a specific fact (for example, what day is National Ice Cream Day) may only require a few keystrokes.

In contrast, research tasks (like understanding why people have insomnia and ways to treat it) can take several days or even weeks of work.

Three Types of Tasks

In a recent large-scale survey study, we identified three different types of online information-seeking behaviors based on their purpose:

  • Acquire: The user looks for a fact, finds product information, or downloads something. For example, one participant looked up the steps to perform CPR.
  • Compare/Choose: The user evaluates multiple products or information sources to make a decision. For example, one participant compared the prices and features of several pieces of specialized scientific equipment to decide which one to purchase.
  • Understand: The user gains understanding of some topic. For instance, one participant reported that he decided against starting the Keto diet based on the information that he had found online.

We can use a single example to differentiate these three types of info-seeking behavior. Imagine your WiFi router isn’t working. If you look up the phone number of your service provider, you are performing an Acquire task. If you decide to buy a new router and choose from several brands and models, you are doing a Compare/Choose task. If you want to know more about how a router works, you are performing an Understand task.

These three task types were not equally frequent in our data: critical information was more likely to be obtained through Understand and Compare/Choose activities than through Acquire activities.

In this article, we investigate the different characteristics of each of these three task types. We use data from several studies:

  • Critical-incident study. In a survey study, 498 respondents recalled a recent instance in which they found important online information that led to a significant decision or action. We also asked questions about the context of the incident: when the incident happened, which device(s) they were using, and how satisfied they were with the whole process.
  • Search study. We analyzed 471 search instances from four different studies, covering different task types. Multiple measures were used to capture search behavior, including the time until the first click, the number of characters in search queries, and the page elements that users clicked or looked at.
  • Usability-testing studies. We included data from the search tasks performed across two sets of usability-testing studies.

Acquire Tasks: Less Memorable

In our critical-incident study, we asked people when the reported incident had happened. The majority of the Acquire incidents (57%) had taken place within the past week, compared with only about a third of each of the other two task types. (These differences were statistically significant by N-1 Chi-Square tests: p < 0.001 between Acquire and Compare/Choose, and p < 0.001 between Acquire and Understand.) In contrast, only 8% of the Acquire instances happened more than 3 months ago, compared with 29% of Compare/Choose (p < 0.05) and 23% of Understand tasks (p < 0.05). This finding suggests that Acquire tasks are less memorable than Compare/Choose and Understand tasks.
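
For readers who want to reproduce this kind of comparison, below is a minimal sketch of the N-1 chi-square test for two proportions (a Pearson chi-square statistic scaled by (N − 1)/N). The counts are hypothetical, invented only to mirror the reported percentages; the article does not publish the underlying group sizes.

    # Minimal sketch: N-1 chi-square test for two proportions.
    # All counts are hypothetical; the real group sizes are not published.
    from scipy.stats import chi2

    def n_minus_1_chi_square(hits_a, total_a, hits_b, total_b):
        """Return (statistic, p-value) for a 2x2 N-1 chi-square test."""
        a, b = hits_a, total_a - hits_a   # group A: yes / no
        c, d = hits_b, total_b - hits_b   # group B: yes / no
        n = total_a + total_b
        stat = (a * d - b * c) ** 2 * (n - 1) / (
            (a + b) * (c + d) * (a + c) * (b + d))
        return stat, chi2.sf(stat, df=1)  # 1 degree of freedom

    # Hypothetical: 57% of 150 Acquire incidents vs. 33% of 180
    # Compare/Choose incidents reported within the past week.
    stat, p = n_minus_1_chi_square(86, 150, 59, 180)
    print(f"chi-square(N-1) = {stat:.2f}, p = {p:.4f}")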

People reported more Acquire incidents from the past week, but more Compare/Choose and Understand critical incidents from more than 3 months ago. This distribution likely reflects their memory for the incidents, suggesting that the Acquire activities were harder to remember than the other activities.

Why would people be less likely to remember older Acquire activities than older Compare/Choose or Understand activities? Our hypothesis is that Acquire activities are typically less complex and thus require less time; as a result, people don’t get a chance to commit those activities to memory.

For example, a respondent wrote about an Understand task she performed more than 6 months ago, “Found out my baby had congenital heart disease, looked up information online, decided to have a heart scan at a year old instead of at a later age, because of the information that I found online along with the doctor.” We can imagine how much time and effort she put into understanding the pros and cons of doing the heart scan at a very young age, which led to an important decision. Another respondent reported a typical Acquire task that happened within the past week: “I found a coupon from DSW online and used it in store.” Obviously, such a task required much less effort and fewer cognitive resources than the previous one.

(Of course, the emotional load of the Understand activity in the example above contributed to its memorability, but we don’t necessarily have evidence that Understand or Compare/Choose tasks are typically more important or carry a larger emotional charge than Acquire tasks. All we did was ask users to recall important online use — if any graduate students are reading this article, an interesting follow-up study would be to ask about emotionally impactful Internet use.)

We got some additional evidence for our hypothesis that Acquire tasks require less effort from our Search Meta-Analysis Project. This meta-analysis used a slightly different taxonomy than our critical-incident study. That taxonomy was based on task complexity and comprised 3 categories: Fact Finding, Navigation, and Research. Fact Finding and Navigation are comparable to Acquire tasks, and Research tasks can be mapped onto Compare/Choose and Understand tasks.

When we analyzed the time spent on the search-engine-results page until the first click on a search result, we found that people spent significantly more time in Research tasks compared with the other types of tasks combined (p < 0.01). Not only that, but they were much more likely to reformulate their query for Research tasks than for the other types of tasks combined (p < 0.01), and they used more characters in their queries for Research tasks (p < 0.001). Note that this analysis refers only to time spent on the search-engine page, before actually visiting any link. It’s possible that the difference would be even bigger if the totality of the task were considered.

Across all three measures, people put more effort into Research-related searches than into Navigation and Fact Finding searches combined: they spent more time before the first click on the SERP, used more characters in their queries, and were more likely to reformulate their queries.
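
To illustrate how such search measures could be compared, here is a minimal sketch under stated assumptions: the article does not name the specific tests used, so a Mann-Whitney U test is assumed here (time-to-first-click data are typically skewed), and all the per-search numbers are hypothetical.

    # Minimal sketch: comparing time on the SERP before the first click
    # between Research tasks and the other task types combined.
    # The test choice (Mann-Whitney U) and all numbers are assumptions,
    # not the article's actual analysis or data.
    from scipy.stats import mannwhitneyu

    research_times = [12.4, 18.9, 7.5, 22.1, 15.0, 9.8, 30.2, 11.3]  # seconds
    other_times = [4.2, 6.8, 3.1, 9.5, 5.0, 7.7, 2.9, 6.1]           # seconds

    # One-sided test: do Research searches take longer before the first click?
    stat, p = mannwhitneyu(research_times, other_times, alternative="greater")
    print(f"U = {stat:.1f}, p = {p:.4f}")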

Acquire Tasks: Increased User Satisfaction

In our critical-incident study, we asked participants to rate their satisfaction with the sites or apps that they had used in the reported incident on a scale from 1 (very unsatisfied) to 5 (very satisfied).

Respondents who reported Acquire tasks were more satisfied with the sites they had used compared to those who conducted Understand or Compare/Choose tasks. (Both these differences were statistically significant at the p < 0.05 level.)

However, that difference was not large: the Acquire tasks were rated only 0.2 points higher than the other two categories. Additionally, the average satisfaction scores for all three categories were fairly high (over 4). This high satisfaction might be due to survivorship bias: all reported incidents were instances of successful information seeking that led to significant decision making (because that’s what people were asked to recall). Thus, cases where people couldn’t find information weren’t included in this study.

Compared to Acquire tasks, respondents were less satisfied with the apps and sites they used during Compare/Choose and Understand activities.

Top User Expectations for Different Task Types

To understand users’ expectations for different types of tasks, we used thematic analysis to analyze the 190 free-form comments that people gave about the sites or apps they used during the reported critical incidents. For each task type, we identified the most frequently mentioned issues. Note that these expectations are not mutually exclusive: for example, both Acquire and Understand incidents included comments requesting plain language, but this issue was mentioned more frequently in relation to Acquire tasks. Understanding the top expectations for each task type can thus help designers avoid the most common mistakes for that type.

Acquire Tasks: Fast, Direct, Simple

  1. Fast processes and few clicks

    When people try to acquire a fact, document, or other piece of information, they want the process to be fast, with few steps and clicks. Speed benefits all search activities, but users performing Acquire tasks seemed the least patient in our research: they mentioned “faster loading time” and “fewer clicks” more often than people performing Compare/Choose or Understand tasks. (These differences were statistically significant — p < 0.01 between the numbers of these comments for Acquire and Compare/Choose and p < 0.001 between Acquire and Understand.)

    For instance, a respondent wrote “I’m a lawyer and use the internet to search real-estate titles and access other information. I wish there were fewer clicks to perform repetitive procedures.” Another respondent reported that she used the internet to look for a dinner discount, which saved a significant amount of money, and she wished that “it was easier to find the info.”

  2. Direct answer right away

    The need for direct answers was mentioned significantly more by users performing either Acquire or Understand tasks than by those reporting Compare/Choose tasks (for both comparisons, p < 0.01). But their focus differed slightly: for Acquire tasks, users wanted “direct information” in general, whereas for Understand tasks, people mentioned “specific information” tailored to their topic of interest.

    A user who tried to find “how to reset the mileage alert for the oil change on [her] car” reported “I want to find the information a little faster and [I want] more direct information for what I searched for.”

    Search-engine features such as featured snippets and knowledge panels cater to this need for getting factual information fast: they show the answer to a fact-finding question right on the SERP, sparing users extra clicks or reading.

    In our study on the usability of children’s sites, kids took advantage of the featured snippet on Google.com to quickly answer the question “How do you know the colors of dinosaurs?”

  3. Simple and plain language

    A respondent wrote, “I was looking up chickens, and online information helped me decide if one is a rooster or a hen.” She wanted “information [that was] easier to read.”

    One of our usability-test participants was asked to find out what gerrymandering is. She searched for that term, but the complicated wording used by Wikipedia overwhelmed her. She gave up reading and turned to the Simple English Wikipedia (Wikipedia with easy-to-understand explanations), which she found easy to understand. She commented on the first Wikipedia page: “There were all the words I am not familiar with. The way it started off just made me decide not to read it.” Here are the two definitions:

    • Wikipedia: “Gerrymandering is the practice of setting boundaries of electoral districts to favor specific political interests within legislative bodies, often resulting in districts with convoluted, winding boundaries rather than compact areas.”
    • Simple English Wikipedia: “Gerrymandering is when a political group tries to change a voting district to create a result that helps them or hurts the group who is against them.”

    Users don’t expect to put much time and effort into Acquire tasks. If your site mainly facilitates these tasks, investigate how your target user group asks questions. Use plain language that can be easily understood to increase the chance that your content can be consumed by a large audience.

    However, it is worthwhile to point out that participants in our study also demanded easy-to-read language when performing Understand tasks, and there was no significant difference between these two groups in the proportion of this demand. This finding makes sense, given that plain language benefits users in all types of tasks. (For a rough way to quantify wording differences like the two definitions above, see the sketch below.)
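
    As an illustration, here is a minimal, self-contained sketch that scores the two gerrymandering definitions above with the Flesch reading-ease formula (higher scores read more easily). The syllable counter is a crude vowel-group heuristic, and the metric is only a proxy for the plain-language issues our participants described.

        # Minimal sketch: Flesch reading ease for the two definitions above.
        # The syllable counter is a crude heuristic, good enough for a
        # rough comparison, not a production readability tool.
        import re

        def count_syllables(word):
            # Approximate syllables as runs of consecutive vowels.
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def flesch_reading_ease(text):
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text)
            syllables = sum(count_syllables(w) for w in words)
            return (206.835 - 1.015 * (len(words) / sentences)
                    - 84.6 * (syllables / len(words)))

        wikipedia = ("Gerrymandering is the practice of setting boundaries "
                     "of electoral districts to favor specific political "
                     "interests within legislative bodies, often resulting "
                     "in districts with convoluted, winding boundaries "
                     "rather than compact areas.")
        simple = ("Gerrymandering is when a political group tries to change "
                  "a voting district to create a result that helps them or "
                  "hurts the group who is against them.")

        for name, text in [("Wikipedia", wikipedia),
                           ("Simple English Wikipedia", simple)]:
            print(f"{name}: {flesch_reading_ease(text):.1f}")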

Compare/Choose Tasks: More Information, More Support Tools

Here are the top comments received for Compare/Choose tasks:

  1. Comprehensive information and descriptions, from both companies and customers, in a variety of formats (e.g., pictures and videos)

    Several participants in our critical-incident study complained about insufficient information:

    • “My mom is going to have major surgery, so I went online to find her surgeon. I would add anonymous personal statements from other patients to [the surgeon’s] webpage.”
    • “Choosing my first apartment. I wish there were better pictures to more accurately represent all of the apartments.”
    • “We were evaluating our cable/Internet service. We researched options online to determine the best fit for our needs. It’s hard to find objective, unbiased information on most products and services. Most of the information available is a sales spiel.”

    When people are making Compare/Choose decisions, information from multiple perspectives is highly appreciated. For example, when we asked usability-testing participants to read an article on the Sonos One speaker and decide whether they would like to purchase it, many checked the user comments on the article page or even went to Amazon and Reddit to view the user reviews of that product. People were, however, sensitive to “marketese” and didn’t trust content that sounded too geared towards making a sale.

    Though people performing all types of tasks appreciated the use of media, information from multiple perspectives was requested only by users performing Compare/Choose tasks. Thus, providing a variety of information can help them make their choices confidently.

  2. Presenting the key information first

    One top demand of Compare/Choose tasks was putting the pricing information upfront. This is a unique need for users doing comparisons — they want to optimize their efficiency by minimizing the effort of finding key information for each product.

    For example, a respondent who was seeking information about nursing homes and assisted living, wanted the sites to put the cost on the main page. Another user who tried to buy a pair of eyeglasses on the Warby Parker site wanted to know the cost of the different available lenses — the website only showed pictures of glasses on the main page, with no pricing information.

    Hiding desired information is dangerous: users may perceive unseen information as nonexistent and readily leave your site because they don’t see what they want.

    Warbyparker.com. The pricing information of glasses was not visible on the search-results page. Users had to visit the product detail page to see the price of each pair of eyeglasses.

  3. Comparison tables

    Good comparison tables can solve the main pain point of users performing Compare/Choose tasks: making comparisons across key product parameters.

    Here are some user comments:

    • “Used the internet to search 8-10 companies for prices on kitchen appliances. Then created a spreadsheet to monitor the prices of each store for 1 year. I would love to make it do everything on its own so that I didn’t have to check prices each week.”
    • “Was researching the best elliptical machine for my specific needs. I wish they could make it easier to compare different elliptical machines versions/brands on one screen.”

    A usability-testing participant who was asked to learn about Wells Fargo credit cards praised the product-comparison feature:

    “[The credit-card comparison table] is great, for bolding key features. It has all the details, and they are organized in the level that I can compare across the screen, like how long [before] the APR may change for each card.”

    Wellsfargo.com: One study participant praised the credit-card comparison table.

For Compare/Choose tasks, investigate what information your customers value most when making their decisions. Present it clearly and design tools to help them compare items and choose the right one. These tools can reduce their cognitive load.

Understand Tasks: Organized and Comprehensive Information, Fewer Ads

Here are the top comments provided by those reporting Understand tasks:

  1. Clear and well-organized content

    When people are engaged in online research activities, they are already working hard to gain knowledge on an unfamiliar topic; the content and its formatting should facilitate learning.

    A respondent who tried to understand why she had difficulty swallowing commented: “I had to comb through paragraphs of info. Perhaps it would be better with an itemized list with bullet points and fewer paragraphs.”

    Creating subheadings that promote effective scanning and chunking the content can help people better understand the page structure, locate the information they want, and consume the content.

    During one of our usability tests, users appreciated the Humira site for the clear presentation of the drug’s side effects. The subtitle What should I watch for AFTER starting HUMIRA? was red and bolded, contrasting with the main body of the content. Also, the keywords of each side effect were bolded and easy for the eyes to locate.

    Humira.com. Users praised its clarity and well-organized page layout, with bullet points and bolded titles and keywords, which made the content easy to understand.

    The clarity of information was mentioned by significantly more people performing Understand tasks than by users engaged in the other two types of tasks (for both comparisons, p < 0.05). This finding is likely due to the importance of reading in Understand tasks: clear, well-organized content helps these users the most.

  2. Centralized information

    Though Understand tasks often involve combining information from multiple sites, users still wish that all the information were on one site.

    For example, a respondent who searched for lists of unhealthy foods commented: “I want one page with all info instead of needing to click several times for additional information.”

    When a usability-testing participant wanted to learn more about what to do in the Summer Palace in Beijing, she visited three different websites to gain a full understanding: a long article introducing different tourist spots and their historical background, a short article with pictures and names of the spots together, and a map website. She said, “I would like one website to have all the information I looked up together.”

    The centralization of information was requested by users performing both Acquire and Understand tasks: providing related information on your site can dramatically reduce user effort, no matter how simple or difficult the task.

  3. Fewer ads

    Complaining about ads was a theme across all types of tasks, but users performing Understand tasks specifically called them out as distracting. For example, a user doing research for her school project noted: “The ads were a bit distracting from the important information I was trying to read and understand.”

    To avoid annoying your users with ads, put them in expected places and provide high-quality content before showing ads.

    During one of our usability sessions, a participant was asked to research how to compost at home. On the EarthEasy website, the landing page showed a pretty background picture with two buttons, Read Guide and Shop Products. She hesitated over Read Guide for several seconds and ended up leaving the site without scrolling or clicking either button. “I am skeptical about it,” she said, “Shop products and Read guide…I don’t want to read an ad. Give me what I need, and then an ad is okay.” In contrast, she enjoyed reading another article on Better Homes & Gardens and didn’t complain about ads at all, because that page placed the ads in the right sidebar and only offered links to promoted items in context.

    Eartheasy.com: The lack of visible useful content made a user think that the content on this site was all promotional.
    Bhg.com. This informative article placed the ads in expected positions (right rail and top of the page) and only linked to promoted items within the content. A user read it all and believed it very helpful, without complaining about the ads.

Conclusion

People seek information online all the time. However, our research showed that they devoted varying degrees of attention to different types of info-seeking tasks. Easier fact-finding tasks, which generally involve less time, were less memorable. In contrast, research-heavy tasks, including Compare/Choose and Understand, required more effort: more thinking time before clicking, more characters in the search query, and more query reformulations. These tasks were more likely to be recalled after a long time, and users were less satisfied with how they were supported during them. Thus, it’s crucial to support not only the basic, frequent daily uses of your site, but also the complex tasks: even though they happen less often, their success rate can shape users’ impressions of your site more profoundly.

Our study also revealed that the top user expectations varied by task type. For Acquire tasks, the most common expectations were getting facts fast and directly, in easy-to-understand language. For Compare/Choose tasks, users appreciated information from multiple perspectives, along with support tools. For Understand tasks, organized and comprehensive information was valued most.

This doesn’t mean that you don’t need to consider the expectations of fact-finding tasks if you are designing for research-based ones: complex tasks rarely fall purely into a single task type. Research tasks are often built upon multiple simple tasks — for example, comparing and choosing from several brands of routers can be divided into “finding facts about models A and B,” “synthesizing the information,” “comparing and weighing the pros and cons,” and “making the decision.” Thus, the expectations for simple tasks should be viewed as the baseline for complex ones. Furthermore, designers should investigate the user journey of complex tasks to facilitate transitions between simple subtasks and support possible back-and-forth.