I recently adopted a kitten and was faced with the decision of whether to purchase pet insurance for my little fur baby. Because the cat couldn’t provide any family health history, I saw the likelihood of her having an issue as an unknown: there was no way to predict when or if something could pop up and be a major expense. This uncertainty made the decision rather difficult: I could save my insurance money and hope for my cat to remain perfectly healthy, but if a health issue were to appear I would possibly need to spend exorbitant amounts of money for treatment. After much deliberation, my cat now has her own insurance coverage.

Purchasing an insurance plan is an excellent example of prospect theory at work.

Definition: Prospect theory describes how people choose between different options (or prospects) and how they estimate (often in a biased or incorrect way) the perceived likelihood of each of these options.

Prospect theory was proposed by psychologists Daniel Kahneman and Amos Tversky in 1979, and in 2002 Kahneman was awarded the Nobel Prize in economics for it. (Sadly, Tversky had died in 1996, before the prize was awarded.)

One of the biases that people rely on when they make decisions is loss aversion: as in the insurance example above, they tend to overweight small probabilities to guard against losses. Even though the likelihood of a costly event may be minuscule, we would rather agree to a smaller, sure loss (in the form of an insurance payment) than risk a large expense. The perceived likelihood of a major health problem is greater than the actual probability of such an event occurring.
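To put rough numbers on this (the figures here are purely illustrative, not actual actuarial data): suppose there is a 1% chance per year of a $5000 vet emergency. The expected annual loss is 0.01 × $5000 = $50, yet many pet owners (myself included) will gladly pay a premium several times that amount, because the decision is driven by the overweighted 1% chance rather than by the $50 expected loss.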

We would all like to believe that we are logical decision makers. In the field of user experience, we often talk about how users weigh the expected utility of different alternatives to determine what action to take or where to go next. However, when it comes to making decisions such as whether to purchase something, make a donation, or pick a level of a service, people are highly susceptible to cognitive biases, and often don’t make the logical choice.

For example, what would you choose: getting $900 for sure, or taking a 90% chance of winning $1000 (and a 10% chance of winning $0)? Most people avoid the risk and take the $900, although the expected outcome is the same in both cases. However, if I asked you to choose between losing $900 for sure and taking a 90% chance of losing $1000, most of you would probably prefer the second option (the 90% chance of losing $1000), thus engaging in risk-seeking behavior in the hope of avoiding the loss.

Decision diagram of scenario involving sure gains
When dealing with gains, people are risk averse and will choose the sure gain (denoted by the red line) over a riskier prospect, even though with the risk there is a possibility of gaining a larger reward. Note also that the overall expected value (or outcome) of each choice is equal.
Decision diagram of scenario involving sure losses
Losses are treated in the opposite manner as gains. When aiming to avoid a loss, people become risk seeking and take the gamble over a sure loss in the hope of paying nothing. Again, both options have equal expected values.

These types of behaviors cannot be easily explained by the expected-utility approach. In both of these situations, the expected value of the two choices is the same (±$900): the probability of each outcome multiplied by its value. Yet people largely prefer one option over the other.
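To make the arithmetic concrete, here is a minimal sketch in Python (my illustration, not part of the original studies) that computes the expected value of each of the four prospects above; a purely rational decision maker would be indifferent within each pair:

    # Expected value of a prospect: each outcome weighted by its probability.
    def expected_value(prospect):
        """prospect is a list of (probability, outcome) pairs."""
        return sum(p * outcome for p, outcome in prospect)

    sure_gain  = [(1.0, 900)]              # get $900 for sure
    risky_gain = [(0.9, 1000), (0.1, 0)]   # 90% chance of winning $1000
    sure_loss  = [(1.0, -900)]             # lose $900 for sure
    risky_loss = [(0.9, -1000), (0.1, 0)]  # 90% chance of losing $1000

    print(expected_value(sure_gain), expected_value(risky_gain))  # 900.0 900.0
    print(expected_value(sure_loss), expected_value(risky_loss))  # -900.0 -900.0

Despite the identical expected values, most people pick the sure gain over the risky gain, but the risky loss over the sure loss.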

Prospect theory explains several of the biases that people rely on when they make such decisions:

  • Certainty
  • Isolation effect
  • Loss aversion

We discuss each of these biases in detail below.

Certainty

People tend to overweight options that are certain, and are risk averse for gains. We would rather get an assured, lesser win than take a chance at winning more (but also risk getting nothing at all). The opposite is true when dealing with certain losses: people engage in risk-seeking behavior to avoid a bigger loss.

To persuade users to take an action, consider using the certainty bias to your advantage: people would rather accept a small but certain reward than a mere chance at a larger gain. If you offer a reward to users who write a product review, for example, consider giving all reviewers a coupon for 10% off their next purchase. This coupon (which would only cost you money if they returned to purchase more items) would be more appealing and more effective than a $1000 sweepstakes, a reward that is large but highly unlikely to be won.
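As a back-of-the-envelope comparison (with made-up numbers): if 10,000 customers entered a $1000 sweepstakes, each entry would be worth an expected 10 cents, whereas a 10%-off coupon applied to a typical $50 repeat purchase is worth about $5, fifty times as much, and, crucially, it is guaranteed.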

Screenshot of email from Aveda.com asking to write a product review
The plea to write a review for a recent purchase from Aveda would be much stronger if a sure gain were highlighted instead of the sweepstakes. The subject line of the email actually did mention receiving a free sample with the next purchase in exchange for the review, but this offer was not stated in the main content of the email. Hence, I did not take the time to write the review.

This bias may also explain why people often remain loyal to a specific product, service, website, or other tool. We can either risk using something else that has a possibility of being better than our current method, or we can continue to use our tried-and-true tool.

Isolation Effect

The isolation effect refers to people’s tendency to disregard any elements that are common to both options, in an effort to simplify and focus on what differs.

Remembering all the details of each individual option creates too much of a cognitive load, so it only makes sense to focus on the differentiators. Discarding common elements lessens the burden of comparing alternatives, but can also lead to inconsistent choices depending on how alternatives are presented.

Daniel Kahneman and Amos Tversky presented participants with 2 scenarios. In both scenarios people were given an initial amount of money, and then had to choose between two alternatives.

Scenario 1: Participants started with $1000. They then could choose between:

  A. Winning $1000 with a 50% probability (and winning $0 with a 50% probability), or
  B. Getting another $500 for sure.

Scenario 2: Participants started with $2000. They then could choose between:

  C. Losing $1000 with a 50% probability (and losing $0 with a 50% probability), or
  D. Losing $500 for sure.

Because the initial amounts offset the difference in the options, the two scenarios were actually equivalent: choosing option B in Scenario 1 or option D in Scenario 2 left participants with the same final amount of money ($1500 for sure). (Options A and C are likewise equivalent: a 50–50 chance of ending with either $1000 or $2000.) However, people made opposite choices in the two scenarios: the majority chose the risk-averse option B in Scenario 1 and the risk-seeking option C in Scenario 2, gambling to avoid the sure loss.
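The equivalence is easy to verify. The small Python sketch below (again my illustration, not part of the experiment) computes the distribution of final wealth for each option, taking the initial gift into account:

    # Final wealth for each option: the initial gift plus each possible change,
    # with its probability. Sorting makes equal distributions compare equal.
    def final_wealth(start, prospect):
        """prospect is a list of (probability, change) pairs."""
        return sorted((p, start + change) for p, change in prospect)

    # Scenario 1: participants start with $1000
    option_a = final_wealth(1000, [(0.5, 1000), (0.5, 0)])   # the gamble
    option_b = final_wealth(1000, [(1.0, 500)])              # the sure gain

    # Scenario 2: participants start with $2000
    option_c = final_wealth(2000, [(0.5, -1000), (0.5, 0)])  # the gamble
    option_d = final_wealth(2000, [(1.0, -500)])             # the sure loss

    print(option_a == option_c)  # True: both are 50% $1000 / 50% $2000
    print(option_b == option_d)  # True: both are $1500 for sure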

Changing the framing of the problem (by adjusting the initial gift and the options accordingly) led people to a different decision.

Decision diagram of 2 scenarios framed as either a gain or a loss, with initial gift amounts
When presented with each decision, people make the opposite choice based on whether the options are framed as a gain or a loss. In Scenario 1, most choose option B over A, but in Scenario 2 the majority choose option C over D to try to avoid the loss. In these scenarios, people focus on only the choice between the 2 options, and overlook the initial gift amount because it is a shared factor across the two choices. However, when taking this initial gift difference into account, it can be seen that option A is equal to option C, and option B is equal to option D — only the framing has changed!

When creating content to persuade people to make a certain choice, consider how that choice is framed. People can respond very differently to a negatively framed message than they would to a positively framed one. Would you rather use a service that has a 95% satisfaction rate or one that has a 5% complaint rate? The negative formulation primes people to think of the possible “loss” or negative outcome and to act accordingly.

You should also consider how information is displayed, to help users identify common elements that can be safely disregarded so they can focus on key differentiators. For example, consider presenting a product configurator rather than asking users to choose among complete, preconfigured products. Seeing every possible final permutation of products or services may cause prospective customers to make a different decision (or just overwhelm them and lead them to abandon the task) than if they were presented with one or two products and then given the option to customize them by adding features.

Another way to support this simplification process in product comparisons is to present important information side-by-side rather than only through each individual product page. Comparison tables that highlight differences work well, as long as consistent levels of detail are included for all items. One of the most common activities on the web involves users comparing and choosing between multiple products or services, so adequately supporting this task is key.

Loss Aversion

Most people behave so as to minimize losses, because losses loom larger than gains, even when the probability of those losses is tiny. The pain of losing also explains why, when gambling, winning $100 and then losing $80 feels like a net loss, even though you are actually ahead by $20. People’s reaction to loss is more extreme than their reaction to gain. (The order here is also important: were we to first lose $80 and then win $100, the shifted reference point would make it feel like a net gain!)
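The article does not give a formula, but Tversky and Kahneman’s 1992 follow-up work on cumulative prospect theory estimated a value function that captures this asymmetry. The Python sketch below uses their median parameter estimates; applying it to the gambling example above is my own illustration:

    # Prospect-theory value function with Tversky & Kahneman’s (1992)
    # median parameter estimates: alpha = beta = 0.88 (diminishing
    # sensitivity) and lambda = 2.25 (losses hurt ~2.25x as much as
    # equally sized gains).
    ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

    def value(x):
        """Subjective value of a gain or loss x, relative to the reference point."""
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * (-x) ** BETA

    # Winning $100 and then losing $80 leaves you $20 ahead objectively,
    # but the combined subjective value is negative: it feels like a loss.
    print(value(100) + value(-80))  # about -49
    print(value(20))                # about 14: a single $20 gain feels fine

With λ ≈ 2.25, a loss must be offset by a gain of roughly two and a half times its size before the combined experience feels neutral.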

The information included on websites can play into people’s biases in order to persuade them to make a purchase or some other decision. For example, insurance websites frequently display a long list of unlikely, yet costly outcomes that we might encounter should we not buy insurance. This list primes us toward avoiding those large losses and makes us forget about the small but regular payments that we would make indefinitely to maintain insurance coverage.

Screenshot of pet insurance web page listing possible health expenses
Insurance companies often capitalize on our overweighting of unlikely (how many cats get brain cancer?) but costly events to persuade us to purchase coverage plans. Here, GoPetplan.com lists expensive vet bills in an effort to convince users to buy a cat insurance policy.

For products or services that do not inherently guard against large losses, we can convince users to take certain actions by understanding what their inhibitions may be. If we can uncover people’s concerns through user research, we can provide information that helps them overcome those fears or objections. For example, prospective users may be unwilling to begin an application process online because they fear that it will take too much time or require information that is not readily available. If a website is aware of this perception, it can attempt to modify it, for instance by stating how long the application takes on average and what pieces of information are needed to complete it.

Guard Users Against Negative Experiences

Prospect theory can also be extended to apply to people’s overall user experiences. We react more strongly to moments of loss, in the form of the frustration or confusion that may occur during an interaction with a website or an app. When everything works as expected, people consider that the norm. But once anything goes even slightly wrong, people balk and remember those bad experiences for much longer. This is why it is so important to test everything and work hard to fix any of these small stumbling blocks. We are designing for users who are hard to please.

Conclusion

Prospect theory explains several biases that people rely on when making decisions. Understanding these biases can help persuade people to take action.

For more on prospect theory and other biases in people’s decision making, consider our full-day training course on The Human Mind and Usability. For more about influencing principles and persuasion techniques for the web, consider our full-day Persuasive Web Design training course.

References:

Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291.
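Tversky, A., & Kahneman, D. (1992). Advances in Prospect Theory: Cumulative Representation of Uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323.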