
Tuesday, March 23, 2010

I Know You Are But What Am I? (Part 10 of Cognitive Biases)

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, and boy howdy, are there a lot of them! In today’s installment, we’ll focus on illusions: the illusion of asymmetric insight, the illusion of control, illusory superiority, as well as impact bias, information bias, ingroup bias, and irrational escalation.

The illustration is by Baker & Hill from my new book Creative Project Management.

Illusion of asymmetric insight

Think about the people you know. How well do you know them? How much insight do you have into the way they think, their strengths and weaknesses, and the reasons they behave the way they do?

Now think about how well they know and understand you. Do they understand you as well as you understand them, or are their insights about you more likely to be wrong, shallow, or incomplete?

The illusion of asymmetric insight is the common belief that we understand other people better than they understand us. It happens both with individuals and with groups — do you think you understand, say, the culture of the Middle East better than Middle Easterners understand the culture of the United States?

A 2001 report (Pronin et al.) in the Journal of Personality and Social Psychology on the illusion of asymmetric insight cited six different studies confirming this widespread bias. As with most cognitive biases, your best strategy is self-awareness: be more modest about how well you know others, and give them more credit for how well they know you.

The Johari Window is a good tool to help you. It’s a model for mapping how well you understand yourself, how well other people understand you, and how to become more self-aware. By taking the test (and asking others to take your test as well), you’ll learn about your four selves: a public arena (known to you and to others), a blind spot (known to others but not to you), a façade (known to you but not to others), and an unknown self (hidden from everyone).

Related to the illusion of asymmetric insight is the illusion of transparency, the extent to which people overestimate the degree to which their personal mental state is known by others: “Can’t you tell I’m really upset?” This tends to be most pronounced between people in a personal relationship.

Illusion of control

When rolling dice in craps (or, presumably, in role-playing games), studies have shown that people tend to throw harder when they want high numbers and throw softer for low ones. That’s the illusion of control, the tendency of people to believe they can control (or at least influence) outcomes even when it’s clear they cannot.

Like a lot of cognitive biases, this particular one has advantages as well as disadvantages. It’s been argued that the illusion of control is an adaptive behavior because it tends to increase motivation and persistence, and in fact the illusion of control bias is found more commonly in people with normal mental health than in those suffering from depression.

But it’s not all good news. In a 2005 study of stock traders, those who showed a strong illusion of control performed significantly worse on measures of analysis, risk management, and profitability, and earned less as well.

Illusory superiority

“The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt,” wrote Bertrand Russell. The cognitive bias he describes is known as illusory superiority.

In a 1981 survey, students were asked to compare their driving safety and skill to other students in the same experiment. For driving skill, 93% of the students put themselves in the top 50%. For safety, 88% put themselves in the top 50%.

In intelligence, illusory superiority shows up in the Downing effect: the tendency of people with below-average IQs to overestimate their intelligence, and of people with above-average IQs to underestimate theirs.

Incompetence and stupidity also play into the Dunning-Kruger effect, a series of demonstrations that incompetent people tend to overestimate their own skill, fail to recognize genuine skill in others, and fail to recognize their own inadequacies. As in the Downing effect, people at much higher levels of competence are perversely much more self-critical.

The danger, alas, is that people tend to judge the competence of others by their degree of self-esteem, leading to situations in which incompetence can actually increase someone’s ability to get a good job.

Impact bias

Imagine that you’ve just learned your lotto ticket is the big winner, and you’ve just become a multi-millionaire. How would you feel, and how long would you feel that way?

Now imagine that instead of winning the lotto, you’ve just lost your job. How would you feel, and how long would you feel that way?

According to studies of impact bias, you’ve probably overestimated how long you’d be elated at the lotto win, and how long it’ll take you to recover emotionally from getting laid off.

People tend to have a basic “happiness set-point.” Although good and bad events can dramatically change your level of happiness, most people tend to return fairly rapidly to their emotional base states.

Information bias

"We need more study before we make a decision." Well, sometimes we do, but the big question is what good the information will do us. In an experiment involving medical students and fictitious diseases, the students looked at a diagnostic problem:

A patient’s presenting symptoms and history suggest a diagnosis of globoma, with about an 80% probability. If it isn’t globoma, it’s either popitis or flapemia. Each disease has its own treatment, which is ineffective against the other two diseases. A test called the ET scan would certainly yield a positive result if the patient had popitis, and a negative result if she had flapemia. If the patient had globoma, a positive and a negative result would be equally likely.

If the ET scan was the only test you could do, should you do it? Why or why not?

The majority of students opted for the ET scan, even when they were told it was costly, but the truth is that the result of the scan doesn’t matter. Here’s why:

Out of 100 such patients, about 80 will have globoma. Since a positive and a negative ET result are equally likely for globoma, roughly 40 of those 80 will test positive and 40 will test negative. The remaining 20 patients have popitis or flapemia between them.

Whichever way the scan comes out, then, globoma is still by far the most likely diagnosis, so the sensible course of treatment (treat for globoma) is the same. A test that can’t change the decision isn’t worth running.
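Here’s a minimal sketch of that arithmetic in Python. The 10%/10% split between popitis and flapemia is an assumption of mine (the problem doesn’t specify it), and globoma comes out on top for any split.

```python
# Posterior probabilities for the ET-scan problem, via Bayes' rule.
# The 10%/10% split between popitis and flapemia is an assumption.

priors = {"globoma": 0.80, "popitis": 0.10, "flapemia": 0.10}

# P(positive ET scan | disease), from the problem statement: certain for
# popitis, impossible for flapemia, 50/50 for globoma.
p_positive = {"globoma": 0.5, "popitis": 1.0, "flapemia": 0.0}

for result in ("positive", "negative"):
    joint = {
        d: (p_positive[d] if result == "positive" else 1.0 - p_positive[d]) * priors[d]
        for d in priors
    }
    total = sum(joint.values())
    posterior = {d: joint[d] / total for d in joint}
    best = max(posterior, key=posterior.get)
    print(result, {d: round(p, 2) for d, p in posterior.items()}, "->", best)

# Both branches print globoma as the most likely diagnosis (0.8 either way),
# so the scan result cannot change the treatment decision.
```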

More information doesn’t always make a better decision. If the information isn’t relevant, more of it doesn’t help.

Ingroup bias

Most of us recognize the tendency to give preferential treatment to people we perceive to be members of our own groups. What’s interesting is the extent to which ingroup bias works even when the groups that link us are random and arbitrary: having the same birthday, having the same last digit in a Social Security number, or being assigned to a group based on the same flip of a coin.

Ingroup bias is one of the root causes of racism and other forms of prejudice, so it’s dangerous indeed. However, as with most cognitive biases, there’s an upside as well. We’re not part of a single group (black/white, American/Chinese, rich/poor) but of many different ones. That means we’re almost always able to define each other as members of at least one of our ingroups. That builds connections.

Irrational escalation

There’s the old joke about the man who accidentally dropped a quarter in the outhouse, and immediately took out a $20 bill and threw it down the hole as well. When asked why, he replied, “If I gotta go in after it, it had better be worth my while.”

An example of irrational escalation is the dollar auction experiment. The setup involves an auctioneer who volunteers to auction off a dollar bill with the following rule: the dollar goes to the highest bidder, who pays the amount he bids. The second-highest bidder also must pay the highest amount that he bid, but gets nothing in return.

Suppose that the game begins with one of the players bidding 1 cent, hoping to make a 99 cent profit. He or she will quickly be outbid by another player bidding 2 cents, as a 98 cent profit is still desirable. Three cents, same thing. And so the bidding goes forward.

As soon as the bidding reaches 99 cents, there's a problem. If the other player bid 98 cents, he or she now has the choice of losing the 98 cents or bidding $1.00, for a profit of zero. Now the other player is faced with a choice of either losing 99 cents or bidding $1.01, and only losing one cent. After this point the two players continue to bid the value up well beyond the dollar, and neither stands to profit.

The dollar auction is often used as a simple illustration of the irrational escalation of commitment. By the end of the game, both players stand to lose money, yet they keep bidding the price up well past the value of the dollar, to the point where the difference between the winner’s loss and the loser’s loss is negligible; each is fueled to bid further by their past investment.
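Here’s a minimal sketch of that escalation in Python. It assumes two myopic bidders who raise whenever conceding would cost more than winning at the next bid, plus an arbitrary budget at which they finally walk away; the prize, step, and budget figures are just for illustration.

```python
# A minimal sketch of dollar-auction escalation between two myopic bidders.
# PRIZE, STEP, and BUDGET are illustrative numbers, not from the post.

PRIZE = 100    # the dollar bill, in cents
STEP = 1       # minimum raise
BUDGET = 300   # each bidder's walk-away point

bids = [0, 1]  # player 1 has opened the bidding at 1 cent
turn = 0       # player 0 decides next

while True:
    new_bid = max(bids) + STEP
    loss_if_concede = bids[turn]              # pay my standing bid, get nothing
    loss_if_raise_and_win = new_bid - PRIZE   # negative means a profit
    if new_bid > BUDGET or loss_if_raise_and_win >= loss_if_concede:
        break                                 # conceding is no worse than raising
    bids[turn] = new_bid
    turn = 1 - turn

print(f"Winning bid: {max(bids)} cents; losing bid: {min(bids)} cents")
# Both bids end up far above the 100-cent prize: the winner and the loser
# both lose money, each driven on by what they have already sunk in.
```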

Previous Installments

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect.
Part 2 — Base rate fallacy, congruence bias, experimenter’s bias
Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect
Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias
Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia
Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias
Part 7 — False consensus effect, false memory, Forer effect, framing, fundamental attribution error
Part 8 — Gambler’s fallacy, halo effect
Part 9 — Hawthorne effect, herd instinct, hindsight bias, hyperbolic discounting

Monday, January 4, 2010

Risk Management, Cognitive Bias, and the Global Warming Debate

The debate on global warming tends to revolve completely around the science. Is it good? Is it bad? Is it meaningful? Is it corrupt? Everyone has an opinion on the quality of the science, and once those opinions are formed, they’re almost impossible to shake.

A wide variety of potential cognitive biases complicate the picture. Notice there’s enough here for everybody — no one’s being singled out.

Base rate fallacy — ignoring statistical data in favor of particulars
Confirmation bias — interpreting information to confirm your preconceptions
Experimenter’s bias — with about sixty subsets
Focusing effect — putting too much emphasis on a single aspect of a situation or event
Framing — viewing through a perspective or approach that is too narrow
Hyperbolic discounting — the preference for more immediate payoffs over the long term
Irrational escalation — making irrational decisions based on rational decisions in the past, or to justify actions already taken
Information bias — seeking more information even when it cannot affect action or decision

…the list goes on. Recognize some of these biases? If you’re like most of us, you recognize them in the other side more than you see them in yourself or those who agree with you.

Part of the reason cognitive bias gets so much traction here is that the question isn’t really clear. We’re all arguing about the science, though few of us are truly entitled to an educated opinion on the subject.

But what’s the question?

It’s not about whether a scientific opinion is correct or incorrect. That sort of thing only interests specialists. No, the question has to do with what (if anything) we should do about it, based on the potential costs and consequences.

In other words, it’s a question of risk management. And to the extent that it’s a question of risk management, it’s phrased wrong.

A risk, as you’ll remember, is a future event that has some probability of happening and that, if it does happen, will have a meaningful impact on your situation. If the impact is negative, it’s a threat. If the impact is positive, it’s an opportunity.

Risks, like Gaul, can be divided into three parts. The first part is probability. How likely is it that the risk will happen?

The second part is impact. If the risk should happen, what would be its effects?

Those two parts combine in the formula R = P x I to calculate a risk score, the value of the risk.

We care about the value of the risk because that helps us make a rational decision about the third element: the cost of reducing or eliminating the negative risk, or the cost of obtaining or exploiting the positive risk.
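Here’s a minimal sketch of that arithmetic; the probabilities and dollar figures are made up for illustration and have nothing to do with the climate numbers below.

```python
# A minimal sketch of the risk-score formula R = P x I.
# The probabilities and dollar amounts are illustrative assumptions.

def risk_score(probability: float, impact: float) -> float:
    """Value of a risk: the probability of the event times the size of its impact."""
    return probability * impact

# A 25% chance of a $4,000 loss carries a risk score of $1,000 (a threat)...
print(risk_score(0.25, 4_000))    # 1000.0
# ...and a 10% chance of a $50,000 gain carries a risk score of $5,000 (an opportunity).
print(risk_score(0.10, 50_000))   # 5000.0
```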

Probability

The argument about the science of climate change is at root an argument about probability. The process of science involves collecting data, discovering patterns in that data, and developing and testing hypotheses and theories about that data. Over time, the process of peer review creates a consensus in the scientific community, and at any moment in time, that’s the state of scientific knowledge.

Let’s sidestep the discussion about whether the current scientific consensus is accurate or inaccurate, and merely assess how our own feelings and opinions influence our judgment of probability. Taking the legal standards of proof as a guide, we might fall somewhere on the following spectrum. For a rough calculation, I’ve put in some percentages.

Degree of Belief (Probability You Think It's True)
True beyond any doubt (99+%)
True beyond a reasonable doubt (95%)
True by the preponderance of the evidence (75%)
Unable to tell (50%)
False by the preponderance of the evidence (25%)
False beyond reasonable doubt (5%)
False beyond any doubt (<1%)

Pick the statement that matches what you believe about the science, along with the corresponding figure for how likely you think it is that the threat is real. If you don't like the choices, add one of your own and choose your own probability number.

Notice that the evidence won't stand still. Over time, science will inevitably get better, regardless of your perspective. Either the evidence of catastrophic global climate change will mount so high no sane person can deny it, or global warming will become the Comet Kohoutek of crises, a non-event. Or maybe something in between.

The problem is that by the time the facts become incontrovertible, the moment for decision will have passed. If we guess wrong, there are two possibilities: (a) we will be in a significantly worse position to deal with the resulting impact, or (b) we will have wasted significant resources.

Impact

This leads us to the second item, the question of impact. Impact is the effect of the threat or opportunity if it happens — even if you believe the chance is remote at best. So we have to set probability aside temporarily. We’ll come back to it in a moment.

In addition to arguments about how likely it is that the scientific consensus on global warming is correct, there is a range of opinion about what that would mean in practical terms: a range of impact. I've specified a set of potential impact levels and assigned a rough cost to each. Remember, the issue isn't whether these outcomes are going to happen; they're simply descriptions of the levels of impact that different parties suggest are possible.

So choose from the list below. What, in your opinion, is the worst possible potential outcome if global warming happens?

  • Catastrophic. Global warming effects will kill tens or hundreds of millions of people directly and indirectly, wipe out tens of thousands of species, and be an economic and social catastrophe to those who survive. Repair or rebuilding may or may not be possible. (Cost = $Quadrillions)
  • Serious. Major weather events, such as hurricanes and tsunamis, will be more prevalent, tens and hundreds of thousands of people will die, and economies will suffer. (Cost = $Trillions)
  • Moderate. Managing environmental issues will be a consuming concern, but better management and improved technology will make this a background cost. (Cost = $Billions)
  • Minor. Insignificant costs. (Cost < $Millions)
Notice the impact could also be positive.

Value of the Risk

Just because you aren't convinced the evidence in favor of a risk is certain doesn't mean you don't act on it. We take everyday precautions to avoid low probability or highly uncertain risks with potentially high impact all the time — every time we drive on a freeway, for example. But there's a limit. How does the value of the risk compare to the cost of mitigation?

The value of the risk, as we’ve noted, is the probability times the impact. From our earlier work, we can construct this table. The risk score in each case is what you should reasonably be willing to spend if necessary to mitigate the degree of risk you personally believe is present.

Impact level    95%                     75%                   50%                25%              5%
Catastrophic    $Quadrillions           Low quadrillions      $1 quadrillion     High trillions   Low trillions
Serious         Up to $1 quadrillion    $750 trillion         $500 trillion      $250 trillion    $50 trillion
Moderate        Up to $1 trillion       $750 billion          $500 billion       $250 billion     $50 billion
Minor           Possibly a few billion  Less than $1 billion  $500 million       $250 million     Low millions
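The same grid can be generated mechanically from R = P x I. The single-number impacts in this sketch are placeholders I chose so the products land near the figures above; the table itself deals in ranges, not point values.

```python
# Reproduce the risk-value grid from R = P x I, using assumed point impacts.

confidence_levels = [0.95, 0.75, 0.50, 0.25, 0.05]

impacts = {                  # assumed worst-case cost of each impact level
    "Catastrophic": 2e15,    # ~$2 quadrillion
    "Serious":      1e15,    # ~$1 quadrillion
    "Moderate":     1e12,    # ~$1 trillion
    "Minor":        1e9,     # ~$1 billion
}

for level, impact in impacts.items():
    row = ", ".join(f"{p:.0%}: ${p * impact:,.0f}" for p in confidence_levels)
    print(f"{level:<14}{row}")
```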

Cost of Mitigation

The value of the risk is what you’re willing to spend if necessary. Depending on how you assessed probability and impact, you ended up with some amount of money (perhaps $0) that's appropriate as a maximum to spend on the risk.

Of course, you need to compare that to the cost of mitigating or eliminating the risk. Sometimes, it’s not worth it. If I offered to save you from a $1,000 risk in exchange for $2,000, it’s not much of a deal. In general, if the cost of getting rid of the risk exceeds the cost of living with it, you’re better off living with it.

On the other hand, if I can save you from a $1,000 risk (say, a 25% chance of losing $4,000) for only $500, that's a pretty good deal. If the risk happens, you've saved $3,500. But if the risk doesn't happen, you're still out $500.

It's true that not all costs of a risk (or costs of a risk mitigation) can be easily translated into dollar terms — or even should be. That doesn’t change the basic principle, though: the cost of dealing with the risk has to be less than the cost of living with the risk.

There’s an important qualification when it comes to risk mitigation. Some risks you can get rid of altogether if you’re willing to pay the price. Other risks you can reduce, but not eliminate. You can lower the probability of the event occurring, or you can lower the impact if it should occur.

That’s not a bad thing, mind you, but you have to take the residual risk into account when deciding whether the strategy is worth it. What the mitigation buys you is the difference between the value of the original risk and the value of the residual risk, and that’s the number to weigh against the mitigation’s cost.
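Putting the mitigation cost and the residual risk together, here’s a minimal sketch; every probability and dollar amount in it is an illustrative assumption.

```python
# Decide whether a mitigation is worth its cost, allowing for residual risk.
# All figures here are illustrative assumptions.

def risk_score(probability: float, impact: float) -> float:
    return probability * impact

original = risk_score(0.25, 4_000)   # 25% chance of losing $4,000 -> $1,000 at stake
residual = risk_score(0.05, 4_000)   # suppose mitigation cuts the probability to 5% -> $200
mitigation_cost = 500                # what the mitigation itself costs

risk_removed = original - residual   # $800 of risk value eliminated
worth_doing = risk_removed > mitigation_cost

print(f"Risk removed: ${risk_removed:,.0f}; cost: ${mitigation_cost:,.0f}; "
      f"worth doing: {worth_doing}")
```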

The Right Question

To have a reasoned discussion on the subject of global warming, you have to figure out where you are on five issues, not merely one.

1. How correct is the scientific consensus on global warming?
2. What is the impact of global warming if it should occur?
3. What is the value of the risk (probability times impact)?
4. What is the cost of mitigating or eliminating the risk, and how much residual risk would remain?
5. In balance, what level of action on global warming (if any) is warranted?

To change someone’s opinion, you have to change that person’s evaluation of at least one of these issues.

As people on all sides have found, it’s nearly impossible to change anyone’s evaluation of the quality of the science, which is our probability benchmark. There’s often more consensus about what global warming might mean if it happens, which is why it’s so important to separate the discussion of probability from the discussion of impact.

But the real opportunity has to do with the issue of cost. The best current framing of the debate comes from the argument that dealing with global warming and environmental issues can be relatively low in cost, or ideally profitable.

If the cost to deal with global warming is low enough, it's a good idea even for those who think the probability is low.