A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, and boy howdy, are there a lot of them! In today’s installment, we’ll focus on illusions: the illusion of asymmetric insight, the illusion of control, illusory superiority, as well as impact bias, information bias, ingroup bias, and irrational escalation.
The illustration is by Baker & Hill from my new book Creative Project Management.
Illusion of asymmetric insight
Think about the people you know. How well do you know them? How much insight do you have into the way they think, their strengths and weaknesses, and the reasons they behave the way they do?
Now think about how well they know and understand you. Do they understand you as well as you understand them, or are their insights about you more likely to be wrong, shallow, or incomplete?
The illusion of asymmetric insight is the common belief that we understand other people better than they understand us. It happens both with individuals and with groups — do you think you understand, say, the culture of the Middle East better than Middle Easterners understand the culture of the United States?
A 2001 report by Pronin et al. in the Journal of Personality and Social Psychology described six studies confirming how widespread the illusion of asymmetric insight is. As with most cognitive biases, your best defense is self-awareness: be more modest about your insight into others, and assume you’re more transparent to them than you think you are.
The Johari Window is a good tool to help you. It’s a model for mapping how well you understand yourself, how well other people understand you, and how to become more self-aware. By taking the test (and asking others to take it about you as well), you’ll learn about your four selves: the public arena (known to you and to others), the blind spot (known to others but not to you), the façade (known to you but not to others), and the unknown self (hidden from all).
Related to the illusion of asymmetric insight is the illusion of transparency: people overestimate the degree to which their personal mental state is known by others (“Can’t you tell I’m really upset?”). The effect tends to be most pronounced in close personal relationships.
Illusion of control
When rolling dice in craps (or, presumably, in role-playing games), studies have shown that people tend to throw harder when they want high numbers and throw softer for low ones. That’s the illusion of control, the tendency of people to believe they can control (or at least influence) outcomes even when it’s clear they cannot.
Like a lot of cognitive biases, this one has advantages as well as disadvantages. It’s been argued that the illusion of control is adaptive because it tends to increase motivation and persistence; in fact, the bias is found more often in people with normal mental health than in those suffering from depression.
But it’s not all good news. In a 2005 study of stock traders, those prone to a high illusion of control performed significantly worse at analysis and risk management, and earned less as well.
Illusory superiority

“The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt,” wrote Bertrand Russell. The cognitive bias he describes is known as illusory superiority.
In a 1981 survey, students were asked to compare their driving safety and skill to other students in the same experiment. For driving skill, 93% of the students put themselves in the top 50%. For safety, 88% put themselves in the top 50%.
In intelligence, illusory superiority shows up in the Downing effect: the tendency of people with below-average IQs to overestimate their intelligence, and of people with above-average IQs to underestimate theirs.
Incompetence and stupidity also play into the Dunning-Kruger effect, a series of demonstrations that incompetent people tend to overestimate their own skill, fail to recognize genuine skill in others, and fail to recognize their own inadequacies. As with the Downing effect, people of much higher competence are perversely much more self-critical.
The danger, alas, is that people tend to judge the competence of others by their degree of self-esteem, leading to situations in which incompetence can actually increase someone’s ability to get a good job.
Impact bias

Imagine that you’ve just learned your lotto ticket is the big winner, and you’ve just become a multi-millionaire. How would you feel, and how long would you feel that way?
Now imagine that instead of winning the lotto, you’ve just lost your job. How would you feel, and how long would you feel that way?
According to studies of impact bias, you’ve probably overestimated both how long you’d stay elated over the lotto win and how long it would take you to recover emotionally from being laid off.
People tend to have a basic “happiness set-point.” Although good and bad events can dramatically change your level of happiness, most people tend to return fairly rapidly to their emotional base states.
"We need more study before we make a decision." Well, sometimes we do, but the big question is what good the information will do us. In an experiment involving medical students and fictitious diseases, the students looked at a diagnostic problem:
A patient’s presenting symptoms and history suggest a diagnosis of globoma, with about an 80% probability. If it isn’t globoma, it’s either popitis or flapemia. Each disease has its own treatment, which is ineffective against the other two. A test called the ET scan would certainly yield a positive result if the patient has popitis, and a negative result if she has flapemia. If the patient has globoma, a positive and a negative result are equally likely.
If the ET scan was the only test you could do, should you do it? Why or why not?
The majority of students opted for the ET scan, even when they were told it was costly, but the truth is that the result of the scan doesn’t matter. Here’s why:
Out of 100 such patients, 80 will have globoma. Because a positive and a negative scan are equally likely with globoma, a positive result leaves 40 globoma patients against at most 20 with popitis, and a negative result leaves 40 globoma patients against at most 20 with flapemia. Either way, globoma remains by far the most likely diagnosis, so the scan result cannot change the treatment decision.
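If you want to check the arithmetic, here’s a quick Bayes calculation (a sketch of my own, not from the original study; the problem doesn’t say how the remaining 20% splits between popitis and flapemia, so I assume 10% each):

```python
# Bayes check for the ET scan problem: globoma stays the most likely
# diagnosis whether the scan comes back positive or negative.
# Assumed priors: 80% globoma; the remaining 20% split 10/10 between
# popitis and flapemia (the split isn't given in the original problem).
priors = {"globoma": 0.80, "popitis": 0.10, "flapemia": 0.10}

# Likelihood of a POSITIVE scan for each disease: certain for popitis,
# impossible for flapemia, a coin flip for globoma.
p_pos = {"globoma": 0.5, "popitis": 1.0, "flapemia": 0.0}

def posterior(result):
    """Posterior probability of each disease given the scan result."""
    like = p_pos if result == "positive" else {d: 1 - p for d, p in p_pos.items()}
    unnorm = {d: priors[d] * like[d] for d in priors}
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

for result in ("positive", "negative"):
    post = posterior(result)
    best = max(post, key=post.get)
    print(result, {d: round(p, 2) for d, p in post.items()}, "->", best)
```

Run it and globoma comes out the most likely diagnosis whichever way the scan goes, which is exactly why the test adds nothing.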
More information doesn’t always lead to a better decision. If the information isn’t relevant, more of it doesn’t help.
Ingroup bias

Most of us recognize the tendency to give preferential treatment to people we perceive to be members of our own groups. What’s interesting is the extent to which ingroup bias works even when the groups that link us are random and arbitrary: having the same birthday, having the same last digit in a Social Security number, or being assigned to a group based on the same flip of a coin.
Ingroup bias is one of the root causes of racism and other forms of prejudice, so it’s dangerous indeed. However, as with most cognitive biases, there’s an upside as well. None of us belongs to just a single group (black/white, American/Chinese, rich/poor); we belong to many different ones. That means we’re almost always able to define each other as members of at least one shared ingroup, and that builds connections.
Irrational escalation

There’s the old joke about the man who accidentally dropped a quarter in the outhouse, and immediately took out a $20 bill and threw it down the hole as well. When asked why, he replied, “If I gotta go in after it, it had better be worth my while.”
An example of irrational escalation is the dollar auction experiment. The setup involves an auctioneer who volunteers to auction off a dollar bill with the following rules: the dollar goes to the highest bidder, who pays the amount he bids, and the second-highest bidder must also pay his own highest bid, but gets nothing in return.
Suppose that the game begins with one of the players bidding 1 cent, hoping to make a 99 cent profit. He or she will quickly be outbid by another player bidding 2 cents, as a 98 cent profit is still desirable. Three cents, same thing. And so the bidding goes forward.
As soon as the bidding reaches 99 cents, there’s a problem. The player who bid 98 cents must now either absorb that 98-cent loss or bid $1.00 for a profit of exactly zero. If he bids $1.00, the 99-cent bidder faces the same kind of choice: lose 99 cents, or bid $1.01 and lose only one cent. From that point on, the two players bid the price up well beyond a dollar, and neither stands to profit.
The dollar auction is often used as a simple illustration of the irrational escalation of commitment. Even though both players now stand to lose money, they keep raising: at each step the difference between the winner’s and the loser’s loss is negligible, and each is spurred on by what he has already invested.
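The escalation logic is easy to sketch as a toy simulation (my own illustration; the one-cent raises and the 500-cent bankroll are assumptions, not part of the original experiment). Each player keeps bidding as long as topping the current bid would lose less than walking away from the money already committed:

```python
# Toy simulation of the dollar auction, in cents.
# Assumptions (mine, not from the original): one-cent raises and a
# fixed per-player bankroll that caps the escalation.
PRIZE = 100      # value of the dollar bill, in cents
BANKROLL = 500   # assumed budget per player, in cents

bids = [0, 0]    # highest bid so far by player 0 and player 1
high = 0         # current winning bid
turn = 0         # whose turn it is to respond

while True:
    next_bid = high + 1
    loss_if_quit = bids[turn]          # sunk cost: second place pays its own bid
    loss_if_bid = next_bid - PRIZE     # net loss if the new bid ends up winning
    if next_bid > BANKROLL or loss_if_bid >= loss_if_quit:
        break                          # quit only if bidding is no better
    bids[turn] = high = next_bid
    turn = 1 - turn

print(f"Bidding stopped at {high} cents for a {PRIZE}-cent bill")
```

Under these rules the marginal comparison always favors spending one more cent, so the auction stops only when a player hits the bankroll cap, five times the value of the prize.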
Previous installments in this series:

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect
Part 2 — Base rate fallacy, congruence bias, experimenter’s bias
Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect
Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias
Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia
Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias
Part 7 — False consensus effect, false memory, Forer effect, framing, fundamental attribution error
Part 8 — Gambler’s fallacy, halo effect
Part 9 — Hawthorne effect, herd instinct, hindsight bias, hyperbolic discounting