Sunday, January 31, 2010

Always Carry a Bomb When You Fly! (Part 8 of Cognitive Biases)

Always carry a bomb when you fly. One bomb on a plane is very improbable, so according to the law of averages, the chance of a second bomb is almost impossible!

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, and boy howdy, are there a lot of them! This week we’re covering the gambler’s fallacy and the halo effect.

Click here and scroll to the bottom for a list of previous installments.

Gambler's fallacy

The gambler’s fallacy is the belief that if a random sequence deviates from its expected behavior, an opposite deviation is due to even things out. But as anyone who’s thought their number was “due” knows, it ain’t necessarily so.

If you’ve flipped 4 heads in a row, the gambler’s fallacy suggests that the next coin flip is more likely to be tails than heads. And indeed, the chance of flipping 5 heads in a row is only 1/32. But the chance of flipping 4 heads followed by 1 tail (or any other specific sequence of 5 heads and tails) is the same 1/32. Once four heads have been flipped, the next toss of the coin is the same 50/50 proposition as every other toss.
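You can check the arithmetic by enumerating every possible 5-flip sequence. A minimal Python sketch:

```python
from itertools import product
from fractions import Fraction

# Enumerate all 2**5 = 32 equally likely sequences of 5 fair coin flips.
sequences = list(product("HT", repeat=5))

# Exactly one sequence is all heads: probability 1/32.
all_heads = [s for s in sequences if s == ("H",) * 5]
print(Fraction(len(all_heads), len(sequences)))  # 1/32

# Among the sequences that start with four heads, the fifth flip is
# heads exactly half the time: the coin has no memory.
four_heads = [s for s in sequences if s[:4] == ("H",) * 4]
heads_next = [s for s in four_heads if s[4] == "H"]
print(Fraction(len(heads_next), len(four_heads)))  # 1/2
```

The same enumeration shows that every specific sequence, HHHHT included, occurs exactly once among the 32, which is why no sequence is ever “due.”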

So far, obvious enough, but there are two related fallacies and a couple of exceptions. The reverse gambler’s fallacy is the belief that if the universe is showing a predisposition toward heads, then heads are cosmologically more likely. Assuming the coin is fair (not one of those double-headed types), that’s equally false.

The inverse gambler’s fallacy (a term coined by philosopher Ian Hacking) is the fallacy of seeing an unlikely outcome of a random process and concluding that the process must therefore have occurred many times before. If you roll a pair of fair six-sided dice and get 12, it’s wrong to suppose there’s any support for the hypothesis that these dice have been rolled before.

The gambler’s fallacy doesn’t apply when the probability of different events is not independent. If you draw a card from a deck (let’s make it a 4), then the chance of drawing another 4 is reduced, and the chance of drawing a card of another rank is increased. It also doesn’t apply if the outcomes aren’t equally probable. If those six-sided dice keep rolling boxcars, after a while it’s reasonable to suspect they may be loaded.
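The card-drawing case can be worked out exactly with the `fractions` module; this sketch just restates the paragraph in numbers:

```python
from fractions import Fraction

# Before any card is drawn, 4 of the 52 cards in a standard deck are 4s.
p_first_four = Fraction(4, 52)   # 1/13

# After one 4 is removed, only 3 of the 51 remaining cards are 4s...
p_second_four = Fraction(3, 51)  # 1/17

# ...while 48 of the remaining 51 are some other rank, up from 48/52.
p_other_rank = Fraction(48, 51)

print(p_second_four < p_first_four)      # drawing another 4 got less likely
print(p_other_rank > Fraction(48, 52))   # other ranks got more likely
```

Because drawing without replacement changes the deck, the draws are dependent, and the past really does affect the future, unlike coin flips.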

The gambler’s fallacy is related to two other cognitive biases, the clustering illusion (part 5) and the representativeness heuristic. The latter bias is the belief that a short run of random outcomes should share the properties of a longer run. Out of 500 tosses of a fair coin, the numbers of heads and tails are very likely to come out close to even, but that doesn’t mean the same will hold true in a sequence of 5 or 10 tosses.
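How much more a short run wanders can be put in numbers: for n tosses of a fair coin, the proportion of heads has standard deviation sqrt(0.5 × 0.5 / n). A quick sketch:

```python
import math

# Standard deviation of the heads *proportion* over n fair coin tosses.
def sd_of_heads_proportion(n):
    return math.sqrt(0.25 / n)

for n in (5, 10, 500):
    print(n, round(sd_of_heads_proportion(n), 3))
# 5 tosses typically wander about 22 points from 50%;
# 500 tosses, only about 2 points.
```

So a 4-to-1 split in 5 tosses is unremarkable, while the same 80/20 split in 500 tosses would be strong evidence of a crooked coin.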

Halo effect

Some years back, I was on a seminar trip in Texas and Louisiana. A huge storm shut down air traffic, and in the process I got separated from my luggage. The next day, I had to teach a seminar in blue jeans and a day-old dress shirt. The audience was very sympathetic — there was major flooding in Baton Rouge and several of them had disaster stories of their own to tell — and the seminar went well.

When I received my evaluation statistics a couple of weeks later, I was fascinated to find that my scores had dropped nearly 25% below my averages. It was certainly understandable that my scores for “Instructor’s appearance was professional” would drop, but there were drops in “Instructor had a good command of the material” and “The workbook contained information that will be of use to me after the seminar.”

That’s the halo effect, the tendency for people to extend their assessment of a single trait so that it influences assessment of all other traits.

There are other examples. In the 46 US presidential elections where the heights of both candidates are known, the taller candidate won the popular vote 61% of the time and the shorter 33% of the time. (In three cases, the candidates were of the same height, and in three other cases, the taller candidate won the popular vote but lost to the shorter candidate in the Electoral College — most recently in 2000.)

In 1977, psychologists Richard Nisbett and Timothy Wilson ran a series of experiments on how students made judgments about professors, demonstrating not only how strong the effect is, but also how unaware people are that they’re affected by it. At least five people at that seminar in Baton Rouge assured me they didn’t mind my jeans at all. Two even said they preferred a more casual look for the instructor.

But the numbers told the truth.
