Welcome back to part two of our discussion of cognitive and decision-making biases. The series begins here.
Everyone's subject to cognitive biases of one sort or another. None of us is capable of pure objectivity; we cannot see reality without distortion. But we can try.
There are around 100 different identified cognitive and decision biases, and some of them have subsets, as we'll see shortly. Today, we'll cover three more: the base rate fallacy, congruence bias, and everyone's traditional favorite, experimenter's bias.
Base rate fallacy. There are 100 terrorists trying to sneak through airline security for every one million non-terrorists. TSA has set up an automated face recognition system that has 99% accuracy. The alarm goes off, and trained Homeland Security agents swoop down. What is the probability their captive is really a terrorist?
Well, if the failure rate is 1%, that means there’s a 99% chance the person is a terrorist, and a 1% chance that he or she is not, right? That justifies a significant assumption of guilt.
But this actually gets it backward. The chance the person isn't a terrorist is far greater — in fact, it's 99.02% likely that the new prisoner is completely innocent!
The mistake that leads to the first conclusion is called the base rate fallacy. It occurs when you don't notice that the failure rate (1 in 100) is not the same as the false alarm rate. The false alarm rate is completely different, because there are, after all, far more non-terrorists than terrorists. Let's imagine that we walk everyone (100 terrorists and 1 million non-terrorists, 1,000,100 people in total) in front of the face recognition tool. A 1% failure rate means the system errs on roughly one passenger in every hundred. It will catch 99 of the terrorists and miss one, but it will also flag 10,000 innocent non-terrorists, for a total of 10,099 alarms. Only 99 of those alarms are genuine, a ratio of 99:10,099, which works out to a minuscule 0.98% chance that the person caught is actually a terrorist.
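If you prefer to see the arithmetic spelled out, here is a minimal sketch in Python, using the hypothetical numbers from the example (100 terrorists, one million non-terrorists, 99% accuracy):

    # Counting true and false alarms for the hypothetical screening example.
    terrorists = 100
    non_terrorists = 1_000_000
    accuracy = 0.99  # the system is right 99% of the time

    true_alarms = terrorists * accuracy              # 99 terrorists correctly flagged
    false_alarms = non_terrorists * (1 - accuracy)   # 10,000 innocent people flagged
    total_alarms = true_alarms + false_alarms        # 10,099 alarms in all

    print(f"P(terrorist | alarm) = {true_alarms / total_alarms:.2%}")  # about 0.98%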
This does not argue against the value of screening. Screening might be perfectly reasonable. Overreaction, however, is not. If you’re 99% sure you’ve caught a terrorist, you will behave differently than if you’re only 1% sure.
To avoid the base rate fallacy, look at the “prior probability.” If there were no terrorists, what would the face recognition system produce? With a 1% failure rate, it would never pick a real terrorist (there would be none), but it would trigger 10,000 false positives. Now you’ve found the missing fact.
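The same arithmetic can be written as Bayes' theorem, with the prior probability (the base rate) made explicit; this is just a sketch applying the textbook formula to the example's numbers:

    # Bayes' theorem with the base rate (prior) made explicit.
    prior = 100 / 1_000_100        # P(terrorist): about 1 in 10,000
    hit_rate = 0.99                # P(alarm | terrorist)
    false_alarm_rate = 0.01        # P(alarm | non-terrorist)

    p_alarm = hit_rate * prior + false_alarm_rate * (1 - prior)
    posterior = hit_rate * prior / p_alarm

    print(f"P(terrorist | alarm) = {posterior:.2%}")  # about 0.98% again

The vanishingly small prior is what drags a 99%-accurate test down to a posterior of less than 1%.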
(Footnote: Notice that the base rate fallacy only produces badly incorrect analysis when the populations are unbalanced, as in our case of 100 terrorists among a million travelers. As the two groups approach a 50/50 split, the failure rate and the false alarm rate converge. Mind you, we'd have different problems then.)
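To check the footnote, plug a balanced population into the same formula; the posterior climbs back up to the advertised accuracy:

    # With a 50/50 split, the alarm really is 99% trustworthy.
    prior = 0.5
    posterior = 0.99 * prior / (0.99 * prior + 0.01 * (1 - prior))
    print(f"P(terrorist | alarm) = {posterior:.0%}")  # 99%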
Congruence bias. In congruence bias, you test your hypothesis only directly, potentially missing alternative explanations. In the famous Hawthorne experiments at Western Electric's Hawthorne Works, researchers wanted to test whether improved lighting in factories would increase worker productivity. They performed a direct test: they measured productivity, installed better lighting, and measured productivity again. Productivity went up. If you are falling into congruence bias, you're done. Experiment confirmed; case closed.
But the researchers avoided the trap. They tested the hypothesis indirectly. If improved lighting increased productivity, they reasoned, then worse lighting should lower it. So they tested that proposition as well. They took out a lot of lights and measured again, and to everyone's surprise, productivity went up again! A deeper analysis revealed what is now known as the Hawthorne Effect: when people feel others are paying attention to them, their productivity tends to go up, at least temporarily. (It's a huge benefit of management consultants; just by showing up, we're likely to make things better.)
To avoid congruence bias, don't be satisfied with direct confirmation alone. Direct confirmation asks, "If I acted in accordance with my hypothesis, what would I expect to occur?" Indirect confirmation asks, "If I acted in conflict with my hypothesis, what would I expect to occur?" If the researchers had stopped with the first question, we'd all be fiddling with the lights. Only the second question allowed them to discover the deeper truth.
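As a purely illustrative sketch (the "attention boost" and all numbers here are invented, not taken from the actual Hawthorne data), here is a toy simulation of a world where productivity responds to being observed rather than to the lighting. The direct test alone looks like a confirmation; only the indirect test exposes the real cause:

    import random

    # Toy world: productivity responds to being observed, not to lighting.
    def measure_productivity(lighting, being_observed, rng):
        baseline = 100
        attention_boost = 10 if being_observed else 0
        # Note: `lighting` deliberately has no effect in this simulated world.
        return baseline + attention_boost + rng.gauss(0, 2)

    rng = random.Random(0)

    before = measure_productivity(lighting="normal", being_observed=False, rng=rng)
    direct = measure_productivity(lighting="brighter", being_observed=True, rng=rng)   # direct test
    indirect = measure_productivity(lighting="dimmer", being_observed=True, rng=rng)   # indirect test

    print(f"baseline (unobserved):     {before:.1f}")
    print(f"brighter lights, observed: {direct:.1f}")    # up: looks like confirmation
    print(f"dimmer lights, observed:   {indirect:.1f}")  # also up: the real cause is attention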
Experimenter’s bias. This bias is well known to anyone in scientific fields. It’s the tendency for experimenters to believe and trust data that agrees with their hypothesis, and to disbelieve and distrust data that doesn’t. It’s a natural enough feeling; there’s a price to pay if we’re wrong, even if it’s only a hit to our egos. It’s impossible for any human being to be completely objective. Our perceptions and intelligence are constrained, and we are looking from the inside, not the outside.
Experimenter’s bias can’t be avoided; it has to be managed instead. Last week, we discussed the “bias blind spot,” the recursive bias of failing to recognize that you have biases. Self-awareness helps. Another good technique is the “buddy system.” I frequently work with co-authors so I have someone to challenge my thinking. That reduces the problem, though it doesn’t eliminate it — wherever my co-author and I see it the same way, the risk remains.
The best technique is to understand the components of the bias. A 1979 study of sampling and measurement biases listed 56 different experimenter’s biases: the “all’s well” literature bias, the referral filter bias, the volunteer bias, the insensitive measure bias, the end-digit preference bias, and my favorite, the data dredging bias, also known as “looking for the pony.”
More next week…