Showing posts with label Cognitive bias. Show all posts

Friday, November 2, 2012

Predictions are Hard (Especially About the Future)

While famous malapropist Yogi Berra is most often cited for the quote, “Prediction is very difficult, especially about the future,” it appears that the source was actually Danish physicist and Nobel Prize laureate Niels Bohr. Bohr, whose pioneering work in quantum physics would naturally equip him with a keen sense of the limits of knowledge, also had a sense of humor. (He also said, “An expert is a man who has made all the mistakes that can be made in a very narrow field.”)

Bias and Accuracy

In my long study of cognitive biases on this blog and in my compilation Random Jottings 6: The Cognitive Biases Issue, I was struck again and again by how many of the biases had to do with perceptions of probability. From ambiguity aversion to the base rate fallacy to the twin problems of the gambler’s fallacy and the ludic fallacy, we have repeatedly shown ourselves to be incapable of judging probabilities with any degree of precision or understanding. When people rate their own decisions as "95% certain," research shows they're wrong approximately 40% of the time.

With the 2012 presidential election only four days away as I write this, the issue of prediction and forecasting is uppermost in the minds of every partisan and pundit. Who will win, and by how much? Checking the polls as I write, the RealClearPolitics average gives President Obama a 0.1% lead over Governor Romney (47.4% to 47.3%). Rasmussen has Romney up by 2 (49% to 47%), Gallup by 5 (51% to 46%), and NPR by 1 (48% to 47%). On the other hand, ABC/Wash Post and CBS/NY Times both have Obama leading by 1 (49% - 48% for ABC, 48% - 47% for CBS), and the National Journal has Obama up by 5 (50% - 45%). No matter what your politics, you can find polls to encourage you and polls to discourage you about the fate of your preferred candidate.

Some polls come with well-known qualifications. Rasmussen traditionally leans Republican; PPP often skews Democratic. That doesn't mean either poll is irrelevant or useless. Accuracy and bias are two different things. Bias is the degree to which a poll or sample leans in a consistent direction. If comparing Rasmussen or PPP polls to actual election results shows that Rasmussen's numbers tend to run 2% toward the Republican candidate (or vice versa for PPP), both polls are quite useful; you just have to adjust for the historical bias. If, on the other hand, a poll overestimates the Democratic vote by 10% in one election and then overestimates the Republican vote by 10% in the next, there's no consistent bias, but the poll's accuracy is quite low. In other words, a biased poll can be a lot more valuable than an inaccurate one.
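To make the bias-versus-accuracy distinction concrete, here's a quick sketch in Python. Every number in it is invented for illustration, not taken from real polling data: one poll consistently leans two points toward one party but is otherwise precise, while another is unbiased on average but wildly noisy. Adjusting the biased poll for its known lean makes it the more useful of the two.

```python
import random

random.seed(42)
true_margin = 1.0  # the actual election margin, in points (hypothetical)

# Poll A: consistently biased 2 points in one direction, but precise (sd = 0.5)
poll_a = [true_margin - 2.0 + random.gauss(0, 0.5) for _ in range(20)]
# Poll B: unbiased on average, but wildly inaccurate (sd = 8)
poll_b = [true_margin + random.gauss(0, 8.0) for _ in range(20)]

def rmse(readings, truth):
    """Root-mean-square error of a list of poll readings against the truth."""
    return (sum((r - truth) ** 2 for r in readings) / len(readings)) ** 0.5

# Correct Poll A for its known historical bias of 2 points
poll_a_adjusted = [r + 2.0 for r in poll_a]

print(f"Poll A raw error:      {rmse(poll_a, true_margin):.2f}")
print(f"Poll A adjusted error: {rmse(poll_a_adjusted, true_margin):.2f}")
print(f"Poll B error:          {rmse(poll_b, true_margin):.2f}")
```

The adjusted biased poll ends up far closer to the truth than the "unbiased" noisy one, which is the whole point: consistent bias is correctable, inaccuracy is not.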

Selection Bias

Of course, political polls (or polls of any sort) are subject to all sorts of error. My cognitive biases entry on selection bias summarizes common concerns. For instance, there’s a growing argument that land-line telephone polls, once the gold standard of scientific opinion surveys, are becoming less reliable. Cell phone users are more numerous every year and skew toward a different demographic. There's also a sense that people are over-polled: more and more people refuse to participate, meaning that the actual sample becomes to some extent self-selected, a random sample of people who like to take polls. People who don’t like to take polls are underrepresented in the results, and there’s no guarantee that group holds the same opinions as the group that answers. (I myself usually hang up on pollsters, and I've often thought it might help our political process if we agreed to lie to pollsters at every opportunity.)

Selection bias can happen in any scientific study requiring a statistical sample that is representative of some larger population: if the selection is flawed, and if other statistical analysis does not correct for the skew, the conclusions are not reliable.

There are several types of selection bias:

  • Sampling bias. Systematic error resulting from a non-random population sample. Examples include self-selection, pre-screening, and discounting test subjects who don’t finish.
  • Time interval bias. Error resulting from a flawed selection of the time interval. Examples include starting on an unusually low year and ending on an unusually high one, terminating a trial early when its results support your desired conclusion, or favoring longer or shorter intervals in measuring change.
  • Exposure bias. Error resulting from amplifying trends. When one disease predisposes someone to a second disease, the treatment for the first disease can appear correlated with the appearance of the second. An effective but not perfect treatment given to people at high risk of getting a particular disease can make the treatment appear to cause the disease, since the high-risk population will naturally include a higher number of people who got both the treatment and the disease.
  • Data bias. Rejection of “bad” data on arbitrary grounds, ignoring or discounting outliers, partitioning data with knowledge of the partitions, then analyzing them with tests designed for blindly chosen ones.
  • Studies bias. Earlier, we looked at publication bias, the tendency to publish studies with positive results and ignore ones with negative results. If you put together a meta-analysis without correcting for publication bias, you’ve got a studies bias. Or you can perform repeated experiments and report only the favorable results, classifying the others as calibration tests or preliminary studies.
  • Attrition bias. A selection bias resulting from people dropping out of a study over time. If you study the effectiveness of a weight loss program only by measuring outcomes for people who complete the whole program, it’ll often look very effective indeed — but it ignores the potentially vast number of people who tried and gave up.
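Attrition bias is easy to demonstrate with a quick simulation. The numbers below are invented for illustration: in this hypothetical program, individual results average out to roughly zero, but because people who aren't losing weight tend to quit, the completers-only average looks impressive.

```python
import random

random.seed(1)

# Hypothetical weight-loss program: individual results (lbs lost, negative
# means gained) are normally distributed around zero, i.e. no real effect.
changes = [random.gauss(0, 5) for _ in range(10_000)]

# Assumed dropout rule: people losing little or gaining usually quit;
# only 20% of them stick it out, while successful losers all finish.
completers = [c for c in changes if c > 2 or random.random() < 0.2]

mean_all = sum(changes) / len(changes)
mean_completers = sum(completers) / len(completers)

print(f"Average loss, everyone who started: {mean_all:.1f} lbs")
print(f"Average loss, completers only:      {mean_completers:.1f} lbs")
```

Measuring only the completers turns a program with no real effect into one that appears to work, which is exactly the trap described above.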

Unskewing the Polls

In general, you can’t overcome a selection bias with statistical analysis of existing data alone. Informal workarounds examine correlations between background variables and a treatment indicator, but what’s missing is the correlation between unobserved determinants of the outcome and unobserved determinants of selection into the sample, which is precisely the correlation that creates the bias. What you don’t see doesn’t have to be identical to what you do see. That doesn't stop people from trying, however.

With that in mind, the website unskewedpolls.com, developed by Dean Chambers, a Virginia Republican, attempts to correct what he sees as a systematic bias as to the proportion of Republicans and Democrats in the electorate. By adjusting poll results that in Chambers’ view are oversampling Democrats, he concludes (as of today) that Romney leads Obama nationally by 52% - 47%, a five point lead, and that Romney also leads in enough swing states that Chambers projects a Romney landslide in the electoral college of 359 to 179, with 270 needed for victory.

Chambers argues that other pollsters and analysts who show an edge for Obama are living in a “fantasy world.” In particular, he trains his disgust on Nate Silver, who writes the blog FiveThirtyEight on the New York Times website, describing him as “… a man of very small stature, a thin and effeminate man with a soft-sounding voice that sounds almost exactly like the ‘Mr. New Castrati’ voice used by Rush Limbaugh on his program. In fact, Silver could easily be the poster child for the New Castrati in both image and sound. Nate Silver, like most liberal and leftist celebrities and favorites, might be of average intelligence but is surely not the genius he's made out to be. His political analyses are average at best and his projections, at least this year, are extremely biased in favor of the Democrats.” (You may notice a little bit of ad hominem here. Clearly a short person with an effeminate voice can’t be trusted.)

A quick review of the types of selection bias above will identify several problems with the unskewed poll method. Indeed, it's hard to find anyone not wedded to the extreme right who's willing to endorse Chambers' methodology. The approach is bad statistics, and would be equally bad if done on behalf of the Democratic candidate.

Nate Silver and FiveThirtyEight

Other views of Nate Silver are a bit more positive. Silver first came to prominence as a baseball analyst, developing the PECOTA system for forecasting performance and career development of Major League Baseball players, then won some $400,000 using his statistical insights to play online poker. Starting in 2007, he turned his analytical approach to the upcoming 2008 election, and predicted the winner of 49 out of 50 states. This resulted in his being named one of the world’s 100 most influential people by Time magazine, and his blog was picked up by the New York Times. (He's also got a new book out, The Signal and the Noise: Why So Many Predictions Fail — But Some Don't. I recommend it.)

As of today, Nate Silver’s predictions on FiveThirtyEight differ dramatically from the UnSkewedPolls average. Silver predicts that Obama will take the national popular vote 50.5% to 48.4%, and the electoral college by 303 to 235. One big difference between Dean Chambers and Nate Silver is that Chambers is certain, and Silver is not. He currently gives Obama an 80.9% chance of winning, which means that Silver gives Romney a 19.1% chance of victory using the same data.

This 80% - 20% split is not a vote count but a probability estimate, a measure of how much uncertainty remains in the forecast. In other words, Silver knows that the future is best described as a range of probabilities. Neither he, nor Chambers, nor you, nor I “know” the outcome of the election that will take place next Tuesday, and we will not “know” until the votes have been counted and certified (and any legal challenges resolved).

Predictions vs. Knowledge

In other words, when we predict, we do not know.

Keeping the distinction straight is vital for anyone whose job includes the need to forecast what will happen. Lawyers don’t “know” the outcome of a case until the jury or judge renders a verdict and the appeals have all been resolved. Risk managers don’t “know” whether a given risk will occur until we’re past the point at which it could possibly happen. Actuaries don’t “know” how many car accidents will take place next year until next year is over and the accidents have been counted. But lawyers, risk managers, actuaries — and pollsters — all predict nonetheless.

A statistical prediction, by its very nature, contains uncertainty and should therefore be expressed in terms of the forecaster’s degree of confidence. “The sun’ll come out tomorrow,” sings Annie in the eponymous musical, and she’s almost certainly right. But that’s a prediction, not a fact. While the chance of the Sun going nova is vanishingly small, it isn’t exactly zero.

Confidence Level and Margin of Error

Poll results usually report both a confidence level and a range of error, such as “95% confidence with an error of ±3%.” The error rate is the uncertainty of the measurement itself. If we flip a coin 100 times, the theoretical probability is 50 heads and 50 tails, but if it came out 53 heads and 47 tails (or vice versa), no one would be surprised. That’s equivalent to an error of ±3%. In other words, a small wobble in the final number should come as a shock to no one.

The confidence level, on the other hand, is the degree of confidence you have that your final number will stay within the error range. The probability that an honest coin flipped 100 times would produce 70 heads and 30 tails is low, but it’s within the realm of possibility. In other words, the “95% confidence” measurement tells us that 95% of the time, the actual result should be within the margin of error — but that 5% of the time, it will fall outside the range. (There’s a bit of math that goes into measuring this, but it's outside the scope of this piece.)
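You can check these numbers by brute force. One caveat on the coin analogy: a ±3% margin at 95% confidence corresponds to a sample of roughly 1,000 flips (or poll respondents); for only 100 flips, the 95% range is closer to ±10 heads. The simulation below is my own sketch, and it bears out both claims: samples of 1,000 stay within ±3% about 95% of the time, and a 70/30 split on 100 flips is possible but vanishingly rare.

```python
import random

random.seed(0)
TRIALS = 5_000

def heads(n):
    """Count heads in n fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(n))

# With samples of 1,000, about 95% of runs land within ±3% of 50/50
within_3 = sum(abs(heads(1000) / 1000 - 0.5) <= 0.03 for _ in range(TRIALS))
print(f"1,000-flip runs within ±3% of 50/50: {within_3 / TRIALS:.1%}")

# A 70/30 split on 100 flips is within the realm of possibility, but rare
seventy = sum(heads(100) >= 70 for _ in range(TRIALS))
print(f"100-flip runs with 70 or more heads: {seventy} of {TRIALS}")
```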

Winning at Monte Carlo

Nate Silver’s 80% confidence number comes from a modeling technique known as Monte Carlo simulation, which is also used in project management as a modern and superior alternative to the old PERT calculation, a weighted average of optimistic, pessimistic, and most likely outcomes. In a Monte Carlo simulation, a computer model runs a problem over and over again in thousands of iterations, choosing random numbers from within the specified ranges, and then calculates the result. If the polls are right 95% of the time within a ±3% margin of error, the program chooses a random number within the error range 95% of the time, and 5% of the time chooses a number outside the range, representing the probability that the polls could be all wet. Over five or ten thousand simulations, the results gave the victory to Obama 80.9% of the time, and to Romney 19.1% of the time.
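Here's a stripped-down Monte Carlo election simulation in Python. To be clear, this is my own toy version, not Silver's actual model, and every input (the polled margin, the two error sizes) is invented for illustration; the point is only the mechanism of drawing thousands of random outcomes and counting wins.

```python
import random

random.seed(2012)
SIMULATIONS = 10_000

# Hypothetical inputs, chosen for illustration only:
polled_margin = 2.1      # candidate A's polled lead, in points
normal_error_sd = 1.5    # polling error when the polls are basically right
all_wet_error_sd = 6.0   # much larger error when the polls are "all wet"

wins = 0
for _ in range(SIMULATIONS):
    # 95% of the time the polls are roughly right; 5% of the time
    # draw from the much wider "polls are all wet" distribution
    sd = normal_error_sd if random.random() < 0.95 else all_wet_error_sd
    simulated_margin = polled_margin + random.gauss(0, sd)
    if simulated_margin > 0:
        wins += 1

print(f"Candidate A wins in {wins / SIMULATIONS:.1%} of simulations")
```

Change the polled margin or the error assumptions and the win percentage shifts, which is exactly the behavior described below: each day's new data changes the odds.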

Tomorrow, the answer may be different. Silver will enter new data, and the computer will run five or ten thousand more simulations. Each day, the probability of winning or losing will change slightly, until the final results are in and the answer is no longer a matter of probability but a matter of fact.

The Thrill of Victory and the Agony of Defeat

Astute readers may notice the parallels here to Schrödinger's Cat, which is mathematically both alive and dead until the box is opened. Personally, I put a lot of credence into Silver’s analysis; his approach is in line with my understanding of statistics. That means I think Obama is very likely to win next Tuesday — but only within a range of probability.

I will also note that Nate Silver seems to feel the same way. He's just been chided by the public editor of the New York Times for making a $2,000 bet with "Morning Joe" Scarborough that Obama will win. Given his estimate of an 80% - 20% chance of an Obama victory, that sounds like a pretty good bet to me.

But we won't know until Tuesday night at the earliest. So be sure to vote.

Tuesday, August 28, 2012

Fallacy Fallacy (Formal Fallacies Part 2)


Formal fallacies are arguments that are always wrong, regardless of whether the argument's premises (statements claimed as fact) are true or false. In the previous installment, we looked at the appeal to probability: the claim that because something could happen, it therefore will happen. That argument is invalid even if it's true that the something in question could indeed happen.

Argument from Fallacy

If an argument contains a fallacy, what does that say about the conclusion? Actually, not very much. Pointing to fallacies too eagerly can trigger a fallacy of its own: the argument from fallacy, or the fallacy fallacy.

The argument from fallacy is the error of concluding that if an argument can be shown to be fallacious, that means its conclusion necessarily must be false. The form of the argument is:
If P, then Q.
P is a fallacious argument.
Therefore, Q is false.

Take, for example, the following claim: “I speak English, therefore I am an American citizen.” That’s a fallacious argument, because many people who speak English are not American citizens. To conclude, however, that because the argument is fallacious, you must not be an American citizen, is taking the claim a step too far. A conclusion can be right even if the argument supporting it happens to be wrong.

If you can show that a particular argument is fallacious, the only thing that means is that the particular argument can’t be used to prove the proposition. The opposite argument, that the fallacious argument itself disproves the proposition, is also a fallacy.

The argument from fallacy is also known as the argument to logic (argumentum ad logicam) and the fallacist’s fallacy. It’s part of a group of fallacies known as fallacies of relevance.

Base Rate Fallacy
Conjunction Fallacy

The base rate fallacy and the conjunction fallacy also fall into the category of cognitive bias, and were both treated earlier in this blog and in my compilation of cognitive biases, published separately.

Tuesday, August 14, 2012

The Drake Equation (Formal Fallacies, Part 1)

Frank Drake
In February, I completed a 25-part series on red herrings, a category of argumentative fallacies that are intended to distract from the argument, rather than address it directly. That's only one category of argumentative fallacy. In this series, we'll look at formal fallacies. Formal fallacies are errors in basic logic. You don't even need to understand the argument to know that it is fallacious. Let's start with the appeal to probability.

Appeal to Probability

If I play the lottery long enough, I'm bound to win, and I can live on the prize comfortably for the rest of my life! Yes, it's possible that if you play the lottery, you'll win. Somebody has to. The logical fallacy here is to confuse the possibility of winning with the inevitability of winning. Of course, that doesn't follow.

In our study of cognitive bias (also available in compiled form here), we learned that numerous biases result from the misapplication or misunderstanding of probability in a given situation. Examples include the base rate fallacy, the gambler's fallacy, the hindsight bias, the ludic fallacy, and overall neglect of probability. Use the tag cloud to the right to learn more about each. We are, as a species, generally bad at estimating probability, especially when it affects us personally.

Various arguments about the Drake equation can fall into this trap. The Drake equation, developed by astrophysicist Frank Drake in 1961, provides a set of guidelines for estimating the number of potential alien civilizations that might exist in the Milky Way galaxy. Here's the formula:


N = R* · fp · ne · fℓ · fi · fc · L



in which:
N = the number of civilizations in our galaxy with which communication might be possible;
R* = the average rate of star formation per year in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fℓ = the fraction of the above that actually go on to develop life at some point
fi = the fraction of the above that actually go on to develop intelligent life
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space

There are various arguments about the Drake equation. Some argue for additional terms in the equation; others point out that the values of many of the equation's terms are fundamentally unknown. There's a reasonable argument to be made that "N" has to be a fairly low number, on the simple grounds that we have not yet detected any extraterrestrial civilizations. Depending on the assumed values of the terms in the equation, you can derive conclusions that range from the idea that we're alone in the galaxy (see the Fermi Paradox) to an estimate that there may be as many as 182 million alien civilizations awaiting our discovery (or their discovery of us).
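Because the equation is just a product of factors, it's trivial to compute, and that's exactly why the outputs swing so wildly: every input is a guess. Here's a sketch in Python; both sets of parameter values below are illustrative assumptions on my part, not established figures.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* · fp · ne · fl · fi · fc · L."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Moderately optimistic guesses (illustrative only)
optimistic = drake(R_star=1, f_p=0.35, n_e=3, f_l=1, f_i=1, f_c=0.15, L=10_000)

# Pessimistic guesses (equally illustrative)
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=0.1, f_i=0.01, f_c=0.1, L=1_000)

print(f"Optimistic N:  {optimistic:,.0f} civilizations")
print(f"Pessimistic N: {pessimistic:.3f} civilizations")
```

Identical arithmetic, wildly different conclusions: one set of guesses yields over a thousand civilizations, the other suggests we're effectively alone.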

From a fallacies basis, however, the problem comes when people argue that the vast number of stars makes it certain that alien civilizations exist. As much as I'd personally prefer to believe this, the logic here is fallacious. Probable — even highly probable — doesn't translate to certainty.

That's not an argument against the Drake equation per se, but merely a problem with an extreme conclusion drawn from it. The Drake equation was never intended to be science, but rather a way to stimulate dialogue on the question of alien civilizations.

Tuesday, May 29, 2012

Why We Need Hokum (Part 4)

Anterior Cingulate Cortex

I thought I was done with “Why We Need Hokum” at the end of our third installment (here are parts one, two, and three), but that doesn’t seem to be the case.

Getting back to the topic of hokum and why we need it, I’ve been interested in the various pieces of research that suggest an actual structural difference between the minds of conservatives and liberals, and not just because of the schadenfreude.

A 2008 study, “The Secret Lives of Liberals and Conservatives: Personality Profiles, Interaction Styles, and the Things They Leave Behind,” by Dana Carney, John Jost, Samuel Gosling, and Jeff Potter, published in Political Psychology, provides this helpful table combining the results of 27 different studies covering the period 1930 to 2007. (I use this particular example because it has both positive and negative traits listed for both sides.)

Personality Traits Theorized to be Associated with Liberal (or Left-Wing) and Conservative (or Right-Wing) Orientation                                                                              



Liberal/Left-Wing
  • Slovenly, ambiguous, indifferent
  • Eccentric, sensitive, individualistic
  • Open, tolerant, flexible
  • Live-loving, free, unpredictable
  • Creative, imaginative, curious
  • Expressive, enthusiastic
  • Excited, sensation-seeking
  • Desire for novelty, diversity
  • Uncontrolled, impulsive
  • Complex, nuanced
  • Open-minded
  • Open to experience



Conservative/Right-Wing
  • Definite, persistent, tenacious
  • Tough, masculine, firm
  • Reliable, trustworthy, faithful, loyal
  • Stable, consistent
  • Rigid, intolerant
  • Conventional, ordinary
  • Obedient, conformist
  • Fearful, threatened
  • Xenophobic, prejudiced
  • Orderly, organized
  • Parsimonious, thrifty, stingy
  • Clean, sterile
  • Obstinate, stubborn
  • Angry, aggressive, vengeful
  • Careful, practical, methodical
  • Withdrawn, reserved
  • Stern, cold, mechanical
  • Anxious, suspicious, obsessive
  • Self-controlled
  • Restrained, inhibited
  • Concerned with rules, norms
  • Moralistic
  • Simple, decisive
  • Closed-minded


You’ll notice a fair amount of redundancy in both lists, the result of combining characteristics listed in different studies. Summarized in line with the “Big Five” framework of personality dimensions, it works out this way:


Characteristic            Liberals     Conservatives
Openness to Experience    High         Low
Conscientiousness         Low          High
Extraversion              High         Low in two categories
Agreeableness             Not listed   Mixed
Neuroticism               Not listed   High in two categories


Agreeableness and neuroticism don’t seem to be correlated with political belief, and while extraversion appears multiple times in the liberal mindset, its reverse appears only twice in the conservative mindset. The characteristics of openness to experience and conscientiousness, however, appear more consistently correlated with political attitude.

Evidence that these personality differences are innate comes from a 2006 longitudinal study in the Journal of Research in Personality, in which “preschool children who later identified themselves as liberal were perceived by their teachers as: self-reliant, energetic, emotionally expressive, gregarious, and impulsive. By contrast, those children who later identified as conservative were seen as: rigid, inhibited, indecisive, fearful, and overcontrolled.”

More powerfully, it seems that actual brain structure differs in self-identified liberals and conservatives. In a 2011 study by cognitive neuroscientist Ryota Kanai (“Political Orientations Are Correlated with Brain Structure in Young Adults” in Current Biology), MRI scans revealed that self-identified conservatives had a larger amygdala, associated with greater sensitivity to fear and disgust in emotional learning, while liberals had a larger anterior cingulate cortex, associated with monitoring uncertainty and handling conflicting information. Numerous other studies have produced similar results; Wikipedia summarizes them here.

While I’m persuaded that there are significant differences in human brains, and that differences in brain structure can naturally express themselves in terms of political leaning, I’m less persuaded (schadenfreude notwithstanding) that things line up so neatly by political party. I know political conservatives who are slovenly, creative, impulsive, and nuanced, and political liberals who are trustworthy, conventional, fearful, anxious, and closed-minded. A Republican alliance that mixes Western libertarians with Southern religious conservatives is hardly one-dimensional; the Democratic alliance is equally diverse. Still, trends are trends, and what may be untrue of individuals may yet be statistically descriptive of the group to which they belong.

One characteristic of hokum is that it’s simplistic: hokum strips away complexity and nuance and substitutes the comfort of concrete knowledge — even if it’s false. While it’s tempting to allocate the need for hokum to whichever political party we personally disfavor, I think it’s more accurate to say that both conservatives and liberals embrace hokum in some areas and reject it in others, depending on what fits the mental narrative that gives us greatest comfort. A belief in the perfectibility of man is as much hokum as a belief in the inherent evil of the species. It’s easy enough to provide examples in support of either proposition; the truth is mixed.

In our long discussion of cognitive bias and decision disorders, we learned that no human being is, or possibly can be, free of such distortions. Bias is inherent, but that doesn’t mean all bias is equal. The attempt to be even-handed and accurate, to identify and fight sources of bias within one’s own thinking, is a noble and useful effort even when it’s doomed to failure.

The research on brain functioning has often been reported as a science-based dissing of conservative mental attitudes, but that’s not fair. Both the amygdala and the anterior cingulate cortex are part of our minds for a reason. The amygdala is there because the fear and disgust reflex is often a useful response to environmental hazards and threats. The anterior cingulate cortex’s appetite for novelty, creativity, and impulsiveness, left unchecked, can just as easily lead to disaster. We ignore either at our peril.

In the last installment of this discussion, we observed that some counter-factual beliefs can be positive. Believing that your romantic partner is unique, amazing, and special — even if objective evidence argues otherwise — contributes to a successful relationship.

We all do need hokum, liberal and conservative alike, and that’s not necessarily destructive as long as we combine our beliefs with self-awareness. All of our brains — liberal and conservative alike — contain both an amygdala and an anterior cingulate cortex. We all — liberal and conservative alike — have the mental equipment to challenge our own beliefs and recognize hokum when we see it, without necessarily changing our values in the process. And we all — liberal and conservative alike — have the moral obligation to do so.

There may be a Part 5, or perhaps not. Whether that’s “slovenly, ambiguous, indifferent,” or “complex, nuanced, open-minded” I leave as an exercise for the reader.

Tuesday, May 15, 2012

Why We Need Hokum (Part 3)

The third and final installment of "Why We Need Hokum." Part one is here. Part two is here.

We often assume that the human mind is designed to be rational, and failures of rationality are seen as defects in human thought. In my long study of cognitive biases and decision fallacies, that’s been a continuing theme: here’s why your mind isn’t working right.

But as it turns out, that may be the wrong way to look at it. Dr. Steven Pinker, a Harvard professor specializing in evolutionary psychology and the computational theory of mind, argues that the process of natural selection is not concerned with the truth per se, and in many cases actually disfavors a truth-seeking mind. In an emergency, factual truth-seeking may be way too slow; a fast approximation, even if of questionable accuracy, can promote survival.

Even more importantly, non-factual or even counter-factual beliefs play an important social role. Believing that your own social group is better than other social groups helps you and your group be more successful. Believing that your romantic partner is unique, amazing, and special — even if objective evidence argues otherwise — contributes to a successful relationship.

In fact, some evolutionary psychologists argue that the role of cognitive biases and decision fallacies exist to protect your mind against challenges that would weaken your non-rational beliefs. You want the truth? You can’t handle the truth, and thanks to cognitive biases and decision fallacies, you don’t have to. Beliefs are deeply rooted in the human psyche, and it’s not an accident that they are so resistant to reason.

If a belief increases the survival potential of you and your group, on some level it’s irrelevant whether it’s actually correct. Beliefs that are clearly and immediately contra-survival (“Look, ma, I can fly!”) are evolutionarily self-correcting. It’s not logical argument and rational thinking that changes your mind, but rather the impact of your belief when it collides with the cold, hard ground. When cause and effect is less clear and less immediate, the lesson tends not to sink in.

We are not rational animals. We are thinking animals, but much of our thought isn’t rational at all. We eagerly consume self-admitted lies whenever we read fiction, and as legions of media fans can attest, the imaginary worlds we enter are often more satisfying and fulfilling than the one in which we officially live.

Hokum can expand our universe or contract it. From David Copperfield, who cheerfully admits that what he’s doing is trickery, it’s one small step to Uri Geller, who turns an otherwise unremarkable spoon-bending trick into claims of telekinesis, and not a giant leap to conclude that Area 51 is hiding a world of alien powers right under our very noses. Conspiracy theories and supernatural beliefs of all sorts give us a simple, fulfilling way to eliminate complexity and ambiguity in our lives.

Rationality, enshrined above all in the scientific method, has been under continuing and unremitting attack ever since the scientific revolutions of the 16th and 17th centuries. As soon as Copernicus dethroned the earth as center of the cosmos, the backlash began. In fighting the battle for truth, justice, and the scientific way, we’ve tended to assume that we shared a mutual goal with our opposition: a desire for truth. But that, as we've seen, is not a good assumption.

Which brings up the obvious question, pace Pilate: what is truth? Gandhi argues that truth is self-evident; Churchill argues that it is incontrovertible. But Mark Twain says it best:

“Truth is mighty and will prevail. There is nothing the matter with this, except that it ain’t so.”


Tuesday, April 17, 2012

In re Zimmerman

Stages of a Project

In project management, we frequently say that the stages of a project go like this:

1. Enthusiasm
2. Disillusionment
3. Despair
4. Panic
5. Search for the Guilty
6. Punishment of the Innocent
7. Praise for Non-Participants

The key problem comes in Step 5, “Search for the Guilty,” as opposed to the alternative, “Fix the Problem.” The overwhelming need to punish the offender outweighs the need to fix the problem.

In re Zimmerman

These days, I’ve mostly sworn off political topics. I hate that I’ve lost friends that way. But the shooting of Trayvon Martin by George Zimmerman also raises issues relevant to project management, cognitive bias, and fallacies of logic, all of which are legitimate subjects of this blog.

As is too often the case in any sensational media matter, the professional outrage machinery is in full swing. The most extreme and definite opinions on both sides receive the majority of coverage, even though they are a minority of the population, and in response, both defenders and accusers try to delegitimize the comments and opinions of those who disagree.

My goal in writing this is not to start an argument about the relative culpability of either Zimmerman or Martin — and in fact, I’d vastly prefer not to have one. There are many other places you can go to have that argument. Right now, let's look at the nature of those opinions.

In the case of Zimmerman — or, for that matter, any criminal suspect — there are four separate questions:

  1. Should Zimmerman be suspected?
  2. Should Zimmerman have been arrested?
  3. Should Zimmerman be put on trial?
  4. Should Zimmerman be convicted?

The Decision Hierarchy

Decision-making processes are normally hierarchical. There are earlier, tentative decisions we make before we reach our final conclusion. Should we, for example, do a particular project? The initial decision may be to do a feasibility study, or a pilot test. Only when those results are in will we make our final choice and commit resources to the solution.

The legal concept of level of proof is particularly meaningful. Depending on the stage of the process, different levels of proof are required to support a given decision.

Should Zimmerman be suspected? The legal standard for suspecting someone of a crime is a low bar: reasonable suspicion. This standard is based on "specific and articulable facts", "taken together with rational inferences from those facts," to distinguish it from an "unparticularized suspicion" or a hunch. In Zimmerman's case, there's no real doubt that Zimmerman shot Martin. Even Zimmerman admits as much.

In project management, a related question is whether the environment contains opportunities or problems worthy of addressing. Coming up with a target list doesn't mean you've made a final decision, so the burden of proof is low.

Should Zimmerman have been arrested? The standard for an arrest is probable cause. That standard is higher than "reasonable suspicion," but much lower than the standard required for a criminal conviction. The best-known definition of probable cause is "a reasonable belief that a person has committed a crime." Another common definition is "a reasonable amount of suspicion, supported by circumstances sufficiently strong to justify a prudent and cautious person's belief that certain facts are probably true." 

This question, not the question of whether Zimmerman is ultimately found guilty, is the core of the current controversy. Frankly, had Zimmerman been charged and tried at the time — even if he were subsequently acquitted — it’s hard to believe that this case would have ever achieved national prominence. It’s not exactly as if shootings are an uncommon occurrence in the United States.

"Probable cause" also has a role in project management. Feasibility studies aren't free, so you have to pick your shots. Establishing a proof level equivalent to "probable cause" helps you focus on the issues that matter most.

Should Zimmerman be put on trial? Prosecutors can bring charges based on no more than probable cause, but a grand jury, which reviews the case prior to trial, is supposed to make its judgment based on the preponderance of the evidence, that is, whether the charge is more likely to be true than not. Prosecutors also have the benefit of a more comprehensive investigation than is normally performed at the time of an arrest. The additional information may well alter an initial determination.

A full-fledged project is the equivalent of a trial, because you have to do all the work, and in the process, you may discover surprises and risks not part of the initial process. The initial project plan rests not on final determinations, but rather on the preponderance of the evidence available in the initiating and planning stages.

Should Zimmerman be convicted? Depending on the charge, two standards may apply: clear and convincing evidence, or proof beyond reasonable doubt. The first standard applies in some civil cases; the second in criminal cases. Zimmerman clearly falls into the second category. Our adversarial legal system is designed to put the strongest burden on the shoulders of those who would convict someone of a crime.

There is a final standard of proof, proof beyond a shadow of a doubt, but that is often an impossible burden to meet, so that standard doesn't apply in a criminal case. Indeed, given that there are only two people with full knowledge, and one of them is dead, the "shadow of doubt" may never be completely eliminated.

At the end of the project, the results must necessarily speak for themselves. Did you solve the issue that led to the project? Perhaps clear and convincing evidence is all you need, but sometimes a higher burden of proof rests on the project manager.
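The hierarchy of proof lends itself to a concrete sketch. Here the escalating standards are modeled as thresholds on a single 0-to-1 evidence scale; the numeric cutoffs are my own illustrative assumptions, not legal doctrine:

```python
# Illustrative only: standards of proof as rising thresholds on a
# 0-to-1 "strength of evidence" scale. The cutoffs are invented for
# this sketch; courts do not quantify proof this way.
STANDARDS = [
    ("reasonable suspicion", 0.20),           # enough to investigate
    ("probable cause", 0.40),                 # enough to arrest
    ("preponderance of the evidence", 0.51),  # enough to try
    ("clear and convincing evidence", 0.75),  # some civil cases
    ("beyond reasonable doubt", 0.90),        # criminal conviction
]

def decisions_supported(evidence):
    """List every stage of the decision hierarchy this evidence supports."""
    return [name for name, threshold in STANDARDS if evidence >= threshold]

# Evidence of middling strength supports the early stages only.
print(decisions_supported(0.55))
# ['reasonable suspicion', 'probable cause', 'preponderance of the evidence']
```

The point falls out directly: saying "yes" at the 0.55 level endorses suspicion, arrest, and trial while saying nothing at all about conviction.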

Justified and Unjustified Opinions

If there's one fundamental American right, it's the right to an opinion, but not all opinions are equal. To have a legitimate, justified opinion, you need to have some actual facts and apply an appropriate standard of proof. As long as you're clear about what stage of the decision is currently on the table, saying "yes" to one level doesn't necessarily equate to a final determination.

In an imperfect world, armed with imperfect knowledge, we cannot escape the reality that we must necessarily make some decisions and hold some opinions in advance of all the facts. And that raises the question about what makes an opinion justified and legitimate.

It's often argued that [insert side of the argument] has already convicted [Zimmerman] [Martin], rather than wait for the judicial machinery to work its course. And, of course, some extremists have already rushed to final judgment on the matter. But the vast majority of us have not.

That doesn't mean judgments, even in preliminary stages, are inappropriate. As noted, there's no reasonable doubt of the basic fact that Zimmerman shot Martin. It's hard for anyone to argue that the police should not have looked into the matter.

Can a "prudent and cautious person" reasonably conclude, at this stage of the investigation, that Zimmerman should be arrested? Clearly, the answer is yes. Even without the discredited claim of a racial slur, the 911 call, in which Zimmerman disregarded the recommendation of the dispatcher not to get out of his car, casts a "reasonable amount of suspicion" on Zimmerman's actions.

Does the preponderance of the evidence lean in favor of putting Zimmerman on trial? The Florida prosecutor assigned to the case has determined the answer to be yes, so it's not unreasonable or unjustified for an outsider to agree with it.

It's possible, and even legitimate, to have a definite opinion on the questions of arrest and trial. It's much less justified to have a definite and final opinion about Zimmerman's ultimate guilt. The standard of "reasonable doubt" has not yet been overcome, and will not be until all the evidence — including evidence presented by the defense — is out in the open.

The claim is often made that Trayvon Martin's supporters have already convicted Zimmerman, but that's an exaggeration. It's true that Martin's supporters have generally concluded that Zimmerman should be arrested and tried, but those opinions are valid at a much lower standard of proof. It's not "convicting" Zimmerman to argue that the evidence supports arrest and trial.

Bias and Judgment

When it comes to the final decision, a prudent person should be cautious. You not only need evidence, but the evidence itself often needs to be challenged and validated. But that doesn't mean you can't hold a preliminary opinion, as long as you're willing to modify it in the face of persuasive information to the contrary.

If you rate opinions about Zimmerman on a scale of 10 (certain he's guilty) to 0 (certain he's innocent), people with scores of 10 or 0 are clearly making decisions ahead of the facts. For such people, the outcome of a trial will mean nothing: if the verdict goes their way, they knew it all along, but if the verdict goes against their opinions, it will mean that the trial was rigged and thus invalid. People who are unwilling to revise their opinions in the face of new facts aren't reasonable. Fortunately, their number isn't large.

More common are people whose opinions fall in the 8-to-2 range: strongly convinced of Zimmerman’s guilt or innocence, but not so locked into their positions that persuasive contrary evidence cannot change their minds. People in this category need to be extremely wary of the effects of cognitive bias on their information processing and decision making. Even if you are trying to be open-minded, once a mind is made up, inertia takes hold. Change is difficult.

People with opinions in the 4-6 range are in less danger from cognitive bias, because their opinions are inherently more tentative. In my case, I am not paying detailed attention to the story, because of the general unreliability of most of what's published at this point. That's not the same as saying I have no opinion, or that I don't lean toward one side, but I'm fully aware that the eventual factual record may not support my tentative ideas. Even so, cognitive bias can have its effects even on people of more moderate opinion, so it's important to stay on one's guard.

But that's opinions about Zimmerman's guilt. Opinions about arrest or trial fall into different standards of proof.

The Process of Proof


After watching innumerable crime shows, we all should have a pretty good idea how the police and trial process is supposed to work. When it goes according to plan, it does a reasonably good job of establishing a factual record in support of a decision. Of course, your mileage may vary.

Even if the story seems straightforward at first glance, investigators still go through the process of reconstructing the story, gathering physical evidence, and taking initial testimony. Raw evidence, of course, is of little use until it’s processed — the body examined, witnesses interviewed in detail, a timeline reconstructed, DNA tests performed, etc. In processing evidence, investigators create a story, a timeline of events and people that ideally reveals the truth of a situation.

Stories, of course, always begin as outlines, and as they take shape and form, you can fill in greater detail. Sometimes, stories surprise you, and you find yourself in an unexpected place. Minor characters (“persons of interest”) become suspects — people with motive, means, and opportunity. Some suspects are ruled out as the process moves forward; other suspects are eventually charged with a crime.

But all that assumes police and prosecutors are doing their jobs properly. To me, the most important question is not whether George Zimmerman committed second degree murder in the shooting of Trayvon Martin, but whether he should have been arrested in the first place.

What's the Real Problem?

Questions about the process will probably not be part of the trial, because it’s Zimmerman, not the police department, who is its subject. One hopes that the Justice Department investigation will address these matters. Again, I don’t claim to know the right answers. But as a management consultant, my job is to ask the right questions. Here are some things I'd like to know.

First, was the behavior of the Sanford Police Department appropriate? To measure "appropriate," consider the following: Did the department follow established protocols? Were those protocols adequate to the situation? Did any special circumstances make it more difficult to follow the protocols? What can we do about that in the future?

If protocols were not followed, why not? Are there management or organizational issues, problems with internal culture, changes in the environment, or other factors? And if so, what can we do to address them? Even if the more serious allegations of political interference by Zimmerman’s father (a retired Virginia magistrate) or charges of institutional racism turn out to be true, the reflexive “search for the guilty” is a much less effective response than fixing the problem.

Wasting a Crisis

I have a stronger opinion about process in this case than I have about guilt. One of the facts of management consulting is that if you have a really bad outcome, you need to adjust your process as necessary to keep it from happening again.

The Sanford police were, no doubt, shocked at the public response and degree of national interest. Regardless of the assignment of fault or blame, they have to adapt to the new reality that their work will receive increasing scrutiny in the future. The 1982 Chicago Tylenol murders were clearly not the fault of the manufacturer, but the ubiquitous tamper-evident seals on all that we eat or drink date from that crime. And if they truly are at fault, then there’s all the more reason to work harder and better in the future.

Too much emphasis on determining guilt can, unfortunately, detract from the more important matter of change. In the cognitive biases series, I covered my version of the “Semmelweis Effect,” the reality that people’s views harden when you accuse them of terrible crimes. Moral indignation can backfire, and that does no one any good.

Because it’s so easy to identify the most inflammatory and outrageous pieces on either side, it’s equally easy to miss the large amount of reasonable, proportionate commentary—also on both sides. A crisis, as has been noted, is a terrible thing to waste. That doesn’t mean we want a crisis or welcome it when it comes, but if we waste the crisis, we’re often doomed to experience it again. That’s the worst possible outcome.

It’s clear that something went terribly wrong in the Trayvon Martin case. Less clear is what went wrong, and the purpose of trial and investigation is to establish the narrative in an authoritative manner. Whether the trial and investigation accomplish that goal remains to be seen.

But it’s truly the most important part of the matter. Because once we answer that question, we can move forward to the two questions that matter in the long run:

Why did it happen?

And how can we keep it from happening again?

Tuesday, November 22, 2011

Wikipedia Mon Amour

The Problem With Wikipedia (xkcd), Randall Munroe
http://xkcd.com/214/

My late uncle Jack Killheffer was science editor of the Encyclopedia Britannica in the 1970s and 1980s. I remember his office in downtown Chicago as a sea of papers. Piles of documents filled most of the floor space. I’ve never been a clean desk person, but his desk was an archeological dig. He was a chain smoker; his ashtray was the size of a small dinner plate and resembled a fireplace that hadn’t been cleaned regularly.

I thought he had the world’s coolest job.

I’ve always liked encyclopedias. Before Uncle Jack joined Britannica, we had an Encyclopedia Americana. I browsed through the volumes randomly throughout my childhood. I seldom forget what I read; I can still trot out all sorts of odd information gleaned randomly from encyclopedias.

The oldest surviving encyclopedia is Pliny the Elder’s Naturalis Historia. He hadn’t quite finished proofing it when he died in the eruption of Vesuvius in 79 AD. Given the Greek root of the word (ἐγκύκλιος παιδεία, “general education”), it’s clear his was not the first, and he certainly wasn’t the last. De Nuptiis Mercurii et Philologiae (“The Wedding of Mercury and Philologia”), first of the medieval encyclopedias, came out in the 4th or 5th century.

The Arabic renaissance produced the Encyclopedia of the Brethren of Purity; the Chinese in the 11th century released the Four Great Books of Song (Book 4, The Prime Tortoise of the Record Bureau, contained 9.4 million Chinese characters in 1,000 volumes).

The invention of printing triggered an explosion, including Chambers' Cyclopaedia, or Universal Dictionary of Arts and Sciences (1728), and the Encyclopédie of Diderot and D'Alembert (1751). Of the great encyclopedias of the 18th century, the Britannica is the oldest survivor, dating to 1768.

I’m proud to note that the first encyclopedia published in the United States was Dobson’s Encyclopedia (1788-1797), published by Philadelphia printer Thomas Dobson (no relation, alas). It was, for the most part, a rip-off of the 3rd edition of the Britannica, with various adjustments made to correct a British bias. Washington, Jefferson, Burr, and Hamilton all owned copies.

Although encyclopedias have a reputation for being objective, thorough, and reliable, accusations (some solid, some less so) of unfairness and inaccuracy have been leveled at just about all of them at one time or another. That’s unsurprising. In our long discussion of cognitive biases both here and here, we learned that misinformation and misperception were fundamental parts of the human condition.

Uncle Jack told me stories about the sensitive political negotiations that took place in pure science entries. People are passionate about facts and interpretations, and both are subject to argument. When I worked for the National Air and Space Museum (NASM), there were similar issues. The Smithsonian’s official position on the relative achievements of Wilbur and Orville Wright versus Smithsonian secretary and pioneer aviation figure Samuel Pierpont Langley is shaped in part by the terms of a contract between Orville Wright and the Smithsonian that contains the following:
"Neither the Smithsonian Institution or its successors, nor any museum or other agency, bureau or facilities administered for the United States of America by the Smithsonian Institution or its successors shall publish or permit to be displayed a statement or label in connection with or in respect of any aircraft model or design of earlier date than the Wright Aeroplane of 1903, claiming in effect that such aircraft was capable of carrying a man under its own power in controlled flight."
There’s been some minor controversy over the years as to whether NASM is unfairly taking sides, but the curators I knew assured me that as far as they were concerned, the facts of the matter lined up just fine with the language of the contract. The Langley claims should have been repudiated. The Wrights were first to fly.

Working in original-source history helped me develop a sense of how much of what we know is the result of a messy and imperfect process. We muddle our way to knowledge, and there’s nothing inherently wrong with that. We don’t have too many other options. Our knowledge not only isn’t perfect, it can’t be.

For that reason, I’ve often felt that the criticisms leveled against Wikipedia for inaccuracy and bias are excessive — though that doesn’t mean they’re wrong. All encyclopedias are crowdsourced; Wikipedia differs only in that the crowd isn’t paid. Well, not in money, anyway. There are many rewards in writing for an encyclopedia, not least the imprimatur of authenticity and accuracy conveyed by the brand name.

We all know that Wikipedia doesn’t count as a final authority — if you can’t confirm what you say from a more reliable source, no one will take you seriously. But for quick reference, an overview, a list of sources, and some basic preliminary data, it’s unbeatable. I probably use it 8-10 times every single day — four times for this article alone. If my factual need is trivial and extreme accuracy not necessary, that’s enough. When I need more, I dig deeper.

That’s true not only of Wikipedia, of course, but of any other source of information. All data — and even more so its interpretation — is suspect. Only by consulting multiple sources and striving for awareness of one’s own cognitive biases can we arrive at some reasonable approximation of the truth.

Wikipedia culture deservedly throws suspicion on its contributors. A neutral point of view is essential, but Wikipedia frowns most heavily on people receiving money for contributing to Wikipedia articles, ignoring many other sources of bias. There’s a large community of Wikipedia editors-for-hire (I know several of them myself), but the Wikipedia culture forces them to hide their conflicts rather than share them. Wikipedia vigilantes have been known to vandalize pages when a contributor is accused of taking money, but that doesn’t correct the problem, it makes it worse.

Bias is unavoidable, but the best cure for bias is sunlight. Wikipedia is large enough and important enough that it’s legitimate for people to earn money from it. There’s nothing new here; encyclopedia contributors pre-Wikipedia expected remuneration as a matter of course. It takes significant time, effort, and work to write and edit a good article. Volunteerism, as wonderful as it is, can only take you so far.

It’s important for bias and conflict of interest to be revealed, but not to be punished. The process of peer review and vigorous debate should be aimed not at expelling the biased, but rather toward greater accuracy, completeness, and consideration of all points of view.

Wikipedia is running one of its periodic fundraising campaigns now, and in the same way I contribute to public radio, I usually contribute to Wikipedia; I use it enough. I urge you to do the same. At the same time, I tend to think Wikipedia would do well to consider running Google-style ads; when done correctly, they add value to the search rather than corrupt it.

There’s nothing wrong with making money in the encyclopedia business. The most important thing is to get the information right.

Tuesday, January 25, 2011

An Index of Cognitive Biases

Here's an index to all the installments of Cognitive Biases. Click on any "Part" name to go directly to that installment. You can also find the bias you’re interested in by clicking in the tag cloud on the right. To find all posts concerning cognitive biases, click the very big phrase.

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect.

Part 2 — Base rate fallacy, congruence bias, experimenter’s bias

Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect

Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias

Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia

Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias

Part 7 — False consensus effect, false memory, Forer effect, framing, fundamental attribution error

Part 8 — Gambler’s fallacy, halo effect

Part 9 — Hawthorne effect, herd instinct, hindsight bias, hyperbolic discounting

Part 10 — Illusion of asymmetric insight, illusion of control, illusory superiority, impact bias, information bias, ingroup bias, irrational escalation

Part 11 — Just-world phenomenon, loss aversion, ludic fallacy, mere exposure effect, money illusion

Part 12 — Need for closure, neglect of probability, “not-invented-here” (NIH) syndrome, notational bias

Part 13 — Observer-expectancy effect, omission bias, optimism bias, ostrich effect, outgroup homogeneity bias, overconfidence effect

Part 14 — Pareidolia, planning fallacy, post-purchase rationalization

Part 15 — Projection bias, pseudocertainty effect, publication bias

Part 16 — Reactance, reminiscence bump, restraint bias, rosy retrospection

Part 17 — Selection bias, selective perception, self-fulfilling prophecy

Part 18 — Self-serving bias, Semmelweis reflex, serial position effect

Part 19 — Status quo bias, stereotyping, subadditivity effect

Part 20 — Subjective validation, suggestibility, system justification theory

Part 21 — Telescoping effect, Texas sharpshooter fallacy, trait ascription bias

Part 22 — Ultimate attribution error, valence effect, von Restorff effect, wishful thinking, zero-risk bias

Tuesday, January 18, 2011

A Credit To His Race (The final installment of Cognitive Biases)

At long last, we reach the end of our series on Cognitive Biases. In this installment, we'll study the ultimate attribution error, the valence effect, the von Restorff effect, wishful thinking, and the zero-risk bias.


Ultimate Attribution Error

A phrase I used to hear from time to time in my Alabama days was, “He’s a credit to his race.” It was never used to refer to a white person, of course, but only to blacks. On the surface, it appears to be a compliment, but it’s an example of the ultimate attribution error.

In the ultimate attribution error, people view negative behaviors by members of an outgroup as normal traits, and positive behaviors as exceptions to the norm. It relates to the fundamental attribution error, in which we explain our own behavior as a reaction to situations but other people’s behavior as a matter of basic character, and it clearly relates to stereotyping. The ultimate attribution error is one of the basic mechanisms of prejudice.

Valence Effect

In psychology, valence refers to the positive or negative emotional charge of a given event or circumstance. The valence effect is a probability bias in which people overestimate the likelihood of good outcomes relative to bad ones: it’s the basic mechanism that drives the sale of lottery tickets.

Numerous studies demonstrate the valence effect. In one study, people rated themselves more likely to draw a card with a smiling face than one with a frowning face from a shuffled deck.

The valence effect can be considered a form of wishful thinking, but it’s been shown in some cases that belief in a positive outcome can increase the odds of achieving it — you may work harder or refuse to give up as early.

Von Restorff Effect

First identified by Dr. Hedwig von Restorff in 1933, this bias (also called the isolation effect) predicts that an item that "stands out like a sore thumb" (called distinctive encoding) is more likely to be remembered than other items. For instance, if a person examines a shopping list with one item highlighted in bright green, he or she will be more likely to remember the highlighted item than any of the others.

Wishful Thinking

This popular cognitive bias involves forming beliefs and making decisions based on your imagination rather than evidence, rationality, or reality. All else being equal, the valence effect holds: people predict positive outcomes are more likely than negative ones.

There is also reverse wishful thinking, in which someone assumes that because it’s bad it’s more likely to happen: Murphy’s Law as cognitive bias.

Wishful thinking isn’t just a cognitive bias, but a logical fallacy: I wish that P would be true/false; therefore, P is true/false. It’s related to two other fallacies that are reciprocals of one another: negative proof and argument from ignorance.

In negative proof, the absence of certainty on one end of the argument is taken as proof of the opposite end: climate scientists cannot say with 100% certainty that their claims about global warming are true, therefore, they must be false. The reciprocal fallacy is known as the argument from ignorance: no one can be sure that there is no God; therefore, there is a God.

Zero-Risk Bias

Since 2000, terrorist attacks against the United States or Americans abroad have killed about 3,250 people, the vast majority of them on 9/11. Your odds of being a victim are about one in ten million.

The Transportation Security Administration consumes $5.6 billion a year. Its job is to reduce the chance of terrorist attacks on transportation infrastructure, primarily air, to zero. Let’s assume that they are completely effective in their mission. If so, the cost per life saved is $1.7 million.

Perhaps that’s a completely reasonable price to pay to save a human life. However, from a logical point of view, you have to consider what else $5.6 billion might accomplish. Over a ten-year period, about 420,000 people die in car accidents. If $5.6 billion would eliminate 100% of the risk of aviation terrorist deaths, or 10% of the risk of car accident deaths, which risk would you choose to attack?

Common sense argues for the 10% reduction in car accidents, but the zero-risk bias argues the opposite: it’s the preference for completely eliminating one risk (even a small one) over reducing a larger risk. It values certainty over residual risk.

There are other arguments that can be made in support of anti-terrorist activities, but the zero-risk bias is also operational here, and it leads to faulty decisions.
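The arithmetic behind this comparison is easy to check. A minimal sketch, using the rough figures quoted above (the comparison matters more than the precise numbers):

```python
# Figures quoted in the post (rough, order-of-magnitude inputs).
tsa_budget_per_year = 5.6e9      # TSA annual budget, in dollars
terror_deaths_since_2000 = 3250  # deaths from terrorist attacks
car_deaths_per_decade = 420_000  # U.S. car-accident deaths over ten years

# If one year's budget prevented every one of those deaths:
cost_per_life = tsa_budget_per_year / terror_deaths_since_2000
print(f"${cost_per_life / 1e6:.1f} million per life saved")  # $1.7 million

# Lives a 10% cut in car-accident deaths would save each year:
lives_saved = 0.10 * car_deaths_per_decade / 10
print(f"{lives_saved:,.0f} lives per year")  # 4,200 lives per year
```

Even under the generous assumption of perfect TSA effectiveness, the same dollars aimed at a partial reduction of the far larger risk save many more lives.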


______________________________________________



With this installment, our long march through the wilds of cognitive bias comes to an end. I deeply appreciate the many insightful comments you’ve provided.


And now for something completely different…

Tuesday, January 11, 2011

Did You Hear the One About the Texas Sharpshooter? (Part 21 of Cognitive Biases)

In this installment of Cognitive Biases, we'll learn why your memories get unstuck in time, why establishing hypotheses backward is a fallacy, and why we think other people always behave the same way.


Telescoping Effect

The telescoping effect is a memory bias, first documented in a 1964 article in the Journal of the American Statistical Association. People tend to perceive recent events as more remote in time than they are (backward telescoping) and remote events as more recent than they are (forward telescoping). The Galton-Crovitz test measures the effect; you can take the test here.

Texas Sharpshooter Fallacy

A sales manager I once knew had an infallible sense of what was going to sell. Because he didn’t want to waste his time, he put all his emphasis on selling what he knew would sell, and didn’t bother pushing the stuff that wouldn’t sell anyway.

This is an example of the Texas sharpshooter fallacy. The Texas sharpshooter, you see, fired a bunch of shots at the side of the barn, went over and found a cluster of hits, and drew a bullseye over them. When you don’t establish your hypothesis first and test it second, your conclusion is suspect.

This was first described in the field of epidemiology. For example, the number of cases of disease D in city C is greater than would be expected by chance. City C has a factory that has released amounts of chemical agent A into the environment. Therefore, agent A causes disease D.

Not so fast.

The cluster may be the result of chance, or there may be another cause. Now, if you conclude that agent A should be tested as a possible trigger of disease D, that’s a reasonable inference.
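A quick simulation (my illustration, not from the post) shows how readily clusters appear in pure noise, which is why the hypothesis has to come before the bullseye is drawn:

```python
import random

# Illustration: fire 100 random "shots" at a 10x10 grid of barn-wall
# cells and find the busiest cell. Uniform chance averages 1 shot per
# cell, yet some cell always ends up looking like a target.
random.seed(42)  # fixed seed so the run is reproducible

counts = {}
for _ in range(100):
    cell = (random.randrange(10), random.randrange(10))
    counts[cell] = counts.get(cell, 0) + 1

busiest_cell = max(counts, key=counts.get)
print(busiest_cell, counts[busiest_cell])
```

Whatever cell comes up busiest, drawing the target around it after the fact is exactly the sharpshooter’s move.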

Finding a Nostradamus prophecy that could arguably relate to a big event in history is another example. Here’s a famous prophecy that appears to predict Hitler:

Beasts wild with hunger will cross the rivers,
The greater part of the battle will be against Hister.
He will cause great men to be dragged in a cage of iron,
When the son of Germany obeys no law.

But out of a thousand prophecies, what are the odds that none of them will relate to a real event?


Trait Ascription Bias

Trait ascription bias is the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable in their personal traits across different situations. This may be because our own internal states are much more observable and available to us than those of others. A similar bias on the group level is called the outgroup homogeneity bias.

The degree to which we fall into this bias often depends on how well we know the other person, but not entirely. “You always behave like that” is an accusation most of us have leveled at a loved one at some time in our lives.


More next week.




Previous Installments

You can find the bias you’re interested in by clicking in the tag cloud on the right. To find all posts concerning cognitive biases, click the very big phrase.

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect.

Part 2 — Base rate fallacy, congruence bias, experimenter’s bias

Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect

Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias

Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia

Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias

Part 7 — False consensus effect, false memory, Forer effect, framing, fundamental attribution error

Part 8 — Gambler’s fallacy, halo effect

Part 9 — Hawthorne effect, herd instinct, hindsight bias, hyperbolic discounting

Part 10 — Illusion of asymmetric insight, illusion of control, illusory superiority, impact bias, information bias, ingroup bias, irrational escalation

Part 11 — Just-world phenomenon, loss aversion, ludic fallacy, mere exposure effect, money illusion

Part 12 — Need for closure, neglect of probability, “not-invented-here” (NIH) syndrome, notational bias

Part 13 — Observer-expectancy effect, omission bias, optimism bias, ostrich effect, outgroup homogeneity bias, overconfidence effect

Part 14 — Pareidolia, planning fallacy, post-purchase rationalization

Part 15 — Projection bias, pseudocertainty effect, publication bias

Part 16 — Reactance, reminiscence bump, restraint bias, rosy retrospection

Part 17 — Selection bias, selective perception, self-fulfilling prophecy

Part 18 — Self-serving bias, Semmelweis reflex, serial position effect

Part 19 — Status quo bias, stereotyping, subadditivity effect

Part 20 — Subjective validation, suggestibility, system justification theory

Tuesday, January 4, 2011

That Psychic Was So *Accurate*! (Part 20 of Cognitive Biases)

In our 20th installment of Cognitive Biases, we cover subjective validation, the tendency to think a statement is true if it means something to us; suggestibility, the extent to which we accept or act on the suggestions of others; and system justification theory, the cognitive bias of patriotism.


Subjective Validation

Subjective validation, also known as the personal validation effect, is the tendency to consider a statement correct if it’s meaningful to the listener. It’s related to the Forer effect and validated by confirmation bias, and it’s the basic technique that reinforces belief in paranormal phenomena. The listener focuses on and remembers the accurate statements and forgets or ignores the inaccurate ones, forming an impression of the psychic’s success that is wildly inflated.

Say anything, and it’s possible to find meaning in it. “I sense a father figure trying to contact you from the spirit world,” becomes validated if there’s anyone in the subject’s life that can be made to qualify. “I hear the phrase ‘broken wheel,’” the psychic says, and of all the thousands of possible associations, the subject finds one with personal meaning, and the psychic is validated.

What if the phrase “broken wheel” evokes no associations? Then the psychic says, “I hear the name ‘Charles,’” and so on until there’s a winner. Selective memory comes into play as well: the subject forgets the ‘broken wheel’ miss but remembers the ‘Charles’ hit vividly.

The strength of the effect depends less on the skill of the psychic, of course, and much more on the level of desire of the subject. If we want to believe, we’ll find the evidence we need.

Suggestibility

You are suggestible to the extent you are inclined to accept or act on the suggestions of others. Some people are naturally more suggestible than others, of course, but suggestibility also varies within the same person: intense emotions, current levels of self-esteem or assertiveness, and age all play a role.

Suggestibility plays a big role in hypnosis. According to Dr. John Kappas, there are three different types of suggestibility.


  • Emotional Suggestibility. A suggestible behavior characterized by a high degree of responsiveness to inferred suggestions that affect emotions and restrict physical body responses; usually associated with hypnoidal depth. Thus the emotional suggestible learns more by inference than by direct, literal suggestions.
  • Physical Suggestibility. A suggestible behavior characterized by a high degree of responsiveness to literal suggestions affecting the body, and restriction of emotional responses; usually associated with cataleptic stages or deeper.
  • Intellectual Suggestibility. The type of hypnotic suggestibility in which a subject fears being controlled by the operator and is constantly trying to analyze, reject or rationalize everything the operator says. With this type of subject the operator must give logical explanations for every suggestion and must allow the subject to feel that he is doing the hypnotizing himself.


With all of that, there’s surprisingly little consensus on what suggestibility is and how it works. Is it a function of character, a learned habit, a function of language acquisition and empathy, a biased term used to provoke people to greater resistance, or something else?

Common examples of suggestible behavior in everyday life include "contagious yawning" (multiple people begin to yawn after observing a person yawning) and the medical student syndrome (a person begins to experience symptoms of an illness after reading or hearing about it).

The placebo response may also be based, at least in part, on individual differences in suggestibility. Suggestible persons may be more responsive to various forms of alternative health practices that rely on patient belief in the intervention, and people who are highly suggestible may be prone to poor judgments because they do not process suggestions critically, falling prey to emotion-based advertising.


System Justification Theory

System justification theory (SJT) is a scientific theory within social psychology that proposes people have a motivation to defend and bolster the status quo, that is, to see it as good, legitimate, and desirable.

According to system justification theory, people not only want to hold favorable attitudes about themselves (ego-justification) and their own groups (group-justification), but they also want to hold favorable attitudes about the overarching social order (system-justification). A consequence of this tendency is that existing social, economic, and political arrangements tend to be preferred, and alternatives to the status quo are disparaged.

Early SJT research focused on compensatory stereotypes. Experiments suggested that the widespread endorsement of stereotypes such as "poor but happy" or "rich but miserable" exists to balance out the gap between those of low and high socioeconomic status. Later work suggested that these compensatory stereotypes are preferred by those on the left, while people on the right prefer noncomplementary stereotypes such as "poor and dishonest" or "rich and honest," which rationalize inequality rather than compensate for it.

According to system justification theory, this motive is not unique to members of dominant groups, who benefit the most from the current regime; it also affects the thoughts and behaviors of members of groups who are seemingly incurring disadvantages by it (e.g., poor people, racial/ethnic minorities). System justification theory therefore accounts for counter-intuitive evidence that members of disadvantaged groups often support the societal status quo (at least to some degree), often at considerable cost to themselves and to fellow group members.

System justification theory differs from the status quo bias in that it is predominately motivational rather than cognitive. Generally, the status quo bias refers to a tendency to prefer the default or established option when making choices. In contrast, system justification posits that people need and want to see prevailing social systems as fair and just. The motivational component of system justification means that its effects are exacerbated when people are under psychological threat or when they feel their outcomes are especially dependent on the system that is being justified.



More next week.

To read the whole series, click "Cognitive bias" in the tag cloud to your right, or search for any individual bias the same way.

Tuesday, December 28, 2010

When 1+1=3 (Part 19 of Cognitive Biases)

Our 19th installment of Cognitive Biases covers the status quo bias, stereotyping, and the subadditivity effect.


Status Quo Bias

Sigmund Freud suggested that there were only two reasons people changed: pain and pressure. Evidence for the status quo bias, a preference not to change established behavior (even if negative) unless the incentive to change is overwhelming, comes from many fields, including political science and economics.

Another way to look at the status quo bias is inertia: the tendency of objects at rest to remain at rest until acted upon by an outside force. The corollary, that objects once in motion tend to stay in motion until acted upon by an outside force, gives hope for change. Unfortunately, one of those outside forces is friction, which is as easy to see in human affairs as it is in the rest of the material universe.

Daniel Kahneman (this time without Amos Tversky) has devised experiments that reliably produce status quo bias effects. The bias seems to be a combination of loss aversion and the endowment effect, both described elsewhere in this series.

The status quo bias should be distinguished from a rational preference for the status quo in any particular instance. Change is not in itself always good.

Stereotyping

A stereotype, strictly speaking, is a commonly held popular belief about a specific social group or type of individual. It’s not identical to prejudice:


  • Prejudices are abstract-general preconceptions or abstract-general attitudes towards any type of situation, object, or person.
  • Stereotypes are generalizations of existing characteristics that reduce complexity.


The word stereotype originally comes from printing: a duplicate impression of an original typographic element, used for printing instead of the original. (A cliché, interestingly, is the technical term for the printing surface of a stereotype.) It was journalist Walter Lippmann who first used the word in its modern interpersonal sense. A stereotype is a “picture in our heads,” he wrote, “whether right or wrong.”

Mental categorizing and labeling are both necessary and inescapable. Automatic stereotyping is natural; the necessary (but often omitted) follow-up is a conscious check to adjust the impression.

A number of theories have been derived from sociological studies of stereotyping and prejudicial thinking. In early studies it was believed that stereotypes were only used by rigid, repressed, and authoritarian people. Sociologists concluded that this was a result of conflict, poor parenting, and inadequate mental and emotional development. This idea has been overturned; more recent studies have concluded that stereotypes are commonplace.

One theory as to why people stereotype is that it is too difficult to take in all of the complexities of other people as individuals. Even though stereotyping is inexact, it is an efficient way to mentally organize large blocks of information. Categorization is an essential human capability because it enables us to simplify, predict, and organize our world. Once one has sorted and organized everyone into tidy categories, there is a human tendency to avoid processing new or unexpected information about each individual. Assigning general group characteristics to members of that group saves time and satisfies the need to predict the social world in a general sense.

Another theory is that people stereotype out of the need to feel good about themselves. Stereotypes protect against anxiety and enhance self-esteem: designating one’s own group as the standard or normal group and assigning others to groups considered inferior or abnormal provides a sense of worth. In that sense, stereotyping is related to the ingroup bias.

Subadditivity Effect

The subadditivity effect is the tendency to judge the probability of the whole to be less than the sum of the probabilities of its parts.

For instance, subjects in one experiment judged the probability of death from cancer in the United States to be 18%, from heart attack 22%, and from "other natural causes" 33%. Other participants judged the probability of death from any natural cause to be 58%. Natural causes are made up of precisely cancer, heart attack, and "other natural causes"; the sum of the three separate probabilities, however, is 73%. According to Tversky and Koehler in a 1994 study, this kind of result is observed consistently.
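The gap is easy to verify with the figures quoted above:

```python
# Judged probabilities of death from each specific natural cause,
# using the Tversky & Koehler figures described above.
parts = {"cancer": 0.18, "heart attack": 0.22, "other natural causes": 0.33}
judged_whole = 0.58  # separate subjects' estimate for death from any natural cause

sum_of_parts = sum(parts.values())
print(f"sum of parts: {sum_of_parts:.2f}")   # 0.73
print(f"judged whole: {judged_whole:.2f}")   # 0.58
# Probability theory requires the whole to equal the sum of its mutually
# exclusive parts; the shortfall below is the subadditivity effect.
print(f"shortfall:    {sum_of_parts - judged_whole:.2f}")  # 0.15
```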

The subadditivity effect is related to other math-oriented cognitive biases, including the denomination effect, the base rate fallacy, and especially the conjunction fallacy.


More next week.


Tuesday, December 21, 2010

Rashomon Reality (Part 18 of Cognitive Biases)

Our 18th installment of Cognitive Biases covers the self-serving bias, offers a new interpretation of the Semmelweis reflex, and looks at the two sides of the serial position effect.

Self-Serving Bias

A self-serving bias occurs when people attribute their successes to internal or personal factors but attribute their failures to situational factors beyond their control: to take credit for success but to shift the blame for failure. It also occurs when we are presented with ambiguous information and evaluate it in the way that best suits our own interest.

Several reasons have been proposed to explain the occurrence of self-serving bias: maintaining self-esteem, making a good impression, or sometimes that we’re aware of factors outsiders might miss.

The bias has been demonstrated in many areas. For example, victims of serious occupational accidents tend to attribute their accidents to external factors, whereas their coworkers and management tend to attribute the accidents to the victims' own actions.

When the self-serving bias causes people to see Rashomon reality, the ability to negotiate can be dramatically impaired. One of the parties may see the other side as bluffing or completely unwilling to be reasonable, based on the self-serving interpretation of the ambiguous evidence.

In one experiment, subjects played the role of either the plaintiff or the defendant in a hypothetical car accident case with a maximum potential damages payment of $100,000, using real money at the rate of $1 for every $10,000 of hypothetical damages.

They then tried to settle within a fixed amount of time; if they failed, hefty legal fees were deducted from the final amount. On average, plaintiffs thought the likely award would be $14,500 higher than the defendants did. The farther apart the two sides’ perceptions of a “fair” figure, the less likely they were to reach an agreement in time.

The self-serving bias, interestingly, seems not to apply to our struggles with personal computers. When we can’t get them to work, we blame ourselves rather than the technology. People are so used to bad functionality, counterintuitive features, bugs, and sudden crashes in contemporary software that they tend not to complain about them; instead, they believe it is their personal responsibility to anticipate possible issues and find solutions to computer problems. This phenomenon has recently been observed in several human-computer interaction studies.


Semmelweis Reflex

Dr. Ignaz Semmelweis, assistant to the head of obstetrics at the Vienna General Hospital in the 1840s, discovered that his clinic, where doctors were trained, had a maternal mortality rate from puerperal fever (childbed fever) that averaged 10 percent. A second clinic, which trained midwives, had a mortality rate of only four percent.

This was well known outside the hospital. Semmelweis described women begging on their knees to be admitted to the midwives’ clinic rather than risk the care of doctors. This, Semmelweis said, “made me so miserable that life seemed worthless.” He began a systematic analysis to find the cause, ruling out overcrowding, climate, and other factors. The breakthrough came with the death of an old friend from a condition similar to puerperal fever after he was accidentally cut with a student’s scalpel during an autopsy.

Semmelweis imagined that some sort of “cadaverous particles” might be responsible, germs being at that time unknown. Midwives, after all, didn’t perform autopsies. Accordingly, Semmelweis required doctors to wash their hands in a mild bleach solution after performing autopsies. Following the change in procedure, death rates in the doctors’ clinic dropped almost immediately to the levels of the midwives’ clinic.

This theory contradicted medical belief of the time, and Semmelweis eventually was disgraced, lost his job, began accusing his fellow physicians of murder, and eventually died in a mental institution, possibly after being beaten by a guard.

Hence the Semmelweis reflex, normally described as a reflex-like rejection of new knowledge because it contradicts entrenched norms, beliefs, or paradigms: the “automatic rejection of the obvious, without thought, inspection, or experiment.”

Some credit Robert Anton Wilson for the phrase. Timothy Leary defined it as, “Mob behavior found among primates and larval hominids on undeveloped planets, in which a discovery of important scientific fact is punished.”

I don’t agree. I think there’s something else going on here.

The Semmelweis reflex, I think, relates more to the implied threat and criticism the new knowledge carries for old behavior. Let’s go back to Semmelweis’ original discovery. If his hypothesis about hand washing was correct, it meant that physicians had contributed to the deaths of thousands of patients. Who wants to think of himself or herself as a killer, however inadvertent?


The Semmelweis reflex is, I think, better stated as the human tendency to reject or challenge scientific or other factual information that portrays us in a negative light. In that sense, it’s related to the phenomenon of reactance, discussed earlier.

In this case, Semmelweis’s own reaction to discovering the mortality rate of his clinic might have been a tip-off. He was “so miserable that life seemed worthless.” In his own case, this misery drove him to perform research, but the other doctors could only accept or deny his results. It’s not unreasonable to expect a certain amount of hostile response, and calling people “murderers,” as Semmelweis did, is hardly likely to win friends and influence people.

You don’t have to look far to find contemporary illustrations, from tobacco executives aghast that someone dared accuse them of making a faulty product to Ford Motor Company’s notorious indifference to safety in designing the Pinto. The people involved weren’t trying to be unethical or immoral; they were in the grip of denial triggered by the Semmelweis reflex, a denial strong enough to make them ignore or trivialize evidence that in retrospect appears conclusive.

When you’re accused of fault, watch for the Semmelweis reflex in yourself. The natural first impulse is to deny or deflect, but the right practice is to examine and explore. Depending on what you find, you can select a more reasoned strategy.



Serial Position Effect


The serial position effect, a term coined by Hermann Ebbinghaus, refers to the finding that recall accuracy varies as a function of an item’s position within a study list. When asked to recall a list of items in any order (free recall), people tend to begin with the end of the list, recalling those items best (the recency effect). Among earlier list items, the first few are recalled more frequently than the middle items (the primacy effect).

One suggested reason for the primacy effect is that the initial items presented are most effectively stored in long-term memory because of the greater amount of processing devoted to them. (The first list item can be rehearsed by itself; the second must be rehearsed along with the first, the third along with the first and second, and so on.) One suggested reason for the recency effect is that these items are still present in working memory when recall is solicited. Items that benefit from neither (the middle items) are recalled most poorly.

There is experimental support for these explanations. For example:


  • The primacy effect (but not the recency effect) is reduced when items are presented quickly and is enhanced when presented slowly (factors that reduce and enhance processing of each item and thus permanent storage).
  • The recency effect (but not the primacy effect) is reduced when an interfering task is given; for example, subjects may be asked to compute a math problem in their heads prior to recalling list items; this task requires working memory and interferes with any list items being attended to.
  • Amnesiacs with poor ability to form permanent long-term memories do not show a primacy effect, but do show a recency effect.
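A toy model makes the two mechanisms concrete. The numbers below are arbitrary assumptions chosen only to illustrate the shape of the curve, not values fitted to any experiment: “primacy” strength comes from the shared-rehearsal scheme described above (each new item splits the rehearsal budget among everything presented so far), and “recency” strength comes from the last few items still sitting in working memory.

```python
# Illustrative-only model of the serial position curve.
def recall_strength(n_items=10, wm_span=3):
    strengths = []
    for i in range(1, n_items + 1):
        # Primacy: after item k is presented, the rehearsal budget is split
        # among the k items seen so far, so item i receives 1/k from each
        # later slot -- early items accumulate the most rehearsal.
        rehearsal = sum(1.0 / k for k in range(i, n_items + 1))
        # Recency: the last wm_span items are still in working memory.
        in_working_memory = 1.0 if i > n_items - wm_span else 0.0
        strengths.append(rehearsal + in_working_memory)
    return strengths

s = recall_strength()
middle = len(s) // 2
# First and last items beat the middle: the classic U-shaped curve.
print(s[0] > s[middle], s[-1] > s[middle])  # True True
```

Dropping the working-memory term (as with the amnesiac subjects above) removes the recency bump but leaves the primacy gradient intact, matching the experimental dissociation.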


More next week.
