Tuesday, December 28, 2010

When 1+1=3 (Part 19 of Cognitive Biases)

Our 19th installment of Cognitive Biases covers the status quo bias, stereotyping, and the subadditivity effect.


Status Quo Bias

Sigmund Freud suggested that there were only two reasons people changed: pain and pressure. Evidence for the status quo bias, a preference not to change established behavior (even if negative) unless the incentive to change is overwhelming, comes from many fields, including political science and economics.

Another way to look at the status quo bias is inertia: the tendency of objects at rest to remain at rest until acted upon by an outside force. The corollary, that objects once in motion tend to stay in motion until acted upon by an outside force, gives hope for change. Unfortunately, one of those outside forces is friction, which is as easy to see in human affairs as it is in the rest of the material universe.

Daniel Kahneman (this time without Amos Tversky) has created experiments that can produce status quo bias effects reliably. It seems to be a combination of loss aversion and the endowment effect, both described elsewhere.

The status quo bias should be distinguished from a rational preference for the status quo in any particular instance. Change is not in itself always good.

Stereotyping

A stereotype, strictly speaking, is a commonly held popular belief about a specific social group or type of individual. It’s not identical to prejudice:


  • Prejudices are abstract, general preconceptions or attitudes toward any type of situation, object, or person.
  • Stereotypes are generalizations of existing characteristics that reduce complexity.


The word stereotype originally comes from printing: a duplicate impression of an original typographic element, used for printing instead of the original. (A cliché, interestingly, is the technical term for the printing surface of a stereotype.) It was journalist Walter Lippmann who first used the word in its modern interpersonal sense. A stereotype is a “picture in our heads,” he wrote, “whether right or wrong.”

Mental categorizing and labeling are both necessary and inescapable. Automatic stereotyping is natural; the necessary (but often omitted) follow-up is to make a conscious check to adjust the impression.

A number of theories have been derived from sociological studies of stereotyping and prejudicial thinking. In early studies it was believed that stereotypes were only used by rigid, repressed, and authoritarian people. Sociologists concluded that this was a result of conflict, poor parenting, and inadequate mental and emotional development. This idea has been overturned; more recent studies have concluded that stereotypes are commonplace.

One theory as to why people stereotype is that it is too difficult to take in all of the complexities of other people as individuals. Even though stereotyping is inexact, it is an efficient way to mentally organize large blocks of information. Categorization is an essential human capability because it enables us to simplify, predict, and organize our world. Once one has sorted and organized everyone into tidy categories, there is a human tendency to avoid processing new or unexpected information about each individual. Assigning general group characteristics to members of that group saves time and satisfies the need to predict the social world in a general sense.

Another theory is that people stereotype out of the need to feel good about themselves. Stereotypes protect us from anxiety and enhance self-esteem. By designating our own group as the standard or normal group and assigning others to groups considered inferior or abnormal, we give ourselves a sense of worth; in that sense, stereotyping is related to the ingroup bias.

Subadditivity Effect

The subadditivity effect is the tendency to judge the probability of the whole to be less than the sum of the probabilities of its parts.

For instance, subjects in one experiment judged the probability of death from cancer in the United States to be 18%, the probability of death from heart attack to be 22%, and the probability of death from "other natural causes" to be 33%. Other participants judged the probability of death from any natural cause to be 58%. Natural causes are made up of precisely cancer, heart attack, and "other natural causes"; the sum of those three probabilities, however, is 73%, not 58%. According to Tversky and Koehler in a 1994 study, this kind of result is observed consistently.
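The arithmetic is worth laying out explicitly. Here is a minimal sketch in Python (the percentages come from the example above; everything else is just bookkeeping):

```python
# Judged probabilities of death by cause, from the example above (as fractions)
p_cancer = 0.18
p_heart_attack = 0.22
p_other_natural = 0.33

# Sum of the "unpacked" parts
p_parts = p_cancer + p_heart_attack + p_other_natural  # 0.73

# Probability judged for the "packed" description: death from any natural cause
p_whole = 0.58

print(f"sum of parts: {p_parts:.2f}, judged whole: {p_whole:.2f}")
if p_whole < p_parts:
    print("subadditive: the whole is judged less probable than the sum of its parts")
```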

The subadditivity effect is related to other math-oriented cognitive biases, including the denomination effect, the base rate fallacy, and especially the conjunction fallacy.


More next week.

To read the whole series, click "Cognitive bias" in the tag cloud to your right, or search for any individual bias the same way.

Tuesday, December 21, 2010

Rashomon Reality (Part 18 of Cognitive Biases)

Our 18th installment of Cognitive Biases covers the self-serving bias, offers a new interpretation of the Semmelweis reflex, and looks at the two sides of the serial position effect.

Self-Serving Bias

A self-serving bias occurs when people attribute their successes to internal or personal factors but attribute their failures to situational factors beyond their control: to take credit for success but to shift the blame for failure. It also occurs when we are presented with ambiguous information and evaluate it in the way that best suits our own interest.

Several reasons have been proposed to explain the occurrence of self-serving bias: the need to maintain self-esteem, the wish to make a good impression, or sometimes the fact that we’re aware of factors outsiders might miss.

The bias has been demonstrated in many areas. For example, victims of serious occupational accidents tend to attribute their accidents to external factors, whereas their coworkers and management tend to attribute the accidents to the victims' own actions.

When the self-serving bias causes people to see Rashomon reality, the ability to negotiate can be dramatically impaired. One of the parties may see the other side as bluffing or completely unwilling to be reasonable, based on the self-serving interpretation of the ambiguous evidence.

In one experiment, subjects played the role of either the plaintiff or the defendant in a hypothetical car accident case with a maximum potential damages payment of $100,000. The experiment used real money, at a rate of $1 of real money for every $10,000 in the experiment.

They then tried to negotiate a settlement within a fixed amount of time; if they failed, a hefty legal bill was deducted from the final amount. On average, plaintiffs estimated the likely award at $14,500 higher than defendants did, and the farther apart the two sides’ perceived “fair” figures were, the less likely they were to reach an agreement in time.

The self-serving bias, interestingly, seems not to exist in our struggles with personal computers. When we can’t get them to work, we blame ourselves rather than the technology. People are so used to bad functionality, counterintuitive features, bugs, and sudden crashes in contemporary software that they tend not to complain about them; instead, they believe it is their personal responsibility to anticipate possible issues and to find solutions to computer problems. This phenomenon has recently been observed in several human-computer interaction studies.


Semmelweis Reflex

Dr. Ignaz Semmelweis, assistant to the head of obstetrics at the Vienna General Hospital in the 1840s, discovered that his clinic, where doctors were trained, had a maternal mortality rate from puerperal fever (childbed fever) that averaged 10 percent. A second clinic, which trained midwives, had a mortality rate of only four percent.

This was well known outside the hospital. Semmelweis described women begging on their knees to be admitted to the midwives’ clinic rather than risk the care of doctors. This, Semmelweis said, “made me so miserable that life seemed worthless.” He began a systematic analysis to find the cause, ruling out overcrowding, climate, and other factors. The breakthrough came when an old friend died of a condition similar to puerperal fever after being accidentally cut with a student’s scalpel during an autopsy.

Semmelweis imagined that some sort of “cadaverous particles” might be responsible, germs being at that time unknown. Midwives, after all, didn’t perform autopsies. Accordingly, Semmelweis required doctors to wash their hands in a mild bleach solution after performing autopsies. Following the change in procedures, death rates in the doctors’ clinic dropped almost immediately to the level of the midwives’ clinic.

This theory contradicted medical belief of the time, and Semmelweis eventually was disgraced, lost his job, began accusing his fellow physicians of murder, and eventually died in a mental institution, possibly after being beaten by a guard.

Hence the Semmelweis reflex, normally described as a reflex-like rejection of new knowledge because it contradicts entrenched norms, beliefs, or paradigms: the “automatic rejection of the obvious, without thought, inspection, or experiment.”

Some credit Robert Anton Wilson for the phrase. Timothy Leary defined it as, “Mob behavior found among primates and larval hominids on undeveloped planets, in which a discovery of important scientific fact is punished.”

I don’t agree. I think there’s something else going on here.

The Semmelweis effect, I think, relates more to the implied threat and criticism the new knowledge has for old behavior. Let’s go back to Semmelweis’ original discovery. If his hypothesis about hand washing is correct, it means that physicians have contributed to the deaths of thousands of patients. Who wants to think of himself or herself as a killer, however inadvertent?


The Semmelweis reflex is, I think, better stated as the human tendency to reject or challenge scientific or other factual information that portrays us in a negative light. In that sense, it’s related to the phenomenon of reactance, discussed earlier.

In this case, Semmelweis’s own reaction to discovering the mortality rate of his clinic might have been a tip-off. He was “so miserable that life seemed worthless.” In his own case, that misery drove him to perform research; the other doctors could only accept or deny his results. It’s not unreasonable to expect a certain amount of hostile response, and calling people “murderers,” as Semmelweis did, is hardly likely to win friends and influence people.

You don’t have to look far to find contemporary illustrations, from tobacco executives aghast that someone dared accuse them of making a faulty product to the Ford Motor Company’s notorious indifference to safety in designing the Pinto. The people involved weren’t trying to be unethical or immoral; they were in the grip of denial triggered by the Semmelweis reflex. That denial was strong enough to make them ignore or trivialize evidence that in retrospect appears conclusive.

When you’re accused of fault, watch for the Semmelweis reflex in yourself. The natural first impulse is to deny or deflect, but the right practice is to examine and explore. Depending on what you find, you can select a more reasoned strategy.



Serial Position Effect


The serial position effect, a term coined by Hermann Ebbinghaus, refers to the finding that recall accuracy varies as a function of an item’s position within a study list. When asked to recall a list of items in any order (free recall), people tend to begin with the end of the list, recalling those items best (the recency effect). Among the earlier list items, the first few are recalled more frequently than the middle items (the primacy effect).

One suggested reason for the primacy effect is that the initial items presented are most effectively stored in long-term memory because of the greater amount of processing devoted to them. (The first list item can be rehearsed by itself; the second must be rehearsed along with the first, the third along with the first and second, and so on.) One suggested reason for the recency effect is that these items are still present in working memory when recall is solicited. Items that benefit from neither (the middle items) are recalled most poorly.
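To make the rehearsal account concrete, here is a toy sketch in Python. It isn't any published model, just the bookkeeping implied by the paragraph above: each newly presented item is rehearsed along with everything before it, so the earliest items accumulate the most rehearsals.

```python
def rehearsal_counts(list_length):
    """Toy model: when item i is presented, items 1..i each get one more rehearsal."""
    counts = [0] * list_length
    for presented in range(list_length):
        for item in range(presented + 1):
            counts[item] += 1
    return counts

print(rehearsal_counts(10))
# [10, 9, 8, 7, 6, 5, 4, 3, 2, 1] -- the earliest items get the most rehearsals,
# the proposed route into long-term memory. The recency effect comes from working
# memory instead, which this simple bookkeeping doesn't capture.
```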

There is experimental support for these explanations. For example:


  • The primacy effect (but not the recency effect) is reduced when items are presented quickly and is enhanced when presented slowly (factors that reduce and enhance processing of each item and thus permanent storage).
  • The recency effect (but not the primacy effect) is reduced when an interfering task is given; for example, subjects may be asked to compute a math problem in their heads prior to recalling list items; this task requires working memory and interferes with any list items being attended to.
  • Amnesiacs with poor ability to form permanent long-term memories do not show a primacy effect, but do show a recency effect.


More next week.

To read the whole series, click "Cognitive bias" in the tag cloud to your right, or search for any individual bias the same way.

Tuesday, December 14, 2010

See No Evil (Part 17 of Cognitive Biases)

Our 17th installment features the selection bias, selective perception in general, and the self-fulfilling prophecy.


Selection Bias

There’s a growing argument that telephone polls, once the gold standard of scientific opinion surveys, are becoming less reliable. More and more people are refusing to participate, meaning that the actual sample becomes to some extent self-selected: a random sample of people who like to take polls. People who don’t like to take polls are underrepresented in the results, and there’s no guarantee that they hold the same opinions as the people who do answer.

Selection bias can happen in any scientific study requiring a statistical sample that is representative of some larger population: if the selection is flawed, and if other statistical analysis does not correct for the skew, the conclusions are not reliable.

There are several types of selection bias:


  • Sampling bias. Systematic error resulting from a non-random population sample. Examples include self-selection, pre-screening, and discounting test subjects who don’t finish.
  • Time interval bias. Error resulting from a flawed selection of the time interval. Examples include starting with an unusually low year and ending with an unusually high one, terminating a trial early once its results support your desired conclusion, or favoring longer or shorter intervals when measuring change.
  • Exposure bias. Error resulting from amplifying trends. When one disease predisposes someone to a second disease, the treatment for the first disease can appear correlated with the appearance of the second. An effective but imperfect treatment given to people at high risk of a particular disease can make the treatment appear to cause the disease, since the high-risk population naturally includes more people who got both the treatment and the disease.
  • Data bias. Rejection of “bad” data on arbitrary grounds, ignoring or discounting outliers, or partitioning data with knowledge of the contents of the partitions and then analyzing them with tests designed for blindly chosen partitions.
  • Studies bias. Earlier, we looked at publication bias, the tendency to publish studies with positive results and ignore ones with negative results. If you put together a meta-analysis without correcting for publication bias, you’ve got a studies bias. Or you can perform repeated experiments and report only the favorable results, classifying the others as calibration tests or preliminary studies.
  • Attrition bias. A selection bias resulting from people dropping out of a study over time. If you measure the effectiveness of a weight loss program only by the outcomes of people who complete the whole program, it will often look very effective indeed, but that ignores the potentially vast number of people who tried and gave up (see the sketch below).
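The attrition point is easy to demonstrate with a minimal simulation sketch (all numbers below are invented for illustration, not drawn from any study). The simulated program does nothing on average, but because people who see poor results quit more often, the completers-only average looks like success:

```python
import random

random.seed(1)

# Invented-for-illustration setup: the true average weight change is zero.
participants = [random.gauss(0, 5) for _ in range(10000)]  # kg lost (negative = gained)

def completes(kg_lost):
    """People who see poor results are more likely to drop out."""
    drop_prob = 0.7 if kg_lost < 0 else 0.2
    return random.random() > drop_prob

completers = [kg for kg in participants if completes(kg)]

print(f"true mean change (everyone):  {sum(participants) / len(participants):+.2f} kg")
print(f"observed mean (completers):   {sum(completers) / len(completers):+.2f} kg")
# The completers-only average looks like real weight loss even though
# the program, by construction, does nothing on average.
```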


In general, you can’t overcome selection bias with statistical analysis of existing data alone. Informal workarounds examine correlations between background variables and a treatment indicator, but what creates the bias is something you can’t observe: the correlation between unobserved determinants of the outcome and unobserved determinants of selection into the sample. What you don’t see doesn’t have to be identical to what you do see.


Selective Perception

Expectations affect perception.

We know people are suggestible: several studies have shown that students who were told they were consuming alcohol, when in fact they weren’t, still acted intoxicated enough that their driving was affected.

In one classic study, viewers watched a filmstrip of a particularly violent Princeton-Dartmouth football game. Princeton viewers reported seeing nearly twice as many rule infractions committed by the Dartmouth team as Dartmouth viewers did. One Dartmouth alumnus saw no infractions committed by the Dartmouth side at all and sent a message that he’d only seen part of the film and wanted the rest.

Selective perception is also an issue for advertisers, as consumers may engage with some ads and not others based on their pre-existing beliefs about the brand. Seymour Smith, a prominent advertising researcher, found evidence for selective perception in advertising research in the early 1960s. People who like, buy, or are considering buying a brand are more likely to notice advertising than are those who are neutral toward the brand. It’s hard to measure the quality of the advertising if the only people who notice it are already predisposed to like the brand.



Self-Fulfilling Prophecy

A self-fulfilling prophecy is a prediction that directly or indirectly causes itself to become true, by the very terms of the prophecy itself, due to positive feedback between belief and behavior. The term was coined by sociologist Robert K. Merton, who formalized its structure and consequences in his 1949 book Social Theory and Social Structure.

A self-fulfilling prophecy is initially false: it becomes true by evoking the behavior that makes it come true. The actual course of events is offered as proof that the prophecy was originally true.

Self-fulfilling prophecies have been used in education as a type of placebo effect.

The effects of teachers’ attitudes, beliefs, and values on their expectations of students have been tested repeatedly. In a famous example, teachers were told, arbitrarily, that certain randomly chosen students were "going to blossom". The prophecy indeed self-fulfilled: those students ended the year with significantly greater improvement than their classmates.


For previous installments, click on "Cognitive Bias" in the tag cloud to your right.

Tuesday, December 7, 2010

Lead Us Into Temptation (Part 16 of Cognitive Biases)

Our survey of cognitive biases has reached the letter "R," and today we'll look at reactance, the tendency to do the opposite of whatever you're told; the reminiscence bump, the tendency to remember some parts of your life more vividly than others; restraint bias, the idea that we can resist temptation; and rosy retrospection, the tendency to remember the past as better than it is.


Reactance


Reactance is the bias to do the opposite of whatever you’re being pushed to do. It’s the impulse to disobey, to resist any threat to your perceived sense of autonomy. Reactance is what happens when you feel your freedom is threatened.

What turns it into a bias is when the reactance leads you to act in ways contrary to your own self-interest. Get pushed hard enough to get a good job and make some money, and you may ruin a big interview just to show you won’t be pushed around.

There are four stages to reactance:

  • Perceived freedom. Something we have the physical capability to do, or refrain from doing. This can be anything imaginable.
  • Threat to freedom. A force that is attempting to limit your freedom. This doesn’t have to be a person or group; again, it can be anything. People react against the laws of physics all the time.
  • Reactance. An emotional pressure to resist the threat and retain the freedom.
  • Restoration of freedom. This can be either direct (you win), or indirect (you lose, but you continue resistance or shift the area of battle).


There are some rules to this. A pretty obvious one is that the magnitude of the reactance grows with the importance of the freedom in question. The magnitude also grows when a wider swath of freedoms is threatened, even if individually they’re less important. And the magnitude of the reactance depends not only on the freedoms being threatened today, but also on the implied threat to future freedoms.

Reactance is lowered by the degree to which you feel the infringement of your freedom is justified and legitimate; less confrontational approaches therefore provoke less reactance in other people.

Reminiscence Bump

Another cognitive bias is the unequal distribution of memories over a lifespan. We begin with infantile amnesia, the tendency not to remember much before the age of four. We remember something of our childhoods, but we recall more personal events from adolescence and early adulthood than anything before or after, except for whatever happened most recently.

Besides personal events, the reminiscence bump affects the temporal distribution of public events (where were you when JFK was shot/the Challenger exploded/the Towers fell?), favorite songs, books, and movies. It’s why, after all these years, I still can’t forget the lyrics to Herman’s Hermits’ “Henry VIII.”

Restraint Bias

“Lead us not into temptation,” says the Lord’s Prayer. The restraint bias is the extent to which we tend to overestimate our ability to show restraint in the face of temptation, and as the Lord’s Prayer suggests, we aren’t nearly as good at it as we think we are.

In a recent study at Northwestern’s Kellogg School of Management, researchers studied the effects that hunger, drug and tobacco cravings, and sexual arousal have on self-control, first by surveying people on their self-assessed capacity to resist temptation and then by exposing them to actual temptation. The results showed that most people substantially overestimate their restraint.

This is one of the ways people inadvertently sabotage their efforts to change behavior: by overexposing themselves to temptation. Recovering tobacco smokers with more inflated degrees of restraint bias were far more likely to expose themselves to situations in which they would be tempted to smoke, with predictably higher rates of relapse over a four-month period.

Rosy Retrospection

Three groups going on different vacations were interviewed before, during, and after their trips. The typical emotional pattern was initial anticipation, followed by mild disappointment during the trip — and ending up with a much more favorable set of memories some time later!

The cognitive bias of rosy retrospection leads us to judge the present unfavorably compared to the past. The reason is that minor annoyances and dislikes, prominent in immediate memory, tend to fade over time.

Once again, Daniel Kahneman and Amos Tversky, our gurus of bias, come to the rescue with a technique called reference class forecasting. This corrects for rosy retrospection and other memory biases. Human judgment, they argue, is generally optimistic for two reasons: overconfidence and insufficient consideration of the range of actual likely outcomes. Unless you consider the issue of risk and uncertainty, you have no good basis to build on.

Reference class forecasting for a specific project involves the following three steps:
  1. Identify a reference class of past, similar projects.
  2. Establish a probability distribution for the selected reference class for the parameter that is being forecast.
  3. Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
The technique has been successful enough that it’s been endorsed by the American Planning Association (APA) and the Association for the Advancement of Cost Engineering (AACE).
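Here is a rough sketch of what those three steps might look like in code. The reference-class overrun figures and the 80th-percentile cutoff are invented for illustration, not drawn from any real dataset:

```python
# Step 1: a reference class of past, similar projects -- here, their cost
# overruns as a fraction of the original budget (invented numbers).
past_overruns = [0.05, 0.10, 0.12, 0.20, 0.25, 0.30, 0.45, 0.60, 0.80, 1.10]

# Step 2: a probability distribution for the parameter being forecast.
# We use the empirical distribution directly and read off a percentile.
def percentile(values, q):
    ordered = sorted(values)
    index = min(int(q * len(ordered)), len(ordered) - 1)
    return ordered[index]

# Step 3: place the specific project against that distribution, e.g. budget
# enough that you would be covered in 80% of comparable past cases.
inside_view_estimate = 1000000            # your own (optimistic) cost estimate
uplift = percentile(past_overruns, 0.8)   # 80th-percentile overrun
reference_class_forecast = inside_view_estimate * (1 + uplift)

print(f"80th-percentile overrun: {uplift:.0%}")
print(f"reference class forecast: {reference_class_forecast:,.0f}")
```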


More next week.

Previous Installments

You can find the bias you’re interested in by clicking in the tag cloud on the right. To find all posts concerning cognitive biases, click the very big phrase.

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect.

Part 2 — Base rate fallacy, congruence bias, experimenter’s bias

Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect

Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias

Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia

Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias

Part 7 — False consensus effect, false memory, Forer effect, framing, fundamental attribution error

Part 8 — Gambler’s fallacy, halo effect

Part 9 — Hawthorne effect, herd instinct, hindsight bias, hyperbolic discounting

Part 10 — Illusion of asymmetric insight, illusion of control, illusory superiority, impact bias, information bias, ingroup bias, irrational escalation

Part 11 — Just-world phenomenon, loss aversion, ludic fallacy, mere exposure effect, money illusion

Part 12 — Need for closure, neglect of probability, “not-invented-here” (NIH) syndrome, notational bias

Part 13 — Observer-expectancy effect, omission bias, optimism bias, ostrich effect, outgroup homogeneity bias, overconfidence effect

Part 14 — Pareidolia, planning fallacy, post-purchase rationalization


Part 15 — Projection bias, pseudocertainty effect, publication bias