Tuesday, November 30, 2010

Are You a Good Witch or a Bad Witch? (Part 15 of Cognitive Biases)

After a long hiatus while we renovated our house, our survey of cognitive biases continues with Pr-Pu. In this installment, we'll learn about projection bias, the pseudocertainty effect, and publication bias. Next week will be brought to you by the letter "R."


Projection Bias

Sigmund Freud named this bias, a psychological defense mechanism in which we unconsciously deny our own attributes, thoughts, or emotions and ascribe them to the outside world, whether to other people or to phenomena like the weather…or in one famous case, witches.

Projection bias is one of the medical explanations of bewitchment offered to account for the behavior of the afflicted children at Salem in 1692. The historian John Demos argues that the symptoms of bewitchment the afflicted girls experienced during the witchcraft crisis were the product of psychological projection: the girls suffered convulsive fits caused by repressed aggression, and the talk of witchcraft and bewitchment let them project that aggression onto others without blame.

The Salem Witch Trials struck a community under considerable strife: disputes over property lines and grazing rights, along with upheavals in the church, had given Salem Village a reputation as quarrelsome. Population pressure from growing families built demand for farmland. And in Puritan culture, everything from loss of crops or livestock to illness or death of children to plain bad weather was generally seen as the wrath of God in action.

The Salem witches were hardly the first accused witches in the area. Making accusations of witchcraft against widowed or orphaned land-owning women was a good way to take their land. And, of course, witches served as a perfect target for projection bias: all the ill feelings and bad conduct of the community were projected onto a group that couldn’t fight back.

The Salem Witch Trials claimed twenty victims.

Pseudocertainty Effect

Which of the following options do you prefer?

C. 25% chance to win $30 and 75% chance to win nothing
D. 20% chance to win $45 and 80% chance to win nothing

Now consider the following two-stage game. In the first stage, there is a 75% chance to end the game without winning anything and a 25% chance to move into the second stage. If you reach the second stage, you have a choice between:

E. a sure win of $30
F. 80% chance to win $45 and 20% chance to win nothing

You have to make your choice before the first stage is played.

Here's how most people choose:

In the first problem, 42% of participants chose option C while 58% chose option D. In the second, 74% of participants chose option E while only 26% chose option F.

The actual probability of winning money in option E (25% × 100% = 25%) and option F (25% × 80% = 20%) is the same as the probability of winning money in option C (25%) and option D (20%), respectively.

If the probability of winning money is the same, why do people choose differently? The answer is the pseudocertainty effect: the tendency to perceive an outcome as certain when it’s actually uncertain. It’s most easily observed in multi-stage decisions like the second problem.
Because there’s no choice to make in the first stage, people tend to discard it (the 75% chance of winning nothing) and consider only the second stage, where there is a choice. Option E then looks like a certain win, even though reaching the second stage at all is a gamble.
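
To make the arithmetic concrete, here's a quick sketch in Python (my illustration, not from the original study; the helper function and its name are invented for the example) that collapses the two-stage game into single-stage probabilities:

def overall_win_probability(stage1_pass, stage2_win):
    # Chance of winning = chance of surviving stage 1 times chance of winning in stage 2
    return stage1_pass * stage2_win

p_c = 0.25                                  # Option C: straight 25% shot at $30
p_d = 0.20                                  # Option D: straight 20% shot at $45
p_e = overall_win_probability(0.25, 1.00)   # Option E: "sure" $30, behind a 25% gate
p_f = overall_win_probability(0.25, 0.80)   # Option F: 80% shot at $45, behind the same gate

print(f"C: {p_c:.0%} to win $30    E: {p_e:.0%} to win $30")
print(f"D: {p_d:.0%} to win $45    F: {p_f:.0%} to win $45")
# C equals E and D equals F, yet most people pick D in the first
# problem and E in the second -- that's the pseudocertainty effect.

The "sure win" in option E is only sure once you've already survived the 75% chance of elimination; folded back into the full game, it's the same 25% gamble as option C.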

Publication Bias

Suppose a hundred scientific studies are conducted, and 95 of them have a negative outcome (no correlation found) while 5 have a positive outcome (a correlation found). Which do you think are more likely to get into print?

Publication bias is, simply, the tendency for positive results to be published more readily than negative ones. It's also known as the file drawer problem: many studies in a given area of research are conducted but never reported, and the unreported studies may, on average, show different results from those that make it into print. Even a small number of studies lost "in the file drawer" can produce a significant bias.

The effect is compounded in meta-analyses and systematic reviews, which often form the basis for evidence-based medicine, and is further complicated when some of the research is sponsored by people and companies with a financial interest in positive results.
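
To see how even a modest file drawer distorts the published record, here's a toy simulation in Python (my illustration; the study count, effect size, and significance cutoff are arbitrary assumptions): a thousand small studies of an effect whose true size is zero, where only statistically significant positive results get published:

import random
import statistics

random.seed(42)
N_STUDIES = 1000
TRUE_EFFECT = 0.0       # the real effect size: nothing there at all
SE = 0.5                # standard error of each small study
CUTOFF = 1.96 * SE      # rough p < .05 significance threshold

results = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]
published = [r for r in results if r > CUTOFF]  # the rest sit in the file drawer

print(f"Average effect across all {N_STUDIES} studies: {statistics.mean(results):+.3f}")
print(f"Average effect across {len(published)} published studies: {statistics.mean(published):+.3f}")
# A meta-analysis of only the published studies would report a
# solid positive effect, even though the true effect is exactly zero.

The whole body of research averages out to roughly zero, but the published slice, selected for significance, shows a healthy "effect" — exactly what a meta-analysis built on the published literature would then inherit.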

According to researcher John Ioannidis, negative papers are most likely to be suppressed:

when the studies conducted in a field are smaller
when effect sizes are smaller
when there is a greater number and lesser preselection of tested relationships
when there is greater flexibility in designs, definitions, outcomes, and analytical modes
when there is greater financial and other interest and prejudice
when more teams are involved in a scientific field in chase of statistical significance.

Ioannidis observes that “claimed research findings may often be simply accurate measures of the prevailing bias.” In an effort to decrease this problem, some prominent medical journals began in 2004 to require registration of a trial before it commences, so that unfavorable results are not withheld from publication.

More next week.

Previous Installments

You can find the bias you’re interested in by clicking in the tag cloud on the right. To find all posts concerning cognitive biases, click the very big phrase.

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect.

Part 2 — Base rate fallacy, congruence bias, experimenter’s bias

Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect

Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias

Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia

Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias

Part 7 — False consensus effect, false memory, Forer effect, framing, fundamental attribution error

Part 8 — Gambler’s fallacy, halo effect

Part 9 — Hawthorne effect, herd instinct, hindsight bias, hyperbolic discounting

Part 10 — Illusion of asymmetric insight, illusion of control, illusory superiority, impact bias, information bias, ingroup bias, irrational escalation

Part 11 — Just-world phenomenon, loss aversion, ludic fallacy, mere exposure effect, money illusion

Part 12 — Need for closure, neglect of probability, “not-invented-here” (NIH) syndrome, notational bias

Part 13 — Observer-expectancy effect, omission bias, optimism bias, ostrich effect, outgroup homogeneity bias, overconfidence effect

Part 14 — Pareidolia, planning fallacy, post-purchase rationalization
