Tuesday, December 28, 2010

When 1+1=3 (Part 19 of Cognitive Biases)

Our 19th installment of Cognitive Biases covers the status quo bias, stereotyping, and the subadditivity effect.

Status Quo Bias

Sigmund Freud suggested that there were only two reasons people changed: pain and pressure. Evidence for the status quo bias, a preference not to change established behavior (even if negative) unless the incentive to change is overwhelming, comes from many fields, including political science and economics.

Another way to look at the status quo bias is inertia: the tendency of objects at rest to remain at rest until acted upon by an outside force. The corollary, that objects once in motion tend to stay in motion until acted upon by an outside force, gives hope for change. Unfortunately, one of those outside forces is friction, which is as easy to see in human affairs as it is in the rest of the material universe.

Daniel Kahneman (this time without Amos Tversky) has created experiments that can produce status quo bias effects reliably. It seems to be a combination of loss aversion and the endowment effect, both described elsewhere.

The status quo bias should be distinguished from a rational preference for the status quo in any particular instance. Change is not in itself always good.


Stereotyping

A stereotype, strictly speaking, is a commonly held belief about a specific social group or type of individual. It’s not identical to prejudice:

  • Prejudices are abstract, general preconceptions or attitudes toward any type of situation, object, or person.
  • Stereotypes are generalizations of existing characteristics that reduce complexity.

The word stereotype originally comes from printing: a duplicate impression of an original typographic element used for printing instead of the original. (A cliché, interestingly, is the technical term for the printing surface of a stereotype.) It was journalist Walter Lippmann who first used the word in its modern interpersonal sense. A stereotype is a “picture in our heads,” he wrote, “whether right or wrong.”

Mental categorizing and labeling are both necessary and inescapable. Automatic stereotyping is natural; the necessary (but often omitted) follow-up is a conscious check to adjust the impression.

A number of theories have been derived from sociological studies of stereotyping and prejudicial thinking. In early studies it was believed that stereotypes were only used by rigid, repressed, and authoritarian people. Sociologists concluded that this was a result of conflict, poor parenting, and inadequate mental and emotional development. This idea has been overturned; more recent studies have concluded that stereotypes are commonplace.

One theory as to why people stereotype is that it is too difficult to take in all of the complexities of other people as individuals. Even though stereotyping is inexact, it is an efficient way to mentally organize large blocks of information. Categorization is an essential human capability because it enables us to simplify, predict, and organize our world. Once one has sorted and organized everyone into tidy categories, there is a human tendency to avoid processing new or unexpected information about each individual. Assigning general group characteristics to members of that group saves time and satisfies the need to predict the social world in a general sense.

Another theory is that people stereotype because of the need to feel good about oneself. Stereotypes protect one from anxiety and enhance self-esteem. By designating one's own group as the standard or normal group and assigning others to groups considered inferior or abnormal, it provides one with a sense of worth, and in that sense, stereotyping is related to the ingroup bias.

Subadditivity Effect

The subadditivity effect is the tendency to judge the probability of the whole to be less than the sum of the probabilities of its parts.

For instance, subjects in one experiment judged the probability of death from cancer in the United States to be 18%, the probability of death from heart attack to be 22%, and the probability of death from "other natural causes" to be 33%. Other participants judged the probability of death from any natural cause to be 58%. Natural causes are made up of precisely cancer, heart attack, and "other natural causes"; however, the sum of the latter three probabilities is 73%, not 58%. According to Tversky and Koehler in a 1994 study, this kind of result is observed consistently.
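
The arithmetic is easy to check. Here's a minimal sketch, in Python, using the judged percentages reported above:

```python
# Judged probabilities of death by natural causes, from the experiment above.
components = {
    "cancer": 0.18,
    "heart attack": 0.22,
    "other natural causes": 0.33,
}

implicit_whole = sum(components.values())   # what the parts imply
judged_whole = 0.58                         # what other subjects judged directly

print(f"Sum of parts: {implicit_whole:.0%}")   # 73%
print(f"Judged whole: {judged_whole:.0%}")     # 58%
print(f"Subadditivity gap: {implicit_whole - judged_whole:.0%}")  # 15%
```

A coherent probability judgment would make the gap zero; the consistent 15-point shortfall is the subadditivity effect.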

The subadditivity effect is related to other math-oriented cognitive biases, including the denomination effect, the base rate fallacy, and especially the conjunction fallacy.

More next week.

To read the whole series, click "Cognitive bias" in the tag cloud to your right, or search for any individual bias the same way.

Tuesday, December 21, 2010

Rashomon Reality (Part 18 of Cognitive Biases)

Our 18th installment of Cognitive Biases covers the self-serving bias, offers a new interpretation of the Semmelweis reflex, and looks at the two sides of the serial position effect.

Self-Serving Bias

A self-serving bias occurs when people attribute their successes to internal or personal factors but attribute their failures to situational factors beyond their control: to take credit for success but to shift the blame for failure. It also occurs when we are presented with ambiguous information and evaluate it in the way that best suits our own interest.

Several reasons have been proposed to explain the occurrence of self-serving bias: maintaining self-esteem, making a good impression, or sometimes that we’re aware of factors outsiders might miss.

The bias has been demonstrated in many areas. For example, victims of serious occupational accidents tend to attribute their accidents to external factors, whereas their coworkers and management tend to attribute the accidents to the victims' own actions.

When the self-serving bias causes people to see Rashomon reality, the ability to negotiate can be dramatically impaired. One of the parties may see the other side as bluffing or completely unwilling to be reasonable, based on the self-serving interpretation of the ambiguous evidence.

In one experiment, subjects played the role of either the plaintiff or defendant in a hypothetical car accident case with a maximum potential damages payment of $100,000. The experiment used real money, at a rate of $1 real for every $10,000 in the experiment.

They then tried to settle within a fixed amount of time; if they failed, a hefty legal bill was charged against the settlement amount. On average, plaintiffs thought the likely award would be $14,500 higher than the defendants did. The farther apart the two sides’ perceived “fair” figures were, the less likely the parties were to reach an agreement in time.

The self-serving bias, interestingly, seems not to exist in our struggles with personal computers. When we can’t get them to work, we blame ourselves rather than the technology. People are so used to bad functionality, counterintuitive features, bugs, and sudden crashes in contemporary software that they tend not to complain about them. Instead, they believe it is their personal responsibility to anticipate possible issues and to find solutions to computer problems. This phenomenon has recently been observed in several human-computer interaction investigations.

Semmelweis Reflex

Dr. Ignaz Semmelweis, assistant to the head of obstetrics at the Vienna General Hospital in the 1840s, discovered that his clinic, where doctors were trained, had a maternal mortality rate from puerperal fever (childbed fever) that averaged 10 percent. A second clinic, which trained midwives, had a mortality rate of only 4 percent.

This was well known outside the hospital. Semmelweis described women begging on their knees to be admitted to the midwives’ clinic rather than risk the care of doctors. This, Semmelweis said, “made me so miserable that life seemed worthless.” Semmelweis started a systematic analysis to find the cause, ruling out overcrowding, climate, and other factors. The breakthrough came with the death of an old friend, from a condition similar to puerperal fever, after being accidentally cut with a student’s scalpel during an autopsy.

Semmelweis imagined that some sort of “cadaverous particles” might be responsible, germs being at that time unknown. Midwives, after all, didn’t perform autopsies. Accordingly, Semmelweis required doctors to wash their hands in a mild bleach solution after performing autopsies. Following the change in procedures, death rates in the doctors’ clinic dropped almost immediately to the levels of the midwives’ clinic.

This theory contradicted medical belief of the time, and Semmelweis eventually was disgraced, lost his job, began accusing his fellow physicians of murder, and eventually died in a mental institution, possibly after being beaten by a guard.

Hence the Semmelweis reflex, normally described as a reflex-like rejection of new knowledge because it contradicts entrenched norms, beliefs, or paradigms: the “automatic rejection of the obvious, without thought, inspection, or experiment.”

Some credit Robert Anton Wilson for the phrase. Timothy Leary defined it as, “Mob behavior found among primates and larval hominids on undeveloped planets, in which a discovery of important scientific fact is punished.”

I don’t agree. I think there’s something else going on here.

The Semmelweis effect, I think, relates more to the implied threat and criticism the new knowledge has for old behavior. Let’s go back to Semmelweis’ original discovery. If his hypothesis about hand washing is correct, it means that physicians have contributed to the deaths of thousands of patients. Who wants to think of himself or herself as a killer, however inadvertent?

The Semmelweis reflex is, I think, better stated as the human tendency to reject or challenge scientific or other factual information that portrays us in a negative light. In that sense, it’s related to the phenomenon of reactance, discussed earlier.

In this case, Semmelweis’s own reaction to discovering the mortality rate of his clinic might have been a tip-off. He was “so miserable that life seemed worthless.” In his own case, this drove him to perform research; the other doctors could only accept or deny the results. It’s not unreasonable to expect a certain amount of hostile response, and calling people “murderers,” as Semmelweis did, is hardly likely to win friends and influence people.

You don’t have to look far to find contemporary illustrations, from tobacco executives aghast someone dared accuse them of making a faulty product to the notorious Ford Motor Company indifference to safety in designing the Ford Pinto. The people involved weren’t trying to be unethical or immoral; they were in the grips of denial triggered by the Semmelweis reflex. This denial was strong enough to make them ignore or trivialize evidence that in retrospect appears conclusive.

When you’re accused of fault, watch for the Semmelweis reflex in yourself. The natural first impulse is to deny or deflect, but the right practice is to examine and explore. Depending on what you find, you can select a more reasoned strategy.

Serial Position Effect

The serial position effect, a term coined by Hermann Ebbinghaus, refers to the finding that recall accuracy varies as a function of an item's position within a study list. When asked to recall a list of items in any order (free recall), people tend to begin recall with the end of the list, recalling those items best (the recency effect). Among earlier list items, the first few items are recalled more frequently than the middle items (the primacy effect).

One suggested reason for the primacy effect is that the initial items presented are most effectively stored in long-term memory because of the greater amount of processing devoted to them. (The first list item can be rehearsed by itself; the second must be rehearsed along with the first, the third along with the first and second, and so on.) One suggested reason for the recency effect is that these items are still present in working memory when recall is solicited. Items that benefit from neither (the middle items) are recalled most poorly.

There is experimental support for these explanations. For example:

  • The primacy effect (but not the recency effect) is reduced when items are presented quickly and is enhanced when presented slowly (factors that reduce and enhance processing of each item and thus permanent storage).
  • The recency effect (but not the primacy effect) is reduced when an interfering task is given; for example, subjects may be asked to compute a math problem in their heads prior to recalling list items; this task requires working memory and interferes with any list items being attended to.
  • Amnesiacs with poor ability to form permanent long-term memories do not show a primacy effect, but do show a recency effect.

More next week.

To read the whole series, click "Cognitive bias" in the tag cloud to your right, or search for any individual bias the same way.

Tuesday, December 14, 2010

See No Evil (Part 17 of Cognitive Biases)

Our 17th installment features the selection bias, selective perception in general, and the self-fulfilling prophecy.

Selection Bias

There’s a growing argument that telephone polls, once the gold standard of scientific opinion surveys, are becoming less reliable. More and more people refuse to participate, so the actual sample becomes to some extent self-selected: a random sample of people who like to take polls. People who don’t like to take polls are underrepresented in the results, and there’s no guarantee that group holds the same opinions as the group that answers.

Selection bias can happen in any scientific study requiring a statistical sample that is representative of some larger population: if the selection is flawed, and if other statistical analysis does not correct for the skew, the conclusions are not reliable.

There are several types of selection bias:

  • Sampling bias. Systematic error resulting from a non-random population sample. Examples include self-selection, pre-screening, and discounting test subjects who don’t finish.
  • Time interval bias. Error resulting from a flawed selection of the time interval. Examples include starting in an unusually low year and ending in an unusually high one, terminating a trial early when its results support your desired conclusion, or favoring longer or shorter intervals in measuring change.
  • Exposure bias. Error resulting from amplifying trends. When one disease predisposes someone to a second disease, the treatment for the first disease can appear correlated with the appearance of the second. An effective but imperfect treatment given to people at high risk of a particular disease can make the treatment appear to cause the disease, since the high-risk population will naturally include a higher number of people who got both the treatment and the disease.
  • Data bias. Rejection of “bad” data on arbitrary grounds, ignoring or discounting outliers, or partitioning data with knowledge of the partitions and then analyzing them with tests designed for blindly chosen ones.
  • Studies bias. Earlier, we looked at publication bias, the tendency to publish studies with positive results and ignore ones with negative results. If you put together a meta-analysis without correcting for publication bias, you’ve got a studies bias. Or you can perform repeated experiments and report only the favorable results, classifying the others as calibration tests or preliminary studies.
  • Attrition bias. A selection bias resulting from people dropping out of a study over time. If you measure the effectiveness of a weight loss program only by the outcomes of people who complete the whole program, it’ll often look very effective indeed — but that ignores the potentially vast number of people who tried and gave up.
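
The attrition case is easy to simulate. The sketch below uses entirely hypothetical numbers — an invented weight loss program in which people who see little progress tend to quit — just to show how measuring completers alone inflates the apparent effect:

```python
import random

random.seed(1)

# Hypothetical program: everyone enrolls, but people who see little
# progress tend to drop out before the final weigh-in.
N = 10_000
true_losses = [random.gauss(2.0, 5.0) for _ in range(N)]  # lbs lost per person

# Dropout rule (an assumption for illustration): anyone losing weight
# finishes; only 20% of those gaining weight stay to the end.
completers = [x for x in true_losses if x > 0 or random.random() < 0.2]

avg_everyone = sum(true_losses) / len(true_losses)
avg_completers = sum(completers) / len(completers)

print(f"True average loss (all enrollees):  {avg_everyone:.1f} lbs")
print(f"Observed average (completers only): {avg_completers:.1f} lbs")
# The completer-only average overstates the program's effectiveness,
# because the people it lost are exactly the ones it failed.
```

The direction of the bias is general: whenever dropout correlates with the outcome, the surviving sample flatters the treatment.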

In general, you can’t overcome a selection bias with statistical analysis of existing data alone. Informal workarounds examine correlations between background variables and a treatment indicator, but what creates the bias is the correlation between unobserved determinants of the outcome and unobserved determinants of selection into the sample. What you don’t see doesn’t have to be identical to what you do see.

Selective Perception

Expectations affect perception.

We know people are suggestible: several studies have shown that students who were falsely told they were consuming alcohol still acted impaired enough that their driving was affected.

In one classic study, viewers watched a filmstrip of a particularly violent Princeton-Dartmouth football game. Princeton viewers reported seeing nearly twice as many rule infractions committed by the Dartmouth team as did Dartmouth viewers. One Dartmouth alumnus did not see any infractions committed by the Dartmouth side and sent a message that he must have received only part of the film and wanted the rest.

Selective perception is also an issue for advertisers, as consumers may engage with some ads and not others based on their pre-existing beliefs about the brand. Seymour Smith, a prominent advertising researcher, found evidence for selective perception in advertising research in the early 1960s. People who like, buy, or are considering buying a brand are more likely to notice advertising than are those who are neutral toward the brand. It’s hard to measure the quality of the advertising if the only people who notice it are already predisposed to like the brand.

Self-Fulfilling Prophecy

A self-fulfilling prophecy is a prediction that directly or indirectly causes itself to become true, through positive feedback between belief and behavior. The term was coined by sociologist Robert K. Merton, who formalized its structure and consequences in his 1949 book Social Theory and Social Structure.

A self-fulfilling prophecy is initially false: it becomes true by evoking the behavior that makes it come true. The actual course of events is offered as proof that the prophecy was originally true.

Self-fulfilling prophecies have been used in education as a type of placebo effect.

The effects of teacher attitudes, beliefs, and values on their expectations have been tested repeatedly. In one famous study, teachers were told that certain randomly chosen students were "going to blossom". The prophecy indeed fulfilled itself: those random students ended the year with significantly greater improvement.

For previous installments, click on "Cognitive Bias" in the tag cloud to your right.

Tuesday, December 7, 2010

Lead Us Into Temptation (Part 16 of Cognitive Biases)

Our survey of cognitive biases has reached the letter "R," and today we'll look at reactance, the tendency to do the opposite of whatever you're told; the reminiscence bump, the tendency to remember some parts of your life more vividly than others; restraint bias, the tendency to overestimate our ability to resist temptation; and rosy retrospection, the tendency to remember the past as better than it was.


Reactance

Reactance is the bias to do the opposite of whatever you’re being pushed to do. It’s the impulse to disobey, to resist any threat to your perceived sense of autonomy. Reactance is what happens when you feel your freedom is threatened.

What turns it into a bias is when the reactance leads you to act in ways contrary to your own self-interest. Get pushed hard enough to get a good job and make some money, and you may ruin a big interview just to show you won’t be pushed around.

There are four stages to reactance:

  • Perceived freedom. Something we have the physical capability to do, or refrain from doing. This can be anything imaginable.
  • Threat to freedom. A force that is attempting to limit your freedom. This doesn’t have to be a person or group; again, it can be anything. People react against the laws of physics all the time.
  • Reactance. An emotional pressure to resist the threat and retain the freedom.
  • Restoration of freedom. This can be either direct (you win), or indirect (you lose, but you continue resistance or shift the area of battle).

There are some rules to this. A pretty obvious one is that the magnitude of the reactance grows with the importance of the freedom in question. The magnitude also grows when a wider swath of freedoms is threatened, even if individually they’re less important. And the magnitude depends not only on the freedoms being threatened today, but on the implied threat to future freedoms.

Reactance is lowered by the degree to which you feel the infringement of your freedom is justified and legitimate. Less confrontational approaches lower reactance in other people.

Reminiscence Bump

Another cognitive bias is the unequal distribution of memories over a lifespan. We begin with infantile amnesia, the tendency not to remember much before the age of four. We remember something of our childhoods, but we recall more personal events from adolescence and early adulthood than anything before or after, except for whatever happened most recently.

Besides personal events, the reminiscence bump affects the temporal distribution of public events (where were you when JFK was shot/the Challenger exploded/the Towers fell?), favorite songs, books, and movies. It’s why, after all these years, I still can’t forget the lyrics to Herman’s Hermits’ “Henry VIII.”

Restraint Bias

“Lead us not into temptation,” says the Lord’s Prayer. The restraint bias is our tendency to overestimate our ability to show restraint in the face of temptation, and as the Lord’s Prayer suggests, we aren’t nearly as good at it as we think we are.

In a recent study at Northwestern’s Kellogg School of Management, researchers studied the effects of hunger, drug and tobacco cravings, and sexual arousal on the self-control process: first they surveyed people on their self-assessed capacity to resist temptation, then exposed them to actual temptation. The results showed a substantial overestimation on the part of most people.

This is one of the ways people inadvertently sabotage efforts to change behavior: by overexposing themselves to temptation. Recovering smokers with more inflated degrees of restraint bias were far more likely to expose themselves to situations in which they would be tempted to smoke, with predictably higher rates of relapse over a four-month period.

Rosy Retrospection

Three groups going on different vacations were interviewed before, during, and after their trips. The typical emotional pattern was initial anticipation, followed by mild disappointment during the trip — and ending up with a much more favorable set of memories some time later!

The cognitive bias of rosy retrospection leads us to compare the present unfavorably to the past. The difference arises because minor annoyances and dislikes, prominent in immediate memory, tend to fade from memory over time.

Once again, Daniel Kahneman and Amos Tversky, our gurus of bias, come to the rescue with a technique called reference class forecasting. This corrects for rosy retrospection and other memory biases. Human judgment, they argue, is generally optimistic for two reasons: overconfidence and insufficient consideration of the range of actual likely outcomes. Unless you consider the issue of risk and uncertainty, you have no good basis to build on.

Reference class forecasting for a specific project involves the following three steps:
  1. Identify a reference class of past, similar projects.
  2. Establish a probability distribution for the selected reference class for the parameter that is being forecast.
  3. Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
The technique has been successful enough that it’s been endorsed by the American Planning Association (APA) and the Association for the Advancement of Cost Engineering (AACE).
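
As a rough illustration of the three steps, here is a sketch using hypothetical cost-overrun ratios as the reference class; the figures are invented for the example, not drawn from any real dataset:

```python
import statistics

# Step 1: identify a reference class of similar past projects.
# Hypothetical overrun ratios: actual cost / estimated cost.
reference_class = [1.05, 1.10, 1.20, 1.25, 1.30, 1.40, 1.45, 1.60, 1.80, 2.10]

# Step 2: establish a probability distribution for the parameter
# being forecast (here, summarized by two percentiles).
median_overrun = statistics.median(reference_class)
p80 = sorted(reference_class)[int(0.8 * len(reference_class)) - 1]  # ~80th pct.

# Step 3: position the new project within that distribution,
# rather than trusting the "inside view" estimate alone.
own_estimate = 1_000_000  # the project's inside-view budget
print(f"Median-based forecast: ${own_estimate * median_overrun:,.0f}")
print(f"80%-confidence budget: ${own_estimate * p80:,.0f}")
```

The point of the exercise is the outside view: the distribution of what actually happened to comparable projects, not what the planners of this one hope will happen.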

More next week.

Previous Installments

You can find the bias you’re interested in by clicking in the tag cloud on the right. To find all posts concerning cognitive biases, click the very big phrase.

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect.

Part 2 — Base rate fallacy, congruence bias, experimenter’s bias

Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect

Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias

Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia

Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias

Part 7 — False consensus effect, false memory, Forer effect, framing, fundamental attribution error

Part 8 — Gambler’s fallacy, halo effect

Part 9 — Hawthorne effect, herd instinct, hindsight bias, hyperbolic discounting

Part 10 — Illusion of asymmetric insight, illusion of control, illusory superiority, impact bias, information bias, ingroup bias, irrational escalation

Part 11 — Just-world phenomenon, loss aversion, ludic fallacy, mere exposure effect, money illusion

Part 12 — Need for closure, neglect of probability, “not-invented-here” (NIH) syndrome, notational bias

Part 13 — Observer-expectancy effect, omission bias, optimism bias, ostrich effect, outgroup homogeneity bias, overconfidence effect

Part 14 — Pareidolia, planning fallacy, post-purchase rationalization

Part 15 — Projection bias, pseudocertainty effect, publication bias

Tuesday, November 30, 2010

Are You a Good Witch or a Bad Witch? (Part 15 of Cognitive Biases)

After a long hiatus while we renovated our house, our survey of cognitive biases continues with Pr-Pu. In this installment, we'll learn about projection bias, the pseudocertainty effect, and publication bias. Next week will be brought to you by the letter "R."

Projection Bias

Sigmund Freud named this bias, a psychological defense mechanism in which we unconsciously deny our own attributes, thoughts, or emotions and ascribe them to the outside world, whether to other people or to phenomena like the weather…or in one famous case, witches.

Projection bias is one of the medical explanations of bewitchment offered to explain the behavior of the afflicted children at Salem in 1692. The historian John Demos argues that the symptoms of bewitchment experienced by the afflicted girls during the witchcraft crisis arose from psychological projection: the girls had convulsive fits caused by repressed aggression, and were able to project this aggression without blame because of the speculation about witchcraft and bewitchment.

The Salem Witch Trials affected a community under considerable strife: property lines, grazing rights, and upheavals in the church had all given Salem Village a reputation as quarrelsome. Population pressures from increasing family size built demand for farmland. And in the Puritan culture, anything from loss of crops or livestock, illness or death of children, and even bad weather were generally seen as the wrath of God in action.

The Salem witches were hardly the first accused witches in the area. Making accusations of witchcraft against widowed or orphaned land-owning women was a good way to take their land. And, of course, witches served as a good target for the projection bias: all the ill feelings and bad conduct of the community were projected onto a group that couldn’t fight back.

The Salem Witch Trials claimed twenty victims.

Pseudocertainty Effect

Which of the following options do you prefer?

C. 25% chance to win $30 and 75% chance to win nothing
D. 20% chance to win $45 and 80% chance to win nothing

Now consider the following two-stage game. In the first stage, there is a 75% chance to end the game without winning anything, and a 25% chance to move into the second stage. If you reach the second stage you have a choice between:

E. a sure win of $30
F. 80% chance to win $45 and 20% chance to win nothing

You have to make your choice before the first stage.

Here's how most people choose:

In the first problem, 42% of participants chose option C while 58% chose option D. In the second, 74% of participants chose option E while only 26% chose option F.

The actual probability of winning money in option E (25% x 100% = 25%) and option F (25% x 80% = 20%) is the same as the probability of winning money in option C (25%) and option D (20%) respectively.

If the probability of winning money is the same, why do people choose differently? The answer is the pseudocertainty effect: the tendency to perceive an outcome as certain when it’s actually uncertain. It’s most easily observed in multi-stage decisions like the second problem. Since there is no choice to make in the first stage, individuals tend to discard it (the 75% chance of winning nothing) and consider only the second stage, where there is a choice; option E then looks like a sure thing.
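
The probabilities compound exactly as described. A small sketch of the arithmetic:

```python
# Compounded probabilities for the two-stage game described above.
p_stage2 = 0.25            # chance of surviving the first stage

p_win_E = p_stage2 * 1.00  # sure $30 if you get there  -> 25% overall
p_win_F = p_stage2 * 0.80  # 80% of $45 if you get there -> 20% overall

# The one-stage framings of the same gambles:
p_win_C = 0.25
p_win_D = 0.20

print(f"E vs C: {p_win_E:.0%} vs {p_win_C:.0%}")  # identical odds
print(f"F vs D: {p_win_F:.0%} vs {p_win_D:.0%}")  # identical odds

# Expected values, for comparison: D and F are actually worth more.
print(f"EV of C/E: ${0.25 * 30:.2f}, EV of D/F: ${0.20 * 45:.2f}")
```

The gambles are mathematically identical in both framings; only the illusion of certainty in option E changes the choice.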

Publication Bias

Out of a hundred scientific studies, ninety-five with a negative outcome (no correlation found) and five with a positive outcome (correlation found), which do you think are more likely to get into print?

The publication bias is, simply, that positive results are more likely to get published than negative ones. This is also known as the file drawer problem: many studies in a given area of research are conducted but never reported, and those that are not reported may on average report different results from those that are reported. Even a small number of studies lost "in the file drawer" can result in a significant bias.

The effect is compounded with meta-analyses and systematic reviews, which often form the basis for evidence-based medicine, and is further complicated when some of the research is sponsored by people and companies with a financial interest in positive results.
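
A quick simulation shows how strong the file drawer effect can be. The sketch below invents 1,000 small studies of an effect whose true size is zero and "publishes" only the striking positive results; the cutoff of 0.4 is an arbitrary stand-in for a significance filter, not any journal's actual rule:

```python
import random
import statistics

random.seed(42)

# Simulate 1,000 small studies of an effect whose true size is zero.
# Each study reports the mean of 20 noisy observations.
def run_study(n=20):
    return statistics.mean(random.gauss(0.0, 1.0) for _ in range(n))

results = [run_study() for _ in range(1000)]

# "Publish" only the striking positive results; the rest go in the
# file drawer.
published = [r for r in results if r > 0.4]

print(f"Studies run:       {len(results)}")
print(f"Studies published: {len(published)}")
print(f"True mean effect:      {statistics.mean(results):+.3f}")
print(f"Mean published effect: {statistics.mean(published):+.3f}")
# The published literature shows a substantial effect where none exists.
```

A meta-analysis built only on the published subset would confidently report an effect that the full set of studies flatly contradicts.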

According to researcher John Ioannidis, negative papers are most likely to be suppressed:

  • when the studies conducted in a field are smaller
  • when effect sizes are smaller
  • when there is a greater number and lesser preselection of tested relationships
  • where there is greater flexibility in designs, definitions, outcomes, and analytical modes
  • when there is greater financial and other interest and prejudice
  • when more teams are involved in a scientific field in chase of statistical significance.

Ioannidis observes that “claimed research findings may often be simply accurate measures of the prevailing bias.” In an effort to decrease this problem, some prominent medical journals began in 2004 to require registration of a trial before it commences, so that unfavorable results are not withheld from publication.

More next week.


Monday, July 5, 2010

Martian, Martian, Martian! (Part 14 of Cognitive Biases)

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, and boy howdy, are there a lot of them!

Here’s another installment of Cognitive Biases, this one brought to you by the range of Pa-Po. Pr-Pu will follow shortly.

(Marcia and Jan have only an indirect connection to what follows.)


On July 25, 1976, a camera aboard Viking 1 took a series of pictures of the Cydonia region of the planet Mars. Above, you see a photograph of a 1.2-mile-long Cydonian mesa at 40.75° north latitude and 9.46° west longitude. Nothing special, right?

How about the picture below?

This is the famous “Face on Mars,” an example of the cognitive bias known as pareidolia, the tendency of the human brain to turn vague or random stimuli into objects of significance. Watching for patterns in clouds is an exercise in voluntary pareidolia. Some people overrate the significance of these patterns, especially when they see apparent religious imagery, like the infamous Virgin Mary grilled cheese sandwich or the Jesus tortilla.

When you look at a Rorschach inkblot, the images you see are the result of “directed pareidolia.” The blots are carefully designed not to resemble any object in particular, so that what you see is what you project. Pareidolia appears in sound as well: there’s a tendency to hear apparently meaningful words and phrases in recordings played backward. To me, the resemblance between the sounds “Martian” and “Marcia” led to the Brady Bunch-influenced title of this installment.

Planning Fallacy

In a 1994 study, 37 psychology students were asked to estimate how long it would take to finish their senior theses. The average estimate was 33.9 days. They also estimated how long it would take "if everything went as well as it possibly could" (averaging 27.4 days) and "if everything went as poorly as it possibly could" (averaging 48.6 days). The average actual completion time was 55.5 days, with only about 30% of the students completing their thesis in the amount of time they predicted.

The researchers asked their students for estimates of when they (the students) thought they would complete their personal academic projects, with 50%, 75%, and 99% confidence.

13% of subjects finished their project by the time they had assigned a 50% probability level;
19% finished by the time assigned a 75% probability level;
45% (less than half) finished by the time of their 99% probability level.
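The study’s own numbers make the calibration gap easy to see. This short sketch (using only the percentages quoted above) compares each stated confidence level with the observed completion rate:

```python
# stated confidence level -> observed fraction of students actually done by then
calibration = {0.50: 0.13, 0.75: 0.19, 0.99: 0.45}

for stated, observed in calibration.items():
    gap = stated - observed
    print(f"stated {stated:.0%} confidence -> {observed:.0%} finished (gap {gap:.0%})")
```

Even at 99% stated confidence, fewer than half of the students delivered on time: confidence grows much faster than accuracy.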

In project management, this is sometimes referred to as Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law. (Douglas Hofstadter was the author of the 1979 work Gödel, Escher, Bach.) There are a number of theories as to why this is so often true. To my mind, the best explanation comes from Eliyahu Goldratt in his 1997 Critical Chain, which analyzed project management issues from a different perspective.

Goldratt argued that when asked to estimate task duration, people tended to give a safe estimate whenever possible. Knowing the estimate had safety built in, people then tended to procrastinate or attack other problems until the actual time available was insufficient to get the job done. This is also known as Parkinson’s Law, the tendency of work to expand to fill the time available for its completion.
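Goldratt’s mechanism can be sketched as a toy simulation (the distributions and percentiles below are illustrative assumptions, not from his book): each task gets a “safe” estimate near the 90th percentile of its true duration, but the worker procrastinates until only the median time remains, so roughly half of all tasks still finish late despite the padding:

```python
import math
import random

random.seed(1)

# Task durations are skewed: lognormal with median e^1.0 (about 2.72 time units).
MU, SIGMA = 1.0, 0.5
MEDIAN = math.exp(MU)
SAFE_ESTIMATE = math.exp(MU + 1.2816 * SIGMA)  # about the 90th percentile

N_TASKS = 10_000
late = 0
for _ in range(N_TASKS):
    true_duration = random.lognormvariate(MU, SIGMA)
    start = SAFE_ESTIMATE - MEDIAN   # procrastinate until "median" time remains
    if start + true_duration > SAFE_ESTIMATE:
        late += 1

print(f"tasks late despite padded estimates: {late / N_TASKS:.0%}")
```

The safety margin is consumed by the delayed start, so lateness depends only on whether the task runs longer than its median, which happens half the time.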

Numerous books (including some of mine) try to point out solutions, but the problem persists.

Post-Purchase Rationalization

There’s the infamous story about the guy who accidentally dropped a quarter in an outhouse, so he pulled out a $20 bill and threw it in afterward. When asked why, he said, “If I’ve got to go down there, it had better be worth my while.”

Post-purchase rationalization is the bias that once you’ve invested significant time, money, or energy in something, you tend to think it was all worthwhile. In his brilliant 1984 book, Influence: The Psychology of Persuasion, Dr. Robert Cialdini cites several examples. Just after placing a bet at the racetrack, people are much more confident about their horse winning than they were before they placed the bet. Researchers staged thefts on a New York City beach to see if onlookers would risk themselves to stop the thefts. Four in twenty observers gave chase. Then they did it again, but now the supposed victim first asked the onlooker, “Would you watch my things?” Nineteen out of twenty people tried to stop the theft or catch the thief.

Most interestingly, when an attendee at a sales meeting for Transcendental Meditation raised a series of embarrassing questions that undermined the claims made by the presenter, enrollments went up, not down! One person who signed up told the observer that he agreed with the points, but needed help so much that the criticisms made him sign up now, before he had time to think about them and fail to join up.

There’s a value in consistency. A foolish consistency, as Emerson reminds us, is the hobgoblin of little minds.

More to come…


Tuesday, June 29, 2010

Decisions, Decisions

“A decision,” wrote author Fletcher Knebel, “is what a man makes when he can’t find anybody to serve on a committee.”

Committees and teamwork are often an essential part of decision-making, but even in that framework, each of us must sooner or later take our stand, knowing full well the range of potential consequences. In an organization, a decision-making process must often be open and auditable. We must know not only the decision we make, but also the process that led us to that decision.

Decisions often require tradeoffs. A perfect solution may not exist. Each potential choice may have a downside, or may be fraught with risk. In some ways, making the least bad choice out of a set of poor alternatives takes greater skill and courage than making a conventionally “good” decision. Napoleon Bonaparte observed, “Nothing is more difficult, and therefore more precious, than being able to decide.”

The outcome of a decision, whether positive or negative, is not in itself proof of the decision’s quality, especially where probability is concerned. The odds may be dramatically in favor of a positive outcome, yet the dice may come up boxcars. Equally, if someone makes a stupid decision but gets lucky, the decision is no less stupid for its good outcome. A good decision process improves our odds and produces the desired outcome the majority of the time.
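A quick simulation makes the process-versus-outcome point concrete (the 70/30 odds below are hypothetical). Over many trials the better process wins far more often, yet on any single trial it can still lose:

```python
import random

random.seed(7)

# Hypothetical odds: the "good" process succeeds 70% of the time, the "bad" one 30%.
TRIALS = 10_000
good_wins = sum(random.random() < 0.7 for _ in range(TRIALS))
bad_wins = sum(random.random() < 0.3 for _ in range(TRIALS))

print(f"good process succeeded {good_wins / TRIALS:.0%} of the time")
print(f"bad process succeeded  {bad_wins / TRIALS:.0%} of the time")
```

Judging any single trial by its outcome would misclassify the good process 30% of the time; only the long run reveals which process was better.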

There are two types of complexity in decision-making. The first, and most obvious, is the technical complexity of the issue and the tradeoffs that may be required. The second, though not always openly addressed, is the organizational complexity: the number of people involved, the number of departments or workgroups that must be consulted, the existing relationships that shape communication among people and groups, the organizational culture, and the pressure of the political process.

Decisions also vary in their importance. Importance can be measured in terms of the consequences of the decision and the constraints imposed on the decision process. Critical decisions fall into three categories:

Time critical decisions must be made within a narrow window of time
Safety critical decisions have the potential for injury or death
Business/Financial critical decisions can affect the future or funding of the organization

At different times in the decision-making process, consider the opportunities as well as the negative consequences that can result both from the decision to act and from the decision to wait. If the consequences of a missed opportunity are greater, then the appropriate bias is in the direction of action. If an inappropriate decision could cause greater harm, the bias should fall in the direction of delay: gather more information and reassess.

Threats and opportunities both require proactive management, but opportunities even more so. Good luck and bad luck operate differently. If a person, say, loses $100, it’s gone, and all the consequences of that loss flow automatically. If, on the other hand, there’s a $100 bill somewhere in the area, it’s possible to miss it, there is no requirement to pick it up, and no obligation to spend it wisely. Exploiting opportunity requires observation, initiative, and wisdom.

Decisions must reflect goals. A successful project outcome is not necessarily an organizationally desirable outcome. Project managers and technical professionals must consider wider factors. Sometimes the right organizational decision involves hampering or even destroying the project.

Less than ideal circumstances are typically the reality. If there were more money, if the policies were different, if procedures didn’t require this item, the decision frame would be different—and so, likely, would be the decision itself. Generally, technical professionals prefer an emphasis on getting the job done correctly over meeting the schedule, but organizational circumstances may compel the latter.

When teams are involved in the decision, team decision-making considerations come into play. Conflict is not only inevitable, but if managed properly, desirable. The goal is to reach a consensus, which is not necessarily 100% agreement but rather a decision all team members can live with.

Compare the actual to the intended. If there is a discrepancy, the crucial question is “Why?” Knowing the actual results, would the team have done better with a different process? Should the process for future decisions be modified? Is there a trend in outcomes, especially in bad outcomes? If so, there may be process issues.

Thoughts adapted from “Decision Making,” by Michael Dobson et al., in Applied Project Management for Space Systems (Space Technology Series), McGraw-Hill, 2008.

Tuesday, June 22, 2010

Heads I Win, Tails I Win (and the Same to You)

Negotiation is such a fundamental “threshold” skill that it’s nearly impossible for you to succeed long-term without developing skills in this area.  Unfortunately, many people get the wrong idea about what negotiation is and how it works.

The distaste that some people feel for the concept of negotiation results from seeing negotiation as “win/lose” (I win, you lose) or “lose/win” (I give up rather than make an enemy of you) rather than “win/win” (we both come out of the negotiation with our needs met).  Beyond any moral or ethical qualms, the practical reality is that win/lose leaves someone unhappy, and that person is unlikely to forget.  We will have to deal with the leftover negativity at some future time.  “Win/win” approaches aren’t just nice, they’re necessary for our long-term relationships and performance.

But how is it possible to negotiate and have both parties win?

Understanding “win/win”

Negotiation isn’t simply about compromise (let’s just split it 50-50).  While sometimes a compromise solution in which each party gives a little bit is acceptable, often a compromise turns into “lose/lose.”

Roger Fisher and William Ury of the Harvard Negotiation Project point out that in many negotiations the participants see a “fixed pie,” but that it’s often possible to “expand the pie.”

They tell the story of “the proverbial sisters who quarreled over an orange.  After they finally agreed to divide the orange in half, the first sister took her half, ate the fruit, and threw away the peel, while the other threw away the fruit and used the peel from her half in baking a cake.”

In other words, “common sense” would suggest the orange could only be split in such a way that the parts added up to 100%, but this particular orange could have been split 100-100, not 50-50...because the two sisters had different yet complementary interests!

The “win/win” concept of negotiation emphasizes that preserving the relationship is an important goal in most negotiations, and that’s particularly crucial when the other participant in negotiation happens to be your boss.  You might be able to force your desires through his or her resistance, but you have to expect him or her to remember that in the future.  “If you wrong us,” Shylock says, “shall we not revenge?”

Win/win isn’t only ethically superior, it’s more practical as well.

Hard” vs. “soft” styles

You can make a lifetime study of negotiation, and it will benefit you in every area of your life.  It’s worth adding to your list of areas for personal and professional development, because you will ultimately find yourself in continual negotiation situations.  Negotiation styles are sometimes divided into “soft” and “hard,” but that’s not a very meaningful distinction.

The Fisher/Ury Getting to Yes techniques are sometimes referred to as “soft” because they involve collegiality and teamwork.  But even in a “hard” negotiation program such as Roger Dawson’s excellent The Secrets of Power Negotiating, you’ll find a commitment to “win/win” negotiation: “a) Never narrow negotiations down to just one issue.  b) Different people want different things.”

Some key principles of win/win negotiation

As you study negotiation skills, you’ll find that different authorities have certain specific detailed and tactical suggestions.  However, some general principles of effective negotiation are common to the various styles and strategies.

1. Do your homework.

Before negotiating anything with anybody, there are a few things you should do.

First, analyze your own goal, making sure that you focus on your interests (the reasons you want what you want) instead of only your positions (the specifics for which you’re asking).  The position of the sisters was that each wanted the orange.  To find the underlying interests, you focus on why.  Why do you want the orange?  What exactly would you do with it if you had it all?  What would not be useful or necessary for you?

Second, determine your bottom line.  What do you need--and what is the best you can do assuming that the negotiation goes nowhere?  You need to know this so you’ll know when you’re getting results...and so you won’t take an offer that’s less than what you’d get if there is no deal.  Fisher and Ury call this your “BATNA”:  your “best alternative to a negotiated agreement.”   Roger Dawson calls it “walk-away power.”

Third, put yourself in the shoes of the other person and do the same thing.  The more you understand the interests and goals of the other participant--and their own BATNA or walk-away options--the easier you’ll find it to locate win/win options.

2. Listen—for the real issues.

Being a good listener is a valuable negotiation technique for several reasons.  First, your understanding of the other person grows, which helps you in working toward the best outcome.  Second, when you listen, you automatically validate the other person, lowering their stress and emotions, and create a climate in which better results can occur.  Paraphrase what you’re being told to make sure you understand it fully.

3. Be persistent and patient.

You want to negotiate in order to achieve results for both parties.  Surrendering and giving in are examples of lose/win, not win/win strategies.  Keep your dignity and your personal strength intact by refusing to yield to hardball tactics and pressure.  One reason to study such tactics yourself is that it becomes easier to counter them in practice.

Being in a hurry to reach a deal often gives you a worse deal than you’d get with patience.  If a particular round of negotiation isn’t panning out successfully, maybe it’s time to walk away for now, think about what you’ve learned, and try again later.

4. Be clear and assertive.

You’ve heard it said, “If you don’t ask, you don’t get.”  That’s true even in cases where the other person isn’t necessarily hostile or negative to your interests.  If you don’t ask, there is a good chance the other person doesn’t even know what it is you want--and if he or she doesn’t know, how can you expect him or her to read your mind?  One of the most interesting elements of preparing well for a negotiation is how often you get your needs met without actually encountering the resistance you expected!

5. Allow face-saving.

When a negotiation or conflict situation ends up putting one person “in the wrong,” don’t be surprised if that person feels negative about it.  Being embarrassed or humiliated is not a positive emotion.  When you must show your boss that he or she is incorrect, or has made a mistake, or has made a bad decision, you not only have to get the situation corrected, you have to resolve the emotional issues in a way that allows your boss to “save face.”

Some techniques for face-saving include the “third party appeal,” in which you don’t say, “I’m right, you’re wrong,” but instead find a neutral third party (such as a reference book) that you’ll use to resolve the issue.  Another valuable technique is privacy.  It’s easier to admit to one person that one is wrong than admit it publicly to everyone.  (And never gloat afterward!)  A third is to find a way to allow the person to be partially right, or to allow yourself to be partially wrong.  (At least you can always allow for the possibility of improvement.)

You negotiate every day of your life and with all the people in your life.  Don’t wait until you are in a major conflict situation with the power dynamic stacked against you to develop this skill.

From Managing UP: 59 Ways to Build a Career-Advancing Relationship With Your Boss, by Michael and Deborah Singer Dobson (AMACOM, 2000). Copyright © 2000 Michael and Deborah Dobson. All Rights Reserved.

Tuesday, June 15, 2010

Paul is Dead and Sewell Avery is Stupid (Part 13 of Cognitive Biases)

It’s been a few weeks since the last installment of our survey of popular distortions in thought, perception, and decision-making. This installment is brought to you by The Story of O.

Observer-expectancy effect

In September 1969, Tim Harper, a student at Drake University in Des Moines, Iowa, published a humorously intended article in the campus newspaper, titled “Is Paul McCartney Dead?” The article listed a number of supposed reasons, including the claim that the surviving Beatles had planted backward messages in various songs.

About a month later, a caller to WKNR-FM in Detroit asked radio DJ Russ Gibb about the rumor, asking him to play “Revolution 9” backwards. Gibb did, and heard the phrase “Turn me on, dead man.”

Or so he thought.

The “Paul is dead” story quickly got out of control, and any number of people (some not even stoned) started to pick up clues. Even statements from Paul himself were not enough to stop the story. There are still claims today that photographs of Paul pre-1966 and post-1966 show significant differences in facial structure.

We see what we expect to see. If we’re looking for a particular answer, the cognitive bias known as the observer-expectancy effect results in unconscious manipulation of experiments and data so that yes, indeed, we find what we were looking for.

The use of double-blind methodology in performing experiments is one way to control for the observer-expectancy effect. Try this thought experiment: if you are wrong, what would you expect to see differently?

Omission bias

You know an opponent of yours is allergic to a certain food. Before a big competition, you have an opportunity to do one of two things. Which, in your judgment, is less immoral?

  1. Slip some of the allergen in his or her food.
  2. Notice that the opponent has accidentally ordered food containing the allergen, and choose to say nothing.

A clear majority say the harmful action (1) is worse than the harmful inaction (2). The net result for the opponent is the same, of course. The reason is omission bias, the belief that harmful inaction is ethically superior to harmful action.

Part of the reinforcement of the bias is that it’s harder to judge motive in cases of omission. “I didn’t know he was allergic!” you might argue, and there’s a good chance you’ll get away with it. Every employee knows the technique of “malicious compliance,” whether or not we personally use it — that’s the tactic of applying an order or directive with such appalling literal-mindedness that you guarantee a disastrous result.

Even if no one else can judge your intent, you can. Don’t let the omission bias lead you into ethical choices you’ll later regret.

Optimism bias

Optimism bias is the tendency for people to be over-optimistic about the outcome of planned actions. Excessive optimism can result in cost overruns, benefit shortfalls, and delays when plans are implemented or expensive projects are built. In extreme cases it can contribute to defeats in military conflicts, the outright failure of a project, or economic bubbles that end in market crashes.

A number of studies have found optimism bias in different kinds of judgment. These include:
  • Second-year MBA students overestimated the number of job offers they would receive and their starting salary.
  • Students overestimated the scores they would achieve on exams.
  • Almost all newlyweds in a US study expected their marriage to last a lifetime, even while aware of the divorce statistics.
  • Most smokers believe they are less at risk of developing smoking-related diseases than others who smoke.

Optimism bias can induce people to underinvest in primary and preventive care and other risk-reducing behaviors. Optimism bias affects criminals, who tend to misjudge the likelihood of experiencing legal consequences.

Optimism bias causes many people to grossly underestimate their odds of making a payment late. Companies have exploited this bias by increasing interest rates to punitive rates for any late payment, even if it is to another creditor. People subject to optimism bias think this won’t happen to them — but eventually it happens to almost everybody.

Optimism bias also causes many people to substantially underestimate the probability of having serious financial or liquidity problems, such as from a sudden job loss or severe illness. This can cause them to take on excessive debt under the expectation that they will do better than average in the future and be readily able to pay it off.

There’s a good side to optimism bias as well. People suffering from depression tend to be more accurate and less overconfident in assessing the probabilities of good and bad events occurring to others, but they tend to overestimate the probability of bad events happening to themselves, making them risk-averse in self-destructive ways.

Ostrich effect

The optimism bias is linked to the ostrich effect, a common strategy of dealing with (especially financial) risk by pretending it doesn’t exist. Research has demonstrated that people look up the value of their investments 50-80% less often during bad markets.

Outcome bias

At the end of World War II, Montgomery Ward chairman Sewell Avery made a fateful decision. The United States, he was sure, would experience major difficulties moving from a wartime to a peacetime economy. Millions of troops would return, all seeking jobs. At the same time, factories geared for the production of tanks, bombers, and fighting ships would grind to a halt with no further need for their production.

Let Sears and JCPenney expand; Montgomery Ward would stand pat on its massive cash reserves (one Ward vice president famously said, “Wards is one of the finest banks with a storefront in the US today”), and when the inevitable collapse came, Montgomery Ward would swallow its rivals at pennies on the dollar.

As we know, it didn’t turn out that way. Instead of falling back into depression, the United States in the postwar years saw unprecedented economic growth.

Sewell Avery was wrong. But was he stupid?

Outcome bias describes our tendency to judge the quality of the decision by the outcome: Sewell Avery was stupid. But that’s not fair. The outcome of the decision doesn’t by itself prove whether the decision was good or bad. Lottery tickets aren’t a good investment strategy. The net return is expected to be negative. On the other hand, occasionally someone wins. That doesn’t make them a genius. Wearing your seatbelt is a good idea. There are, alas, certain rare accidents in which a seatbelt could hamper your escape.
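The lottery point can be made concrete with a quick expected-value calculation (the ticket price, odds, and prize below are hypothetical):

```python
# Hypothetical lottery: a $2 ticket with a 1-in-10,000,000 shot at $5,000,000.
ticket_price = 2.0
p_win = 1 / 10_000_000
prize = 5_000_000

# Expected value per ticket: probability-weighted winnings minus the cost.
expected_value = p_win * prize - ticket_price
print(f"expected value per ticket: ${expected_value:.2f}")
```

The expected value is negative, so buying the ticket is a poor decision even on those rare occasions when it happens to pay off.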

As it happens, Avery was stupid — not because he made a decision that turned out to be wrong, but because he stuck to it in the face of increasing evidence to the contrary, even firing people who brought him bad news. But that’s a different bias.

Outgroup homogeneity bias

In response to the claim that all black people look alike, comedian Redd Foxx performed a monologue that listed some thirty or forty different shades of black, set against the single color of white. “No, dear white friends,” Foxx said, “it is you who all look alike.”

The proper name for this perception (in all directions) is “outgroup homogeneity bias,” the tendency to see members of our own group as more varied than members of other groups. Interestingly, this turns out to be unrelated to the number of members of the other group we happen to know. The bias has been found even when groups interact frequently.

Overconfidence effect

One of the most solidly demonstrated cognitive biases is the “overconfidence effect,” the degree to which your personal confidence in the quality and accuracy of your own judgment exceeds the actual quality and accuracy. In one experiment, people answered questions and rated how confident they were in each answer. People who rated their answers as 99% certain turned out to be wrong about 40% of the time.

The overconfidence gap is greatest when people are answering hard questions about unfamiliar topics. What’s your guess as to the total egg production of the United States? How confident are you in the guess you just made? (The average person expects an error rate of 2%, but the real error rate averages about 46%.)

Clinical psychologists turn out to have a high margin of overconfidence.

Weather forecasters, on the contrary, have none.


Tuesday, June 8, 2010

Failure *Is* An Option!

You probably won’t see the American Movie Classics channel run a festival of ‘‘Great Project Management Movies’’ any time soon, but if they did, Ron Howard’s motion picture Apollo 13, based on the real-life story, would be a natural candidate. Faced with a potentially disastrous accident, project teams overcome one potentially fatal barrier after another to bring the crew safely back to Earth, guided by flight director Gene Kranz’s mantra: ‘‘Failure is not an option.’’

But of course, failure is an option. Sometimes, it looks like the most likely option of all.

The odds in the actual Apollo 13 disaster were stacked against a happy outcome, and everyone—including Gene Kranz—had to be well aware of that fact. One of the key scenes in the movie involves a team of engineers trying to figure out how to rig a CO2 filter out of miscellaneous junk.

  • The time constraint: before the CO2 levels overwhelm the astronauts. 
  • The performance goal: to work well enough to let the astronauts breathe during the long trip home. 
  • The budget: the junk on the table. 

And no one knows whether it’s even possible.

How do you balance the value of realism against the value of optimism in solving problems?

One way is to reject the false dilemma the question poses. Failure is not only an option, it’s a gateway to success ... if you fail in the right dimension.

If there is a trade-off to be made between the time constraint and the performance criteria, we know that ultimate failure—the death of the Apollo 13 astronauts—comes most rapidly from failure to meet the time constraint. That is, if we build a perfect CO2 filter, but we finish it too late, we’ve still failed. Perfect performance does not compensate for a failed deadline.

But wait! Why isn’t the reverse equally true? If you fail to meet the performance criteria, isn’t it irrelevant how quickly you fail to do so? Actually, it depends on the extent of the failure.

To illustrate, let’s look at this scenario: You’ve managed to come up with an inefficient partial solution that will last only half as long as it’s going to take to get the astronauts back home, but you’ve done so within the original time constraint. Do you take this solution? Absolutely!

Although you have failed to meet the performance goal for the project within the original time constraint, you’ve reset the game clock. With a day or more to work instead of mere hours, your chances of finding a solution that solves the remainder of the problem are that much better.

The right kind of failure is not only an option, but sometimes a desirable one. In this project, we can’t accept a failure to meet the time constraint, but we can live with a partial performance failure and stay in this game.

This piece was written for Federal PM Focus, a newsletter published by Management Concepts. Click the title above to register for a free 30-day trial.

Adapted with permission from The Six Dimensions of Project Management: Turning Constraints into Resources, by Michael Dobson and Heidi Feickert, © 2007 by Management Concepts, Inc.  All rights reserved. www.managementconcepts.com/pubs

Tuesday, May 25, 2010

Understanding Politics (Office and Otherwise)

“Politics, n.  A strife of interests masquerading as a contest of principles.”

— Ambrose Bierce, The Devil’s Dictionary, 1906

Take this test to see if you have office politics in your organization.

        1) Count the employees.

        2) Does the number exceed 3?

        3) If the answer to #2 is “Yes,” you definitely have office politics.

Within this concept of politics you can play many different ways for many different goals.  Some tactics are unethical, others ethical.  Some goals are unethical, others ethical.  You must still use office politics as the vehicle to achieve your goals, because it’s the ultimate arena in which the necessary decisions and consensus will be made.

Most definitions of politics (or of any controversial topic, for that matter) reflect the moral outlook of the definer.  The American Heritage Dictionary, for example, describes a politician as “one who is interested in personal or partisan gain and other selfish interests” and politics as “partisan or factional intrigue within a given group.”  But the root word “politic,” from Chambers Concise 20th Century Dictionary, means “in accordance with good policy:  acting or proceeding from motives of policy:  prudent: discreet.”


Consider these truisms about people and organizations:

  • People have principled disagreements about policy and direction of the organization.
  • People have different visions and goals.
  • People have different personal and selfish interests.
  • People have egos and like them recognized and stroked.
  • People have different personalities that others react to in different ways.
  • People remember past actions and behaviors.

There’s nothing very radical, nor inherently unprincipled or evil, in these statements; most people will easily acknowledge their truth:  that people don’t check their humanity at the door when they punch in on the time clock.


The next truism to consider is scarcity.  From the days of the pyramids to the present, every organization, company, or government has lived with the reality that there are far more desirable projects and activities than there are resources to pursue them.  In other words, work is infinite but resources are finite.

Every time senior management gives you a dollar, or a person, or a week, it becomes a dollar, a person, or a week they can’t give to someone else for something that also has value.  (In financial terms, this is known as “opportunity cost.”)  That sets up an unavoidable competition, as we each strive to get the resources we need to accomplish our objectives, and the playing out of the informal competition is what we know as office politics.  And if our organization is under stress or financial challenge, the struggle gets that much worse.

It also gets worse when what’s at stake is competition for access to status, which is also a kind of limited resource.  For most people, when their personal status is at stake, the kid gloves come off.

Office Politics Defined  

This leads us to the following operational definition of politics:

Politics (\ˈpä-lə-ˌtiks\):  The informal and sometimes emotion-driven process of allocating limited resources and working out goals, decisions, and actions in an environment of people with different and competing interests and personalities.

This definition is intentionally neutral, as simply descriptive as we can make it.  It helps us understand what we’re about.  Here are the key points of this definition amplified:
  • informal and sometimes emotion-driven.  Office politics is separate from the formal organizational structure and involves human dynamics and emotions in addition to facts and reason.
  • allocating limited resources.  The ultimate outcome of office politics—and how success and failure are measured—is how the organization’s resources (time, money, people) are allocated.
  • working out goals, decisions, and actions.  The purpose of office politics is to work out goals, decisions, and actions that can turn into reality.  This often involves negotiation, compromise, and application of power.
  • different and competing interests and personalities.  People have different ideas and desires about what should be done, some based on reason and analysis, some based on emotion or personal agenda.  Personal likes and dislikes inevitably affect decisions.

There's no point in getting too wrapped up about the negative side of politics; it's inevitable wherever human beings gather together. Instead, it's better to learn how to play in a principled, positive, and above all effective manner.

Adapted from Enlightened Office Politics by Michael and Deborah Singer Dobson (AMACOM, 2001).

Tuesday, May 18, 2010

Career Proofing

Today's SideWise Insight is a reminder that in uncertain times, it's always wise to "career-proof" your job environment. This piece is focused on Federal employees, but there's good advice here no matter what field you're in.

In the career services field, it’s not uncommon to meet people who haven’t looked for a job in literally decades. They are long out of practice, their skills are rusty, and it’s often a desperate job situation that has finally driven them back into the market.  Such people often have a difficult time in career transitions, even though they frequently have tremendous skills and experience to offer.

A job isn’t a marriage, so continuing to date a little on the side isn’t cheating.  The job, let us note, has not made a marriage commitment to you.  It is a professional relationship that may be very pleasant and very successful, but it is situational and capable of being set aside by either party in the event circumstances change.

Job hunting is not just the narrow activity of sending out résumés in response to vacancy announcements and going on interviews when asked.  After all, at certain points in your career, you may not actually want another job.  Should you still job-hunt?  Absolutely yes, although the job-hunting activities a career-proofing strategist uses in such a case may be different.  You don’t want to be caught unprepared in case of an emergency, and you don’t want to miss that perfect opportunity when it reveals itself.

Keep up with the market.  Whether you are actively seeking a new position or not, you should always stay up to date with what’s actually going on in your field, in your agency, in your market, in your private sector equivalent, and in other agencies that employ similar specialists.  What newspapers or newsletters cover your areas?  What websites or professional organizations or other resources can you follow that will give you the news?

Read postings in your field; track job offerings in various agencies and programs to see what kinds of skills are in demand, and what kinds of skills are on the wane.  That’s often a good leading indicator of the kinds of skills you should work at acquiring.

Watch for Federal publications covering agencies and programs that concern you.  Monitor Congressional committees whose work affects your agencies or programs.  Which lobbying groups or public interest groups focus on issues that have an impact on your field?  (Don’t study only the side with which you sympathize; learn about the other side(s) as well.)

Remember that “lunch is a verb,” and use some of your lunch hours as networking opportunities.  Try to expand the circle of people with whom you lunch at least occasionally, just to trade general gossip about what’s going on.  You don’t share confidential information, of course, but there’s a great deal of general information that can be shared quite appropriately.

As you develop your sources and your network, you’ll find yourself increasingly in demand as someone “in the know,” and knowledge is an important ingredient in practical political power.  What is most interesting is how it makes you more effective at your job and in your organization as well.  In-the-know people understand the bigger picture.  They tend to have influence.  They can get things done.  They often panic less when bad news (or a bad rumor) breaks.  And good news sources are valuable to their bosses and others in the organizational hierarchy.

Adapted from “Federal Career Development: A Strategy Guide,” by Michael and Deborah Singer Dobson, from The Federal Résumé Guidebook: Second Edition, by Kathryn Kraemer Troutman (JISTWorks, 1999; article copyright © 1999 Michael Dobson). Current editions are available from The Résumé Place at http://www.resume-place.com/books/.

Tuesday, May 11, 2010

Why We Work With Jerks

BRITANNUS (shocked). “Caesar, this is not proper.”

THEODOTUS (outraged). “How!”

CAESAR (recovering his self-possession). “Pardon him, Theodotus: he is a barbarian, and thinks the customs of his tribe and island are the laws of nature.”
— George Bernard Shaw, Caesar and Cleopatra, Act II, (1900)

If you're surrounded by difficult people, you may wonder why the organization doesn't take more of a lead in dealing with them.

Surely the costs of inappropriate behavior should compel the organization to action—and yet it’s seldom the case that the organization acts except in the most egregious of situations.

Sometimes it’s because the organization itself is part of the problem.

You may have noticed that a lot of people describe their office environment as if it’s a war zone. We take flak, someone gets shot down, the boss is out for blood, someone’s getting the ax—it’s a pretty violent place. And, of course, some people work in offices that are all too reminiscent of a war zone.

There’s the official corporate culture and the real culture. The official culture is usually embodied in a vision or mission statement: “We value honesty, diversity, and hard work.” If in fact people are praised and rewarded for honesty, diversity, and hard work, then the match between the official culture and the real culture is close.

But sometimes there’s a mismatch. If honesty is punished, diversity nonexistent, and nepotism is rampant, then the official culture isn’t real. You often still need to give lip service to the official version, but the real culture is reflected in the behavior you actually witness day in and day out.

Look at the very top of the organization. Does verbal abuse start there and go down through the ranks? If so, it’s hardly surprising to see the same behavior reflected in middle managers.

If you are part of an organization whose culture rewards difficult behavior, your attempts to modify the behavior will be less effective, and may not work at all.

If that’s the case, your options are limited. Depending on your organizational rank and power, you may be able to force a change in the corporate culture.

If the difficult behavior violates laws against harassment and discrimination, you may be able to force change even if you’re in a lower-ranking position. Be extremely careful with the threat of legal pressure. Even if you succeed in forcing the organizational change, you may suffer negative career consequences. It’s all too common for other people—not you—to reap the benefit for such a sacrifice.

If you can’t change the culture, or the cost of forcing change is unacceptable, the two remaining choices are (a) learn to live with it and (b) get out. If your decision is to leave, prepare your exit carefully. If you’re going to stay, make sure the consequences to your mental health and happiness are within an acceptable range.

One word of caution: If you’ve generally had good working relationships and this job is poisonous, it’s probably them.

If, on the other hand, you encounter the same difficult behavior over and over again, it’s probably you.

From: Work Smart: Dealing With Difficult People (2nd ed.), William Lundin, Ph.D., Kathleen Lundin, and Michael S. Dobson (AMACOM, 2009)

Tuesday, May 4, 2010

Highly Motivated

Ever met an unmotivated person?

Think again. If someone spends more time and energy scheming to get out of work than it would take to do the work itself, they're motivated, all right. Just not in the direction you would prefer.

If someone isn’t helping you to achieve your goals, there are three possible reasons.

a) Ignorance: They don’t know what you want.
b) Inability: They can’t do what you want.
c) Choice: They won’t do what you want.

Ignorance. If people don’t know what you want, the problem isn’t with them; it’s with you. Even if you know you’ve told them, don’t assume the message has really gotten across. You may not have been as clear as you could have been, and they may not have been listening as well as they might have. It's always a good idea to check to make sure people really do know what you want. If that's the only thing standing in the way of their action, you're done.

Inability. When we say “can’t do,” we mean it literally: even a million-dollar bounty wouldn’t change the outcome. “Can’t do” situations can sometimes be fixed by training, by access to necessary resources, by going to a different person, or by altering your request. There’s nothing personal here; it’s simply a problem, and either you can fix it or you can’t.

Choice. If someone knows what you want and can do it, then they get to make a choice about whether to do it. Why would they choose not to do it? Again, three reasons:

a) Performance is punished
b) Failure is rewarded
c) Performance doesn’t matter

Watch out for “perverse incentives,” ways in which we inadvertently push people in the direction of the very behavior we want them to avoid. If you get rewarded for doing a great job with even more work, perhaps that great job isn’t completely in your own interest. If failure to exceed your quota for the month gets you better liked by your colleagues, and there’s not much consequence from management, failure may give you the greatest personal reward. If you think no one reads or cares about that weekly report, it doesn’t seem to matter much if you do it well or poorly.

When you’re in a leadership role or simply need help and cooperation from your colleagues, try to find out why they’re behaving as they do. If you know whether their behavior is a choice or not, you can pick the best strategy for getting results.

From Work Smart: Goal Setting (2nd edition), by Susan B. Wilson and Michael Dobson.