Tuesday, June 29, 2010

Decisions, Decisions

“A decision,” wrote author Fletcher Knebel, “is what a man makes when he can’t find anybody to serve on a committee.”

Committees and teamwork are often an essential part of decision-making, but even in that framework, each of us must sooner or later take our stand, knowing full well the range of potential consequences. In an organization, a decision-making process must often be open and auditable. We must know not only the decision we make, but also the process that led us to that decision.

Decisions often require tradeoffs. A perfect solution may not exist. Each potential choice may have a downside, or may be fraught with risk. In some ways, making the least bad choice out of a set of poor alternatives takes greater skill and courage than making a conventionally “good” decision. Napoleon Bonaparte observed, “Nothing is more difficult, and therefore more precious, than being able to decide.”

The outcome of a decision, whether positive or negative, is not in itself proof of the decision’s quality, especially where probability is concerned. The odds may be dramatically in favor of a positive outcome, yet the dice may come up boxcars. Equally, if someone makes a stupid decision but gets lucky, the decision is no less stupid in spite of a good outcome. A good decision process improves our odds and results in the desired outcome the majority of the time.
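The gap between decision quality and outcome quality is easy to demonstrate with a quick simulation (a hedged sketch; the probabilities are illustrative, not from the original text):

```python
import random

random.seed(42)

def success_rate(win_prob, trials=10_000):
    """Fraction of trials that come out well for a choice with the given odds."""
    wins = sum(random.random() < win_prob for _ in range(trials))
    return wins / trials

# A good decision process: the odds are dramatically in your favor.
good = success_rate(0.9)
# A stupid decision: the odds are heavily against you.
stupid = success_rate(0.1)

print(f"Good decision:   succeeds about {good:.1%} of the time")
print(f"Stupid decision: succeeds about {stupid:.1%} of the time")
```

Note that the good decision still fails hundreds of times out of ten thousand, and the stupid one still gets lucky now and then; judging any single trial by its outcome alone would mislead you.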

There are two types of complexity in decision-making. The first, and most obvious, is the technical complexity of the issue and the tradeoffs that may be required. The second, though not always openly addressed, is the organizational complexity: the number of people involved, the number of departments or workgroups that must be consulted, the existing relationships that shape communication among people and groups, the organizational culture, and the pressure of the political process.

Decisions also vary in their importance. Importance can be measured in terms of the consequences of the decision and the constraints imposed on the decision process. Critical decisions fall into three categories:

  • Time critical decisions must be made within a narrow window of time.
  • Safety critical decisions have the potential for injury or death.
  • Business/financial critical decisions can affect the future or funding of the organization.

At different times in the decision-making process, consider the opportunities as well as the negative consequences that can result both from the decision to act and from the decision to wait. If the consequences of a missed opportunity are greater, then the appropriate bias is in the direction of action. If an inappropriate decision could cause greater harm, the bias should fall in the direction of delay: gather more information and reassess.
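This weighing of action against delay can be sketched as a simple expected-cost comparison (all numbers hypothetical, purely for illustration):

```python
# A minimal sketch of the act-vs-wait tradeoff (all numbers hypothetical).
p_wrong = 0.3          # chance that acting now turns out to be the wrong call
cost_wrong = 100_000   # harm done if the action is wrong
p_missed = 0.6         # chance the opportunity disappears while we wait
cost_missed = 20_000   # value lost if the opportunity is missed

expected_cost_act = p_wrong * cost_wrong
expected_cost_wait = p_missed * cost_missed

# Bias toward whichever choice carries the smaller expected cost.
bias = "wait and reassess" if expected_cost_wait < expected_cost_act else "act now"
print(f"Act now:  expected cost ${expected_cost_act:,.0f}")
print(f"Wait:     expected cost ${expected_cost_wait:,.0f}")
print(f"Appropriate bias: {bias}")
```

With these particular numbers the potential harm from a wrong action outweighs the missed opportunity, so the bias falls toward delay; change the inputs and the bias flips.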

Threats and opportunities both require proactive management, but opportunities even more so. Good luck and bad luck operate differently. If a person, say, loses $100, it’s gone, and all the consequences of that loss flow automatically. If, on the other hand, there’s a $100 bill somewhere in the area, it’s possible to miss it, there is no requirement to pick it up, and no obligation to spend it wisely. Exploiting opportunity requires observation, initiative, and wisdom.

Decisions must reflect goals. A successful project outcome is not necessarily an organizationally desirable outcome. Project managers and technical professionals must consider wider factors. Sometimes the right organizational decision involves hampering or even destroying the project.

Less than ideal circumstances are typically the reality. If there were more money, if the policies were different, if procedures didn’t require this item, the decision frame would be different—and so, likely, would be the decision itself. Generally, technical professionals prefer an emphasis on getting the job done correctly over meeting the schedule, but organizational circumstances may compel the latter.

When teams are involved in the decision, team decision-making considerations come into play. Conflict is not only inevitable, but if managed properly, desirable. The goal is to reach a consensus, which is not necessarily 100% agreement but rather a decision all team members can live with.

Compare the actual to the intended. If there is a discrepancy, the crucial question is “Why?” Knowing the actual results, would the team have done better with a different process? Should the process for future decisions be modified? Is there a trend in outcomes, especially in bad outcomes? If so, there may be process issues.

Thoughts adapted from “Decision Making,” by Michael Dobson et al., in Applied Project Management for Space Systems (Space Technology Series), McGraw-Hill, 2008.

Tuesday, June 22, 2010

Heads I Win, Tails I Win (and the Same to You)

Negotiation is such a fundamental “threshold” skill that it’s nearly impossible for you to succeed long-term without developing it.  Unfortunately, many people get the wrong idea about what negotiation is and how it works.

The distaste that some people feel for the concept of negotiation results from seeing negotiation as “win/lose” (I win, you lose) or “lose/win” (I give up rather than make an enemy of you) rather than “win/win” (we both come out of the negotiation with our needs met).  Moral and ethical qualms aside, the practical reality is that win/lose and lose/win both leave someone unhappy, and that person is unlikely to forget.  We will have to deal with the leftover negativity at some future time.  “Win/win” approaches aren’t just nice; they’re necessary for our long-term relationships and performance.

But how is it possible to negotiate and have both parties win?

Understanding “win/win”

Negotiation isn’t simply about compromise (let’s just split it 50-50).  While sometimes a compromise solution in which each party gives a little bit is acceptable, often a compromise turns into “lose/lose.”

Roger Fisher and William Ury of the Harvard Negotiation Project point out that in many negotiations the participants see a “fixed pie,” but that it’s often possible to “expand the pie.”

They tell the story of “the proverbial sisters who quarreled over an orange.  After they finally agreed to divide the orange in half, the first sister took her half, ate the fruit, and threw away the peel, while the other threw away the fruit and used the peel from her half in baking a cake.”

In other words, “common sense” would suggest the orange could only be split in such a way that the parts added up to 100%, but this particular orange could have been split 100-100, not 50-50...because the two sisters had different yet complementary interests!

The “win/win” concept of negotiation emphasizes that preserving the relationship is an important goal in most negotiations, and that’s particularly crucial when the other participant in negotiation happens to be your boss.  You might be able to force your desires through his or her resistance, but you have to expect him or her to remember that in the future.  “If you wrong us,” Shylock says, “shall we not revenge?”

Win/win isn’t only ethically superior, it’s more practical as well.

“Hard” vs. “soft” styles

You can make a lifetime study of negotiation, and it will benefit you in every area of your life.  It’s worth adding to your list of areas for personal and professional development, because you will ultimately find yourself in continual negotiation situations.  Negotiation styles are sometimes divided into “soft” and “hard,” but that’s not a very meaningful distinction.

The Fisher/Ury Getting to Yes techniques are sometimes referred to as “soft” because they involve collegiality and teamwork.  But even in a “hard” negotiation program such as Roger Dawson’s excellent The Secrets of Power Negotiating, you’ll find a commitment to “win/win” negotiation: “a) Never narrow negotiations down to just one issue.  b) Different people want different things.”

Some key principles of win/win negotiation

As you study negotiation skills, you’ll find that different authorities have certain specific detailed and tactical suggestions.  However, some general principles of effective negotiation are common to the various styles and strategies.

1. Do your homework.

Before negotiating anything with anybody, there are a few things you should do.
First, analyze your own goal, making sure that you focus on your interests (the reasons you want what you want) instead of only your positions (the specifics for which you’re asking).  The position of the sisters was that each wanted the orange.  To find the underlying interests, you focus on why.  Why do you want the orange?  What exactly would you do with it if you had it all?  What would not be useful or necessary for you?

Second, determine your bottom line.  What do you need--and what is the best you can do assuming that the negotiation goes nowhere?  You need to know this so you’ll know when you’re getting results...and so you won’t take an offer that’s less than what you’d get if there is no deal.  Fisher and Ury call this your “BATNA”:  your “best alternative to a negotiated agreement.”   Roger Dawson calls it “walk-away power.”

Third, put yourself in the shoes of the other person and do the same thing.  The more you understand the interests and goals of the other participant--and their own BATNA or walk-away options--the easier you’ll find it to locate win/win options.

2. Listen—for the real issues.

Being a good listener is a valuable negotiation technique for several reasons.  First, your understanding of the other person grows, which helps you in working toward the best outcome.  Second, when you listen, you automatically validate the other person, lowering their stress and emotions, and create a climate in which better results can occur.  Paraphrase what you’re being told to make sure you understand it fully.

3. Be persistent and patient.

You want to negotiate in order to achieve results for both parties.  Surrendering and giving in are examples of lose/win, not win/win strategies.  Keep your dignity and your personal strength intact by refusing to yield to hardball tactics and pressure.  One reason to study such tactics yourself is that it becomes easier to counter them in practice.

Being in a hurry to reach a deal often gives you a worse deal than you’d get with patience.  If a particular round of negotiation isn’t panning out successfully, maybe it’s time to walk away for now, think about what you’ve learned, and try again later.

4. Be clear and assertive.

You’ve heard it said, “If you don’t ask, you don’t get.”  That’s true even in cases where the other person isn’t necessarily hostile or negative to your interests.  If you don’t ask, there is a good chance the other person doesn’t even know what it is you want--and if he or she doesn’t know, how can you expect him or her to read your mind?  One of the most interesting elements of preparing well for a negotiation is how often you get your needs met without actually encountering the resistance you expected!

5. Allow face-saving.

When a negotiation or conflict situation ends up making one person be “in the wrong,” don’t be surprised if that person feels negative about it.  Being embarrassed or humiliated is not a positive emotion.  When you must show your boss that he or she is incorrect, or has made a mistake, or has made a bad decision, you not only have to get the situation corrected, you have to resolve the emotional issues in a way that allows your boss to “save face.”

Some techniques for face-saving include the “third party appeal,” in which you don’t say, “I’m right, you’re wrong,” but instead find a neutral third party (such as a reference book) that you’ll use to resolve the issue.  Another valuable technique is privacy.  It’s easier to admit to one person that one is wrong than admit it publicly to everyone.  (And never gloat afterward!)  A third is to find a way to allow the person to be partially right, or to allow yourself to be partially wrong.  (At least you can always allow for the possibility of improvement.)

You negotiate every day of your life and with all the people in your life.  Don’t wait until you are in a major conflict situation with the power dynamic stacked against you to develop this skill.

From Managing UP: 59 Ways to Build a Career-Advancing Relationship With Your Boss, by Michael and Deborah Singer Dobson (AMACOM, 2000). Copyright © 2000 Michael and Deborah Dobson. All Rights Reserved.

Tuesday, June 15, 2010

Paul is Dead and Sewell Avery is Stupid (Part 13 of Cognitive Biases)

It’s been a few weeks since the last installment of our survey of popular distortions in thought, perception, and decision-making. This installment is brought to you by The Story of O.

Observer-expectancy effect

In September 1969, Tim Harper, a student at Drake University in Des Moines, Iowa, published a humorously intended article in the campus newspaper, titled “Is Paul McCartney Dead?” The article listed a number of supposed reasons, including the claim that the surviving Beatles had planted backward messages in various songs.

About a month later, a caller to WKNR-FM in Detroit asked radio DJ Russ Gibb about the rumor, asking him to play “Revolution 9” backwards. Gibb did, and heard the phrase “Turn me on, dead man.”

Or so he thought.

The “Paul is dead” story quickly got out of control, and any number of people (some not even stoned) started to pick up clues. Even statements from Paul himself were not enough to stop the story. There are still claims today that photographs of Paul pre-1966 and post-1966 show significant differences in facial structure.

We see what we expect to see. If we’re looking for a particular answer, the cognitive bias known as the observer-expectancy effect results in unconscious manipulation of experiments and data so that yes, indeed, we find what we were looking for.

The use of double-blind methodology in performing experiments is one way to control for the observer-expectancy effect. Try this thought experiment: if you are wrong, what would you expect to see differently?

Omission bias

You know an opponent of yours is allergic to a certain food. Before a big competition, you have an opportunity to do one of two things. Which, in your judgment, is less immoral?

  1. Slip some of the allergen in his or her food.
  2. Notice that the opponent has accidentally ordered food containing the allergen, and choose to say nothing.

A clear majority say the harmful action (1) is worse than the harmful inaction (2). The net result for the opponent is the same, of course. The reason is omission bias, the belief that harmful inaction is ethically superior to harmful action.

Part of the reinforcement of the bias is that it’s harder to judge motive in cases of omission. “I didn’t know he was allergic!” you might argue, and there’s a good chance you’ll get away with it. Every employee knows the technique of “malicious compliance,” whether or not we personally use it — that’s the tactic of applying an order or directive with such appalling literal-mindedness that you guarantee a disastrous result.

Even if no one else can judge your intent, you can. Don’t let the omission bias lead you into ethical choices you’ll later regret.

Optimism bias

Optimism bias is the tendency for people to be over-optimistic about the outcome of planned actions. Excessive optimism can result in cost overruns, benefit shortfalls, and delays when plans are implemented or expensive projects are built. In extreme cases, the results can include military defeats, the complete failure of a project, or economic bubbles and market crashes.

A number of studies have found optimism bias in different kinds of judgment. These include:
  • Second-year MBA students overestimated the number of job offers they would receive and their starting salary.
  • Students overestimated the scores they would achieve on exams.
  • Almost all newlyweds in a US study expected their marriage to last a lifetime, even while aware of the divorce statistics.
  • Most smokers believe they are less at risk of developing smoking-related diseases than others who smoke.
Optimism bias can induce people to underinvest in primary and preventive care and other risk-reducing behaviors. Optimism bias affects criminals, who tend to misjudge the likelihood of experiencing legal consequences.

Optimism bias causes many people to grossly underestimate their odds of making a payment late. Companies have exploited this bias by raising interest rates to punitive levels after any late payment, even one to another creditor. People subject to optimism bias think this won’t happen to them, but eventually it happens to almost everybody.

Optimism bias also causes many people to substantially underestimate the probability of having serious financial or liquidity problems, such as from a sudden job loss or severe illness. This can cause them to take on excessive debt under the expectation that they will do better than average in the future and be readily able to pay it off.

There’s a good side to optimism bias as well. Depressives tend to be more accurate and less overconfident in assessing the probabilities of good and bad events occurring to others, but they overestimate the probability of bad events happening to themselves, making them risk-averse in self-destructive ways.

Ostrich effect

The optimism bias is linked to the ostrich effect, a common strategy of dealing with (especially financial) risk by pretending it doesn’t exist. Research has demonstrated that people look up the value of their investments 50-80% less often during bad markets.

Outcome bias

At the end of World War II, Montgomery Ward chairman Sewell Avery made a fateful decision. The United States, he was sure, would experience major difficulties moving from a wartime to a peacetime economy. Millions of troops would return, all seeking jobs. At the same time, factories geared for the production of tanks, bombers, and fighting ships would grind to a halt with no further need for their production.

Let Sears and JCPenney expand; Montgomery Ward would stand pat on its massive cash reserves (one Ward vice president famously said, “Wards is one of the finest banks with a storefront in the US today”), and when the inevitable collapse came, Montgomery Ward would swallow its rivals at pennies on the dollar.

As we know, it didn’t turn out that way. Instead of falling back into depression, the United States in the postwar years saw unprecedented economic growth.

Sewell Avery was wrong. But was he stupid?

Outcome bias describes our tendency to judge the quality of the decision by the outcome: Sewell Avery was stupid. But that’s not fair. The outcome of the decision doesn’t by itself prove whether the decision was good or bad. Lottery tickets aren’t a good investment strategy. The net return is expected to be negative. On the other hand, occasionally someone wins. That doesn’t make them a genius. Wearing your seatbelt is a good idea. There are, alas, certain rare accidents in which a seatbelt could hamper your escape.
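The lottery example reduces to a simple expected-value calculation (the figures below are illustrative, roughly the order of magnitude of large US jackpots, and are not from the original text):

```python
# Expected value of a hypothetical $2 lottery ticket (illustrative figures).
ticket_price = 2.00
jackpot = 100_000_000
p_win = 1 / 300_000_000   # roughly the order of magnitude for big jackpots

expected_return = jackpot * p_win - ticket_price
print(f"Expected return per ticket: ${expected_return:.2f}")
# Negative: buying tickets loses money on average, even though someone wins.
```

The decision is bad in expectation no matter what any individual ticket does, which is exactly why the winner's good outcome doesn't certify the decision.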

As it happens, Avery was stupid — not because he made a decision that turned out to be wrong, but because he stuck to it in the face of increasing evidence to the contrary, even firing people who brought him bad news. But that’s a different bias.

Outgroup homogeneity bias

In response to the claim that all black people look alike, comedian Redd Foxx performed a monologue that listed some thirty or forty different shades of black, set against the single color of white. “No, dear white friends,” Foxx said, “it is you who all look alike.”

The proper name for this perception (in all directions) is “outgroup homogeneity bias,” the tendency to see members of our own group as more varied than members of other groups. Interestingly, this turns out to be unrelated to the number of members of the other group we happen to know. The bias has been found even when groups interact frequently.

Overconfidence effect

One of the most solidly demonstrated cognitive biases is the “overconfidence effect,” the degree to which your confidence in the quality and accuracy of your own judgment exceeds that actual quality and accuracy. In one experiment, people answered questions and rated their confidence in each answer. People who rated themselves 99% certain turned out to be wrong about 40% of the time.

The overconfidence gap is greatest when people are answering hard questions about unfamiliar topics. What’s your guess as to the total egg production of the United States? How confident are you in the guess you just made? (The average person expects an error rate of 2%, but the real error rate averages about 46%.)
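Calibration, the match between stated confidence and actual accuracy, can be computed directly from a set of answers. A minimal sketch using hypothetical responses (not data from the studies cited):

```python
# Each tuple: (stated confidence, whether the answer was actually correct).
# Hypothetical responses illustrating the overconfidence effect.
answers = [
    (0.99, True), (0.99, False), (0.99, True), (0.99, False), (0.99, True),
    (0.70, True), (0.70, False), (0.70, True), (0.70, False), (0.70, False),
]

stated = sum(conf for conf, _ in answers) / len(answers)
actual = sum(correct for _, correct in answers) / len(answers)

print(f"Average stated confidence: {stated:.0%}")
print(f"Actual accuracy:           {actual:.0%}")
print(f"Overconfidence gap:        {stated - actual:+.0%}")
```

A positive gap means overconfidence (the clinical psychologists); a gap near zero means good calibration (the weather forecasters, who get constant, rapid feedback on their predictions).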

Clinical psychologists turn out to have a high margin of overconfidence.

Weather forecasters, on the contrary, have none.

Previous Installments

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect.

Part 2 — Base rate fallacy, congruence bias, experimenter’s bias

Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect

Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias

Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia

Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias

Part 7 — False consensus effect, false memory, Forer effect, framing, fundamental attribution error

Part 8 — Gambler’s fallacy, halo effect

Part 9 — Hawthorne effect, herd instinct, hindsight bias, hyperbolic discounting

Part 10 — Illusion of asymmetric insight, illusion of control, illusory superiority, impact bias, information bias, ingroup bias, irrational escalation

Part 11 — Just-world phenomenon, loss aversion, ludic fallacy, mere exposure effect, money illusion

Part 12 — Need for closure, neglect of probability, “not-invented-here” (NIH) syndrome, notational bias

Tuesday, June 8, 2010

Failure *Is* An Option!

You probably won’t see the American Movie Classics channel run a festival of “Great Project Management Movies” any time soon, but if they did, Ron Howard’s motion picture Apollo 13, based on the real-life story, would be a natural candidate. Faced with a potentially disastrous accident, project teams overcome one potentially fatal barrier after another to bring the crew safely back to Earth, guided by flight director Gene Kranz’s mantra: “Failure is not an option.”

But of course, failure is an option. Sometimes, it looks like the most likely option of all.

The odds in the actual Apollo 13 disaster were stacked against a happy outcome, and everyone—including Gene Kranz—had to be well aware of that fact. One of the key scenes in the movie involves a team of engineers trying to figure out how to rig a CO2 filter out of miscellaneous junk.

  • The time constraint: before the CO2 levels overwhelm the astronauts. 
  • The performance goal: to work well enough to let the astronauts breathe during the long trip home. 
  • The budget: the junk on the table. 

And no one knows whether it’s even possible.

How do you balance the value of realism against the value of optimism in solving problems?

One way is to reject the false dilemma the question poses. Failure is not only an option, it’s a gateway to success ... if you fail in the right dimension.

If there is a trade-off to be made between the time constraint and the performance criteria, we know that ultimate failure—the death of the Apollo 13 astronauts—comes most rapidly from failure to meet the time constraint. That is, if we build a perfect CO2 filter, but we finish it too late, we’ve still failed. Perfect performance does not compensate for a failed deadline.

But wait! Why isn’t the reverse equally true? If you fail to meet the performance criteria, isn’t it irrelevant how quickly you fail to do so? Actually, it depends on the extent of the failure.

To illustrate, let’s look at this scenario: You’ve managed to come up with an inefficient partial solution that will last only half as long as it’s going to take to get the astronauts back home, but you’ve done so within the original time constraint. Do you take this solution? Absolutely!

Although you have failed to meet the performance goal for the project within the original time constraint, you’ve reset the game clock. With a day or more to work instead of mere hours, your chance of finding a solution to the remainder of the problem is that much greater.

The right kind of failure is not only an option, but sometimes a desirable one. In this project, we can’t accept a failure to meet the time constraint, but we can live with a partial performance failure and stay in this game.

This piece was written for Federal PM Focus, a newsletter published by Management Concepts. Click the title above to register for a free 30-day trial.

Adapted with permission from The Six Dimensions of Project Management: Turning Constraints into Resources, by Michael Dobson and Heidi Feickert, © 2007 by Management Concepts, Inc.  All rights reserved. www.managementconcepts.com/pubs