Sunday, January 31, 2010

Always Carry a Bomb When You Fly! (Part 8 of Cognitive Biases)

Always carry a bomb when you fly. One bomb on a plane is very improbable, so according to the law of averages, the chance of a second bomb is almost impossible!

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, and boy howdy, are there a lot of them! This week we’re covering the gambler’s fallacy and the halo effect.

Click here and scroll to the bottom for a list of previous installments.

Gambler's fallacy

The gambler’s fallacy is a cognitive bias that promotes the belief that if a random sequence shows a deviation from expected behavior, then it should be evened out by an opposite deviation in the future. But as anyone who’s thought their number was “due” knows, it ain’t necessarily so.

If you’ve flipped 5 heads in a row, the gambler’s fallacy suggests that the next coin flip is more likely to be tails than heads. And indeed, the chance of flipping 5 heads in a row is only 1/32. But the chance of flipping 4 heads followed by a tail (or any other specific sequence of 5 flips) is the same 1/32. Once five heads have been flipped, the next toss of the coin is the same 50%/50% proposition as all the others.
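If you’d rather test that than take my word for it, here’s a quick simulation sketch in Python (the code and names are mine, purely for illustration):

```python
# Flip six coins at a time; among the sequences that open with five
# heads, see how the sixth flip behaves. It hovers right around 50%.
import random

TRIALS = 1_000_000
streaks = heads_next = 0
for _ in range(TRIALS):
    flips = [random.choice("HT") for _ in range(6)]
    if flips[:5] == ["H"] * 5:          # five heads in a row (about 1 in 32)
        streaks += 1
        heads_next += flips[5] == "H"   # count heads on the sixth flip
print(f"{streaks} streaks; heads next {heads_next / streaks:.1%} of the time")
```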

So far, obvious enough, but there are two related fallacies and a couple of exceptions. The reverse gambler’s fallacy is the belief that if the universe is showing a predisposition toward heads, then heads are cosmologically more likely. Assuming the coin is fair (not one of those double-headed types), that’s equally false.

The inverse gambler’s fallacy (term coined by philosopher Ian Hacking), is the fallacy of seeing an unlikely outcome of a random process and concluding that the process must therefore have occurred many times before. If you roll a pair of fair six-sided dice and get 12, it’s wrong to suppose there’s any support for the hypothesis that these dice have been rolled before.

The gambler’s fallacy doesn’t apply when the probability of different events is not independent. If you draw a card from a deck (let’s make it a 4), then the chance of drawing another 4 is reduced, and the chance of drawing a card of another rank is increased. It also doesn’t apply if the outcomes aren’t equally probable. If those six-sided dice keep rolling boxcars, after a while it’s reasonable to suspect they may be loaded.
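The card example is easy to check exactly; a minimal sketch using Python’s fractions module:

```python
# Exact probabilities for the deck-of-cards example.
from fractions import Fraction

p_four_full_deck = Fraction(4, 52)  # drawing a 4 from a fresh deck: 1/13, ~7.7%
p_four_after_one = Fraction(3, 51)  # another 4 once one is gone: 1/17, ~5.9%
p_other_rank     = Fraction(4, 51)  # any one other rank after that draw: ~7.8%

print(p_four_full_deck, p_four_after_one, p_other_rank)
```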

The gambler’s fallacy is related to two other cognitive biases, the clustering illusion (part 5) and the representativeness heuristic. The latter bias is the belief that a short run of random outcomes should share the properties of a longer run. Out of 500 tosses of a fair coin, the number of heads and tails are very likely to balance out, but that doesn’t mean the same thing will hold true in a sequence of 5 or 10 tosses.
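Here too, a short simulation makes the point. In a sketch like the one below, roughly a third of 10-toss runs miss the 50/50 split by more than ten percentage points, while 500-toss runs essentially never do:

```python
# Compare how far short runs and long runs stray from 50% heads.
import random

for n in (10, 500):
    runs = 10_000
    off = sum(
        abs(sum(random.random() < 0.5 for _ in range(n)) / n - 0.5) > 0.10
        for _ in range(runs)
    )
    print(f"{n} tosses: {off / runs:.1%} of runs miss 50% by more than 10 points")
```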

Halo effect

Some years back, I was on a seminar trip in Texas and Louisiana. A huge storm shut down air traffic, and in the process I got separated from my luggage. The next day, I had to teach a seminar in blue jeans and a day-old dress shirt. The audience was very sympathetic — there was major flooding in Baton Rouge and several of them had disaster stories of their own to tell — and the seminar went well.

When I received my evaluation statistics a couple of weeks later, I was fascinated to find that my scores had dropped nearly 25% below my averages. It was certainly understandable that my scores for “Instructor’s appearance was professional” would drop, but there were drops in “Instructor had a good command of the material” and “The workbook contained information that will be of use to me after the seminar.”

That’s the halo effect, the tendency for people to extend their assessment of a single trait so that it influences assessment of all other traits.

There are other examples. In the 46 US presidential elections where the heights of both candidates are known, the taller candidate won the popular vote 61% of the time and the shorter 33% of the time. (In three cases, the candidates were of the same height, and in three other cases, the taller candidate won the popular vote but lost to the shorter candidate in the Electoral College — most recently in 2000.)

In 1977, psychologist Richard Nisbett ran a series of experiments on how students made judgments about professors, demonstrating not only how strong the effect is, but also how much people are unaware when they’re affected by it. At least five people at that seminar in Baton Rouge assured me they didn’t mind my jeans at all. Two even said they preferred a more casual look for the instructor.

But the numbers told the truth.

Tuesday, January 26, 2010

Sons are for Fathers the Twice-Told Tale

This week’s SideWise Insight is about fathers and sons.

My father, Odell F. Dobson, died peacefully in his sleep on January 21, 2010. He was 87 years old. His passing was not unexpected, so I had the opportunity to see him one final time over Thanksgiving. Although my father and I had a difficult relationship at best, the occasion of his death has served as an opportunity for contemplation, healing, and closure.

Odell F. Dobson came from hardscrabble beginnings in Virginia, where but for the timely intervention of the Second World War, he might have remained. He served in the Army Air Corps as a waist gunner on a B-24, was shot down over Germany, and spent the remainder of the war as a POW.

His true story has been incorporated into three books: Bruce Lewis’ nonfiction account in Four Men Went to War, and two of my alternate history novels with Doug Niles, Fox on the Rhine and Fox at the Front.

As a memorial, I published a 72-page issue of my occasional private magazine (fanzine), Random Jottings, available as a free PDF download here.

The following is an extension of the eulogy I gave at his memorial service in Decatur, Alabama.

* * *

Sons are for fathers the twice-told tale.

When my son James was born in 1995, I had been estranged from my father for several years, after a disastrous expedition to the Gettysburg battlefield. It was not the first time we’d had issues. Like fish and unwelcome guests, we could be around each other only for about three days at the most before the inevitable explosion. My father had “anger management issues” and could be a vicious verbal opponent, and I am very much my father’s son.

Mark Twain famously said, “When I was a boy of fourteen, my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty-one, I was astonished by how much he'd learned in seven years.”

There’s nothing unusual in a son having trouble with his father. It’s one of the reasons we sons eventually leave home — and in that sense, it’s a good thing, a necessary part of the manhood journey.

Fortunately, the requirement of the commandment is to “honor thy father and thy mother.” Loving, or even liking, is a different matter, but honoring is not optional. By adopting a trait or characteristic as our own, we do it honor. Sometimes that means we honor the wrong things.

To honor my father, I simply have to look inward, at the characteristics of Odell Dobson that I possess. Some make me a better man. Others I struggle against.

Sons are for fathers the twice-told tale.

* * *

My son James, 14 years old at this writing, has in the past year turned himself from a boy into a man. A young man, a junior man, a trainee man — but a man, not a boy any longer.

I only have a limited amount to do with the man he has become, and that’s good. I was never under any illusion I would be a perfect father, and I’ve wanted James to have a wider experience of life than I could ever have provided.

Sons are for fathers the twice-told tale.

* * *

What I didn’t expect is the extent to which my son has raised himself.

James is his own man, a clone of no one. I can see some of myself in him, but he contains multitudes. Among those multitudes, I can see the line of his ancestors.

In my management books, I use a lot of historical stories to illustrate contemporary points. “History doesn’t repeat itself,” Mark Twain also said, “but it rhymes.” The rhymes of history are where the great lessons hide.

And the rhymes of history helped me to unravel the essential truth of my father, and of myself, and of my son.

Sons are for fathers the twice-told tale.

* * *

I knew my father had grown up in poverty in a mill town in southern Virginia, and that his father — my grandfather — had abandoned his family sometime in the 1930s. I finally met the mysterious Robert Franklin Dobson in the early 1970s.

Here is the entire story of my relationship with my grandfather Dobson.

Rob Dobson had contacted my father out of the blue a few months earlier. He was dying of cancer, and he and my father reconciled as much as was possible. I stopped off in Danville, Virginia, on my way somewhere else. At the time, my grandfather was living with his fifth wife in a garret apartment without air conditioning.

He was drunk, and the place was littered with little Miller High Life bottles. He wore only a thin blanket in the summer heat. His surgical scars were showing, and the place smelled of death. He called me over, and when I got close, grabbed me by the shirt collar and pulled me close.

“You must be Michael,” he said, and I acknowledged that I was.

“I’m glad to meet you,” he continued. “I want you to know that you’re a Dobson. Don’t ever forget it. And Dobsons…” he paused, “…have big noses.”

After another pause, he added, “I’m the oldest Dobson, and I’ve got the biggest nose.” And it was true.

And then he gave me one more insight. “Don’t you ever believe in God,” he said, “because that SOB never did a damned thing for me.”

He then turned to his half-brother, who had brought me over, and demanded more beer. We left a few minutes after that.

He died two weeks later.

I never saw him again.

Sons are for fathers the twice-told tale.

* * *

In that encounter, I realized something about my father that could only be seen in the context of his father: whatever problems I might have had with my father, he had given better than he got.

He once said to my sister, “You three children have turned out spectacularly well, and I had nothing to do with it. I felt I had to work all the time.” But in that statement alone, he had far outshone his own father.

As I’ve mentioned, my father had a terrible temper, and it was scary to behold. More than once, I’ve seen violence in his eyes, and at least once, I felt like keeping a close eye on the Beretta that was always in his pocket. However, not once have I ever known him to raise his hand against any man, no matter how strong the feelings that raged inside him.

He gave better than he got.

* * *

And what about Rob Dobson — drunk, wife-beater, child-beater? Well, he has a story, too.

There’s a photograph of Rob Dobson’s father in my mother’s kitchen. John Dobson is wearing a derby and holding a cigar in his right hand. John couldn’t abide children, so he left his wife when Rob was young. Rob’s mother remarried, and they had a child, Rob’s half-brother.

And then she died.

The widower remarried, and the new couple also had a child. Rob was half-brother to one person in the house, and no kin to anyone else.

He was on his own at a very young age.

Rob Dobson built a successful business in North Carolina. He was making money, and setting up a better life for himself and his family. Then came the Great Depression, and he was lucky to have a job in the mills, though it was beneath him. His resentment found solace in a bottle.

Rob Dobson tried to give better than he got, but life broke him.

* * *

Putting my father’s story and my grandfather’s story in context has been of tremendous benefit to me. Understanding isn’t the same as forgiveness, but it doesn’t need to be. In some ways, understanding is even more important. I came to understand my father, to put his life and his actions in context.

And I knew it was okay that I would not be a perfect parent. My father raised the bar, and it was up to me to raise it again. It will be up to my son in turn to give better than he got.

Sons are for fathers the twice-told tale.


Sunday, January 17, 2010

I'm OK, You're Nuts (Part 7 of Cognitive Biases)

“All the world is queer save thee and me, and even thou art a little queer,” observed 19th-century social crusader Robert Owen. The question of whose thought is mainstream and whose is on the fringes is often quite contentious. As usual, it’s also influenced by cognitive bias.

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, and boy howdy, are there a lot of them! Links to the first six installments are provided below.

Today’s episode is brought to you by the letter “F.”

False consensus effect

I spent my teenage years deep in the heart of Red America: Decatur, Alabama. As late as the 1960s, it was the largest American community that still practiced prohibition. It wasn’t until several years after I left high school that the possession of alcoholic beverages in your own home was decriminalized. Decatur schools did not desegregate until my junior year. Just about everyone around me was extremely conservative and held some version of fundamentalist or evangelical Christian faith. Today, I live in Bethesda, Maryland, made famous in Bobos in Paradise as the spiritual capital of Blue (liberal) America. Montgomery County, Maryland, went 3-1 for Obama in 2008. And we’re in the more liberal part of the county.

In both settings, I’ve noticed the same phenomenon: a presumption that all normal, right-thinking people share the same basic outlook on life. In Alabama, where I was very much the exception, I noticed it very clearly. Here, where my personal and political values are comparatively mainstream, I notice it as well.

The false consensus effect is the degree to which you overestimate how much other people agree with you and see the world the same way. Whether your information sources tend toward NPR or toward Fox, it’s easier today than ever before to get all the news that fits your perspective. The more you see your own values front and center, the more they’re validated as normal — and the more out of touch and fringe-extremist people on the other side appear.
But that’s an illusion.

Although according to the Gallup organization, self-identified conservatives outnumber self-identified liberals by 40% to 21%, the combination of moderates and liberals tips the balance to 51% the other way. (As liberals know, there is no actual liberal party in the United States; in the Democratic Party, self-identified moderates outnumber self-identified liberals. Fully 22% of Democrats call themselves conservative, as opposed to 3% of Republicans who self-identify as liberal.)

In other words, no matter what you believe, at least half the nation disagrees with you. Although very conservative and very liberal perspectives both get a lot of press (often generated by the other side), only 9% of the American public self-identifies as “very conservative” and 5% as “very liberal.”

False consensus accelerates because people tend to live near and associate with those who agree with them on core issues, leading people to conclude that the universal attitude around them (whether it’s Bethesda or Decatur) must extend beyond the city borders. But there are a significant number of conservatives in Bethesda, and more liberals in Decatur than you’d think. (Several Alabama counties — not Decatur’s, mind you — consistently vote blue, though the state as a whole is clearly red.)

The false consensus effect dramatically complicates communication. People talk past one another, each unaware the other operates from a different paradigm. When people are confronted with evidence that the consensus is indeed false, the normal reaction is to conclude that those who do not agree are defective — blind, immoral, corrupt, under undue influence. Ad hominem abuse seems reasonable enough under such circumstances, and the cycle of viciousness rolls forward.

There’s a related bias known as pluralistic ignorance, in which people openly support a norm or belief they privately reject, for reasons ranging from the desire to fit in to fear of negative consequences for violating the norm. This, of course, provides even more reinforcement for false consensus. Over the last 40 years, I’ve had more than one classmate tell me that they agreed with far more of my political positions than they ever let on. That may be pluralistic ignorance, or it could be…

False memory (Confabulation)

There’s lying, and then there’s confabulation. In confabulation, your mind has created false memories about yourself or your environment. Sometimes imagination has been confused with memory, and sometimes one memory is confused with another. A person with a false memory isn’t telling the truth, but has no intent to lie.

Obviously, in significant degrees this can be a sign of psychological or neurological impairment, but most of us star in our own private Rashomon.

A number of cognitive biases affect your memory.
  • Consistency bias (remembering your past attitudes and behavior as resembling your present ones)
  • Cryptomnesia (mistaking imagination for memory, covered in Part 5)
  • Rosy retrospection (rating past events as better than they appeared at the time)
  • Suggestibility (ideas suggested by a questioner are mistaken for memory)
Cognitive biases also adjust the memory to fit preconceptions or other fixed ideas.

Treat your memory with skepticism. Interrogators of all stripes know that eyewitness accounts are hugely unreliable, confessions often meaningless, and detailed accounts subject to huge bias.

If it’s important, you need to confirm your memory with other sources. Just because you remember it clearly doesn’t mean it’s true.

Forer effect (Barnum effect)

One year at a SkillPath trainer’s conference, there was a speaker who claimed he could communicate with departed loved ones, and he put on quite an impressive show involving one member of the audience, who was blown away by how accurate the speaker was.

Even skeptics have moments in which a random astrology squib in the daily newspaper seems accurate, and I’ve had a few fortune cookie experiences that are nothing short of amazing. (My favorite: “You have great power and influence over women. Use it wisely.”)

Meet the Forer effect.

The Forer effect explains why mass-market astrology, personality tests, and fortune telling have such an avid audience of true believers. This cognitive bias makes people tend to give high accuracy ratings to descriptions of their personality that are supposedly tailored specifically for them, but are in fact vague and general enough to apply to a wide range of people.

In 1948, psychologist Bertram R. Forer gave a personality test to his students, then gave each one a “unique” analysis based on the test results. Each student got the same thing, which read:

You have a great need for other people to like and admire you. You have a tendency to be critical of yourself. You have a great deal of unused capacity that you have not turned to your advantage. While you have some personality weaknesses, you are generally able to compensate for them. Disciplined and self-controlled outside, you tend to be worrisome and insecure inside. At times you have serious doubts as to whether you have made the right decision or done the right thing. You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations. You pride yourself as an independent thinker and do not accept others' statements without satisfactory proof. You have found it unwise to be too frank in revealing yourself to others. At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved. Some of your aspirations tend to be pretty unrealistic. Security is one of your major goals in life.

When asked to rate how well this described them, the average rating was 4.26 out of 5.0.

These kinds of generic statements that appear to have insight are known as Barnum statements, after P. T. Barnum. Further research has shown you can improve the accuracy rating people give by making sure the following three things are true:
  • The subject believes that the analysis applies only to him
  • The subject believes in the authority of the evaluator
  • The analysis lists mainly positive traits

Fundamental attribution error (Correspondence bias, Attribution effect)

People on the left experienced the joys of schadenfreude when Rush Limbaugh was accused of illegally obtaining prescription drugs after having himself spent years arguing that those convicted of drug crimes should be sent to jail.

How did he and his supporters rationalize the different treatment for himself? Well, Rush Limbaugh is a fine citizen. He simply suffered from severe back pain and became addicted to prescription painkillers. It was the situation, not the man himself.

But all these other drug users whom we don’t know, well, their problem is more likely to be a moral defect. Their personalities and characters lead them into terrible behavior, and we as a society have no choice but to make them pay for their crimes.

The cognitive bias known as fundamental attribution error is our tendency to ascribe our own bad behavior, or bad behavior in those we like, to the circumstances or situation. We tend to believe, however, that bad behavior on the part of those we dislike or don't know is related to some attribute of personality or character. This creates circular logic loops that are difficult to break. “The reason so many [group] are unemployed is that they're lazy. That's why I don't hire them.”

When the unemployed Alfred Doolittle in Pygmalion talks about the difference between the “deserving poor” and the “undeserving poor,” he adds:

I'm one of the undeserving poor: that's what I am. Think of what that means to a man. It means that he's up agen middle class morality all the time. If there's anything going, and I put in for a bit of it, it's always the same story: “You're undeserving; so you can't have it.” But my needs is as great as the most deserving widows that ever got money out of six different charities in one week for the death of the same husband. I don't need less than a deserving man: I need more. I don't eat less hearty than him; and I drink a lot more. I want a bit of amusement, cause I'm a thinking man. I want cheerfulness and a song and a band when I feel low. Well, they charge me just the same for everything as they charge the deserving. What is middle class morality? Just an excuse for never giving me anything.

Alfred Doolittle is a delightful caricature, but real people are complex mixtures of character and environment. Attributing 100% of behavior to one or the other, except in the most extreme of circumstances, is a dangerous and hurtful mistake. Error is often unavoidable, so my own goal is to err on the side of generosity.

Previous Installments

Part 1 — Bias blind spot, confirmation bias, déformation professionnelle, denomination effect, moral credential effect
Part 2 — Base rate fallacy, congruence bias, experimenter’s bias
Part 3 — Ambiguity aversion effect (Ellsberg paradox), choice-supportive bias, distinction bias, contrast effect
Part 4 — Actor-observer bias, anchoring effect, attentional bias, availability cascade, belief bias
Part 5 — Clustering illusion, conjunction fallacy, cryptomnesia
Part 6 — Disposition effect, egocentric bias, endowment effect, extraordinarity bias

Tuesday, January 12, 2010

Risk Management and Global Warming, Part 2

In last week’s installment, I laid out a risk management approach as it applied to global warming.

In summary, a risk is some event (threat or opportunity) that has a probability of happening, and has an impact if it does happen. The value of the risk is the probability times the impact (R=PxI). Compare the value of the risk to what it would cost to shrink it or make it go away. Sometimes it’s cheaper to deal with the risk, sometimes it’s cheaper to accept it.
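Expressed as code, that recap fits in a few lines. The function names and dollar figures below are mine, chosen purely for illustration:

```python
def risk_value(probability, impact):
    """Value of a risk: R = P x I."""
    return probability * impact

def worth_mitigating(probability, impact, mitigation_cost):
    """Is it cheaper to shrink the risk than to live with it?"""
    return mitigation_cost < risk_value(probability, impact)

# A 10% chance of a $1,000,000 loss is a $100,000 risk, so paying
# $25,000 to make it go away is a good trade.
print(worth_mitigating(0.10, 1_000_000, 25_000))  # True
```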

When it comes to global warming, most of us have some doubt. To have an educated opinion about something, you need an education. I’m not a climate scientist, nor do I play one on TV. Any opinion I have on the subject has to be tinged with uncertainty.

But risk is all about uncertainty.

More About Probability

The sum of all the probabilities of the possible outcomes is one, or 100%.

  • What’s the chance of flipping heads or tails on a coin? 100%. (Yes, I’m neglecting the possibility it lands on the edge.)
  • What’s the chance of rolling a 1, 2, 3, 4, 5, or 6 on a six-sided die? Well, 100%.
  • What’s the chance that the effect of any global warming turns out to be either positive, nothing, minor, moderate, serious, or catastrophic? ______%
Oh, let’s not always see the same hands.

What's Your Risk Number?

If you’re not persuaded by the science, if you’re unsure what level of action — if any — is prudent, here’s how to calculate your own risk value. That gives you a financial base to work from in evaluating proposed responses.

Let me be clear up front — this is an extremely simple model that leaves out or abstracts hundreds of important considerations. (I'm open to suggestions.) This model has one purpose: to provide a number that has enough meaning to help you compare the cost of potential solutions to the cost of accepting whatever risk you may think is present.

The number also shows where you stand in comparison with someone else, which can help bring clarity to a discussion. If person A thinks the risk should be valued at $25 billion tops, and person B thinks a conservative valuation of the risk is more like $5 trillion, then we know something about how each person sees the world.

The common ground in any argument is always and necessarily the lowest number on the table. In the discussion above, the common ground is any response to the risk that costs less than $25 billion. (A lot of people who oppose action support continued study, which is certainly part of the family of legitimate risk responses.)

Download the Spreadsheet

I've developed a spreadsheet that does this automatically for you. The link is below. Just click to download the Excel file to your desktop.

Click here to download my Global Warming Risk Calculator spreadsheet. (Permission is given to distribute it freely; in fact, please do.)

Using the Global Warming Risk Calculator

To perform this exercise, you have to divide 100% (the sum total of all the possible outcomes) among the following categories, based on how likely you think each outcome is. I’ve put down a potential cost number for each outcome; feel free to change the number if you evaluate the impact differently. And, while you’re at it, if you think there should be more outcome categories, add those too. But you still can only divide a total of 100% among them.

For purposes of illustration, I’ve added in my own personal probability guesses. As you can see, I’m not exactly a dyed-in-the-wool true believer.
  • Positive. Climate change is a net benefit for the world. (Economic impact: -$1 trillion — save or make money) (My guess: 0.9%, with a leftover 0.1% chance for Very Positive, -$10 trillion)
  • Nothing happens. (Economic impact: $0) (My guess: 32%)
  • Minor. There are some spotty problems, but they’re manageable. (Economic impact: $1 trillion) (My guess: 30%)
  • Moderate. There is a significant increase in major weather catastrophes, and regional crises become more common. (Economic impact: $5 trillion) (My guess: 30%)
  • Serious. Global warming impact becomes the dominant geopolitical issue of the time. (Economic impact: $10 trillion) (My guess: 5%)
  • Catastrophic. There are massive die-offs, huge geographic dislocations, and major coastal flooding. (Economic impact: $25 trillion) (My guess: 1.9%, with the remaining 0.1% for the $100 trillion life-as-we-know-it-is-over jackpot.)
The next step is to calculate the expected monetary value (EMV) of the risk. That’s the PxI for each of the steps above.
  • Positive (= 0.9% x -$1 trillion, plus 0.1% x -$10 trillion)
  • Nothing (= 32% x $0)
  • Minor (= 30% x $1 trillion)
  • Moderate (= 30% x $5 trillion)
  • Serious (= 5% x $10 trillion)
  • Catastrophic (= 1.9% x $25 trillion, plus 0.1% x $100 trillion)
Add those answers together, and that’s how much money I think the risk is worth. When you plug in your numbers, you’ll get your valuation of risk.
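If you want to check the arithmetic without downloading anything, the whole calculation fits in a few lines of Python. I’ve pre-loaded my own guesses; substitute yours:

```python
# Each outcome is (label, probability, impact in $trillions); negative = net gain.
outcomes = [
    ("Very positive", 0.001, -10),
    ("Positive",      0.009,  -1),
    ("Nothing",       0.320,   0),
    ("Minor",         0.300,   1),
    ("Moderate",      0.300,   5),
    ("Serious",       0.050,  10),
    ("Catastrophic",  0.019,  25),
    ("Worst case",    0.001, 100),
]
assert abs(sum(p for _, p, _ in outcomes) - 1.0) < 1e-9  # must total 100%

emv = sum(p * impact for _, p, impact in outcomes)
print(f"Risk value: ${emv:.2f} trillion")  # Risk value: $2.86 trillion
```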

On the second page of the spreadsheet, you'll see a UN-prepared chart on the cost of reducing carbon emissions by 2050 to achieve various targets. I've turned those GDP percentages into actual dollars, then converted those back to 2010 present value numbers.
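That conversion is standard present-value discounting. Here’s the formula in miniature; the 3% discount rate is my stand-in assumption, not necessarily the rate the spreadsheet uses:

```python
def present_value(future_cost, years, rate=0.03):
    """Discount a future cost back to today's dollars (rate is an assumption)."""
    return future_cost / (1 + rate) ** years

# e.g., $10 trillion of spending in 2050, viewed from 2010 at 3%:
print(f"${present_value(10e12, 40) / 1e12:.2f} trillion")  # about $3.07 trillion
```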

The spreadsheet will automatically compare your valuation of the risk against the various UN targets, and tell you whether, according to your own risk assessment, the cost is too much or worth considering.

Try It Yourself!

I’ll wait.

.
.
.
.
.
.
.
.


How did you value the risk? Please post it as a comment. I'd love to know what you think.

Now What?

What, in your opinion, does that number suggest to you?

My global warming risk number is $2.86 trillion, and as you can see I’m not a passionate believer in imminent global catastrophe. That’s quite a bit of money; I was a little surprised at the answer when I did the math. (As my friend Misha pointed out in his comment on the previous installment, with that much money, at least some should surely be spent on mitigating secondary impact, as costs often fall heavier on the poor.)

Comparing that number to the cost of stabilizing CO2 emissions, it looks like I'm in favor of action, though that 450 ppm target is a close thing. But for 550 ppm, the numbers look good to me.

Obviously, I’d just as soon we don’t spend all that money unless we really have to. It’s always okay to spend less if that solves the problem. But with that much at stake, it seems to me that prudence argues for at least some significant action.

It also argues for continued research and refinement. Those probability and impact assessments won’t stay static. They may get worse, or they may get better.

And as they change, so too should our own opinions and beliefs.

And In The End...

I'm neither expecting to nor intending to change anyone's opinion on this question. I'm simply looking for common ground and a different way to frame this intractable issue.

What's your number? I'd really like to know.

Monday, January 4, 2010

Risk Management, Cognitive Bias, and the Global Warming Debate

The debate on global warming tends to revolve completely around the science. Is it good? Is it bad? Is it meaningful? Is it corrupt? Everyone has an opinion on the quality of the science, and once those opinions are formed, they’re almost impossible to shake.

A wide variety of potential cognitive biases complicate the picture. Notice there’s enough here for everybody — no one’s being singled out.

Base rate fallacy — ignoring statistical data in favor of particulars
Confirmation bias — interpreting information to confirm your preconceptions
Experimenter’s bias — with about sixty subsets
Focusing effect — putting too much emphasis on a single aspect of a situation or event
Framing — viewing through a perspective or approach that is too narrow
Hyperbolic discounting — the preference for more immediate payoffs over the long term
Irrational escalation — making irrational decisions based on rational decisions in the past, or to justify actions already taken
Information bias — seeking more information even when it cannot affect action or decision

…the list goes on. Recognize some of these biases? If you’re like most of us, you recognize them in the other side more than you see them in yourself or those who agree with you.

Part of the reason why cognitive bias is at work is that the question isn’t really clear. We’re all arguing about the science, though few of us are truly entitled to an educated opinion on the subject.

But what’s the question?

It’s not about whether a scientific opinion is correct or incorrect. That sort of thing only interests specialists. No, the question has to do with what (if anything) we should do about it, based on the potential cost and consequences.

In other words, it’s a question of risk management. And to the extent that it’s a question of risk management, it’s phrased wrong.

A risk, as you’ll remember, is a future event that has some probability of happening and that, if it does happen, will have a meaningful impact on your situation. If the impact is negative, it’s a threat. If the impact is positive, it’s an opportunity.

Risks, like Gaul, can be divided into three parts. The first part is probability. How likely is it that the risk will happen?

The second part is impact. If the risk should happen, what would be its effects?

Those two parts combine in the formula R = P x I to calculate a risk score, the value of the risk.

We care about the value of the risk because that helps us make a rational decision about the third element: the cost of reducing or eliminating the negative risk, or the cost of obtaining or exploiting the positive risk.

Probability

The argument about the science of climate change is at root an argument about probability. The process of science involves collecting data, discovering patterns in that data, and developing and testing hypotheses and theories about that data. Over time, the process of peer review creates a consensus in the scientific community, and at any moment in time, that’s the state of scientific knowledge.

Let’s sidestep the discussion about whether the consensus of current scientific knowledge is accurate or inaccurate, and merely assess how our own feelings and opinions influence our judgment of probability. Taking as a guide the legal standards of proof, we might fall somewhere on the following spectrum. For a rough calculation, I’ve put in some percentages.

Degree of Belief (Probability You Think It's True)
True beyond any doubt (99+%)
True beyond a reasonable doubt (95%)
True by the preponderance of the evidence (75%)
Unable to tell (50%)
False by the preponderance of the evidence (25%)
False beyond reasonable doubt (5%)
False beyond any doubt (<1%)

Pick the statement that matches what you believe about the science; the percentage beside it is a corresponding figure for how likely you think it is that the threat is real. If you don't like the choices, add one of your own and choose your own probability number.

Notice that the evidence won't stand still. Over time, science will inevitably get better, regardless of your perspective. Either the evidence of catastrophic global climate change will mount so high no sane person can deny it, or global warming will become the Comet Kohoutek of crises, a non-event. Or maybe something in between.

The problem is by the time the facts become incontrovertible, the moment for decision will have passed. If we guess wrong, there are two possibilities: (a) we will be in a significantly worse position to deal with the resultant impact, or (b) we will have wasted significant resources.

Impact

This leads us to the second item, the question of impact. Impact is the effect of the threat or opportunity if it happens — even if you believe the chance is remote at best. So we have to set probability aside temporarily. We’ll come back to it in a moment.

In addition to arguments about how likely it is that the scientific consensus on global warming is in fact correct, there is a range of opinion as to what that means in practical terms: a range of impact. I've specified a set of potential impact levels and set costs for each. Remember, the issue isn't whether these are going to happen. They're simply descriptions of the potential level of impact that different parties suggest are possible.

So choose from the list below. What, in your opinion, is the worst possible potential outcome if global warming happens?

  • Catastrophic. Global warming effects will kill tens or hundreds of millions of people directly and indirectly, wipe out tens of thousands of species, and be an economic and social catastrophe to those who survive. Repair or rebuilding may or may not be possible. (Cost = $Quadrillions)
  • Serious. Major weather events, such as hurricanes and tsunamis, will be more prevalent; tens and hundreds of thousands will die; economies will suffer. (Cost = $Trillions)
  • Moderate. Managing environmental issues will be a consuming concern, but better management and improved technology will make this a background cost. (Cost = $Billions)
  • Minor. Insignificant costs. (Cost < $Millions)
Notice the impact could also be positive.

Value of the Risk

Just because you aren't convinced the evidence in favor of a risk is certain doesn't mean you don't act on it. We take everyday precautions to avoid low probability or highly uncertain risks with potentially high impact all the time — every time we drive on a freeway, for example. But there's a limit. How does the value of the risk compare to the cost of mitigation?

The value of the risk, as we’ve noted, is the probability times the impact. From our earlier work, we can construct this table. The risk score in each case is what you should reasonably be willing to spend if necessary to mitigate the degree of risk you personally believe is present.

Catastrophic
  • 95% confident: quadrillions of dollars
  • 75% confident: low quadrillions
  • 50% confident: $1 quadrillion
  • 25% confident: high trillions
  • 5% confident: low trillions

Serious
  • 95% confident: up to $1 quadrillion
  • 75% confident: $750 trillion
  • 50% confident: $500 trillion
  • 25% confident: $250 trillion
  • 5% confident: $50 trillion

Moderate
  • 95% confident: up to $1 trillion
  • 75% confident: $750 billion
  • 50% confident: $500 billion
  • 25% confident: $250 billion
  • 5% confident: $50 billion

Minor
  • 95% confident: possibly a few billion
  • 75% confident: less than $1 billion
  • 50% confident: $500 million
  • 25% confident: $250 million
  • 5% confident: low millions
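Every entry in that table is just the confidence percentage times the top of the impact range. A two-line sketch, using the ceilings the table appears to assume:

```python
def risk_score(confidence, impact_dollars):
    """Risk score: probability times impact."""
    return confidence * impact_dollars

print(f"${risk_score(0.50, 1e15) / 1e12:,.0f} trillion")  # "Serious" at 50%: $500 trillion
print(f"${risk_score(0.25, 1e12) / 1e9:,.0f} billion")    # "Moderate" at 25%: $250 billion
```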

Cost of Mitigation

The value of the risk is what you’re willing to spend if necessary. Depending on how you assessed probability and impact, you ended up with some amount of money (perhaps $0) that's appropriate as a maximum to spend on the risk.

Of course, you need to compare that to the cost of mitigating or eliminating the risk. Sometimes, it’s not worth it. If I offered to save you from a $1,000 risk in exchange for $2,000, it’s not much of a deal. In general, if the cost of getting rid of the risk exceeds the cost of living with it, you’re better off living with it.

On the other hand, if I can save you from a $1,000 risk (say, a 25% chance of losing $4,000) for only $500, that's a pretty good deal. If the risk happens, you've saved $3,500. But if the risk doesn't happen, you're still out $500.

It's true that not all costs of a risk (or costs of a risk mitigation) can be easily translated into dollar terms — or even should be. That doesn’t change the basic principle, though: the cost of dealing with the risk has to be less than the cost of living with the risk.

There’s an important qualification when it comes to risk mitigation. Some risks you can get rid of altogether if you’re willing to pay the price. Other risks you can reduce, but not eliminate. You can lower the probability of the event occurring, or you can lower the impact if it should occur.

That’s not a bad thing, mind you, but you have to take into account the residual risk when deciding if the strategy is worth it. The value of the mitigation is the difference between the cost of the original risk and the cost of the residual risk.
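To make that concrete with the $1,000 risk from earlier (a 25% chance of losing $4,000): suppose a $500 fix cuts the probability to 10% instead of to zero. The 10% figure is invented for illustration:

```python
original = 0.25 * 4_000                 # $1,000: the risk as it stands
residual = 0.10 * 4_000                 # $400: the risk left after mitigation
mitigation_value = original - residual  # $600 of risk actually removed
cost = 500
print(cost < mitigation_value)          # True: $500 buys $600 of risk reduction
```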

The Right Question

To have a reasoned discussion on the subject of global warming, you have to figure out where you are on five issues, not merely one.

1. How correct is the scientific consensus on global warming?
2. What is the impact of global warming if it should occur?
3. What is the value of the risk (probability times impact)?
4. What is the cost of mitigating or eliminating the risk, and how much residual risk would remain?
5. On balance, what level of action on global warming (if any) is warranted?

To change someone’s opinion, you have to change that person’s evaluation of at least one of these issues.

As people on all sides have found, it’s nearly impossible to change anyone’s evaluation of the quality of the science, which is our probability benchmark. There’s often more consensus about what global warming might mean if it happens, which is why it’s so important to separate the discussion of probability from the discussion of impact.

But the real opportunity has to do with the issue of cost. The best current framing of the debate comes from the argument that dealing with global warming and environmental issues can be relatively low in cost, or ideally profitable.

If the cost to deal with global warming is low enough, it's a good idea even for those who think the probability is low.