Tuesday, March 15, 2011

Fukushima Number One

As reported in my article "Homer Simpson: Man of the Atom" in Trap Door magazine, I once got to run a nuclear reactor — admittedly, a low-power one used only for training students. This hardly makes me an authority on nuclear power, but I do know something about risk management.

Like many of you, I'm following the evolving Fukushima Dai-ichi Nuclear Power Station story with great interest. I'm a pro-nuclear, safety-conscious environmentalist, if that makes any sense. I think a lot of anti-nuclear sentiment is rooted in emotion rather than analysis, and it contains the same anti-science bias I object to so strongly when it's practiced by the right wing.

That doesn't make the case for nuclear power a slam dunk by any means. The downsides are obvious and substantial, and the tendency to rely on nuclear power generation to supply plutonium for other purposes has led to what seem to me to be false choices. I'm following with interest the discussion of thorium reactors, and I think the investment we're making in fusion is ridiculously low. That doesn't mean I don't like wind and solar as well. But all forms of power impose risks and costs.

The question in risk management isn't whether a proposed solution has drawbacks (technically known as secondary risks). Most proposed solutions, regardless of the problem under discussion, tend to have secondary risks and consequences.

The three questions about secondary risk that matter are:

  1. How acceptable is the secondary risk? The impact and likelihood of secondary risks can vary greatly. Some secondary risks are no big deal. We accept them and move on. Others are far more serious. A secondary risk can indeed turn out to be much greater than the primary risk would have been.
  2. How manageable is the secondary risk? A secondary risk, like a primary one, may be quite terrible if you don't do anything about it. The key word, of course, is "if." What can be done to manage or reduce the secondary risk? 
  3. How does the secondary risk compare to other options? As I've argued elsewhere, the management difference between "bad" and "worse" is often more important than the difference between "good" and "bad." If the secondary risk of this solution is high, and if you can't do anything meaningful to reduce it, you still have to compare it to your other options, whatever they are.
In the case of nuclear power, the unmitigated secondary risk is unacceptably high. But all that demonstrates is that the risk needs to be mitigated, that is, reduced to some acceptable level. Ideally, that level is zero, but that may not be possible, and it may not be cost-effective to reduce the risk beyond a certain point. The leftover risk, whatever it is, is known as residual risk. Residual risk is what we need to worry about. As with secondary risk, the three questions of acceptability, manageability, and comparison help us judge the importance of the residual risk.
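To make the comparison concrete, here's a toy sketch of how residual risks might be lined up across options. The figures are invented purely for illustration; they are not estimates for nuclear power or any other real technology.

```python
# Toy residual-risk comparison. Every number below is hypothetical,
# chosen only to illustrate the three questions, not to describe any
# real power technology.

options = {
    # name: (annual probability of the bad event,
    #        impact if it happens, in dollars,
    #        fraction of the risk a realistic mitigation removes)
    "Option A": (0.01, 5_000_000_000, 0.95),
    "Option B": (0.20, 50_000_000, 0.50),
}

for name, (prob, impact, mitigated_fraction) in options.items():
    unmitigated = prob * impact                        # expected annual loss, no mitigation
    residual = unmitigated * (1 - mitigated_fraction)  # expected annual loss after mitigation
    print(f"{name}: unmitigated ${unmitigated:,.0f}/yr, residual ${residual:,.0f}/yr")
```

The point of the exercise is that the decision turns on the residual numbers (and on what the mitigation costs), not on whether any option can be driven all the way to zero.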

We make one set of risk decisions at the outset of the project. We decide which projects we want to do; we decide what overall direction and strategy we will follow; and we decide what resources to supply. All these decisions are informed by how people perceive the risk choices.

As the project evolves, the risk profile changes. Some things we worry about turn out to be non-issues, and other times we are blindsided by nasty surprises. Our initial risk decisions are seldom completely on target, so they must evolve over time.

When disaster strikes, suspicion automatically and naturally falls on the risk planning process. Were project owners and leaders prudent? Armed with the howitzer of 20-20 hindsight, critics treat whatever did happen as presumptive proof of incompetence on the part of those who failed to anticipate it. Sometimes that's a fair judgment. Other times, not so much.

I'm still working out what I think about the Fukushima case, but some initial indications strike me as positive when it comes to evaluating the quality of the risk planning. The basic water-cooled design of Fukushima made a Chernobyl-style outcome impossible. The partial meltdown didn't rupture the containment vessel, and although the cleanup will be messy and expensive, the contamination is not likely to spread outside the immediate area.

The effects of the radiation may not be known for some time, but even those have to be put into perspective. Non-nuclear power plants cost lives too, even though you don't hear about those disasters as often. A quick Google search turned up the following:

  • September 2010: Burnsville, Minnesota (explosion, no deaths)
  • February 2010: Connecticut (5 dead)
  • June 2009: Mississauga, Ontario
  • February 2009: Milwaukee (6 burned)

And, of course, several thousand people a year die mining coal.

Tuesday, January 18, 2011

A Credit To His Race (The final installment of Cognitive Biases)

At long last, we reach the end of our series on Cognitive Biases. In this installment, we'll study the ultimate attribution error, the valence effect, the von Restorff effect, wishful thinking, and the zero-risk bias.


Ultimate Attribution Error

A phrase I used to hear from time to time in my Alabama days was, “He’s a credit to his race.” It was never used to refer to a white person, of course, but only to blacks. On the surface, it appears to be a compliment, but it’s an example of the ultimate attribution error.

In the ultimate attribution error, people view negative behavior by members of an outgroup as typical of the group, and positive behavior as an exception to the norm. It's related to the fundamental attribution error, in which we explain our own behavior as a reaction to circumstances and other people's behavior as a matter of basic character, and it is closely tied to stereotyping. The ultimate attribution error is one of the basic mechanisms of prejudice.

Valence Effect

In psychology, valence refers to the positive or negative emotional charge of a given event or circumstance. The valence effect is a probability bias in which people overestimate the likelihood of good outcomes relative to bad ones: it's the basic mechanism that stimulates the sale of lottery tickets.

Numerous studies demonstrate the valence effect. In one, people assigned a higher probability to drawing a card with a smiling face than to drawing one with a frowning face from a shuffled deck.

The valence effect can be considered a form of wishful thinking, but it has been shown in some cases that belief in a positive outcome can increase the odds of achieving it: you may work harder or refuse to give up as early.

Von Restorff Effect

First identified by Dr. Hedwig von Restorff in 1933, this bias (also called the isolation effect) predicts that an item that "stands out like a sore thumb" (called distinctive encoding) is more likely to be remembered than other items. For instance, if a person examines a shopping list with one item highlighted in bright green, he or she will be more likely to remember the highlighted item than any of the others.

Wishful Thinking

This popular cognitive bias involves forming beliefs and making decisions based on what you would like to be true rather than on evidence, rationality, or reality. All else being equal, the valence effect holds: people predict that positive outcomes are more likely than negative ones.

There is also reverse wishful thinking, in which someone assumes that because it’s bad it’s more likely to happen: Murphy’s Law as cognitive bias.

Wishful thinking isn't just a cognitive bias, but a logical fallacy: I wish that P were true (or false); therefore, P is true (or false). It's related to two other fallacies that are reciprocals of one another: negative proof and the argument from ignorance.

In negative proof, the absence of certainty on one side of the argument is taken as proof of the opposite side: climate scientists cannot say with 100% certainty that their claims about global warming are true; therefore, those claims must be false. The reciprocal fallacy is the argument from ignorance: no one can be sure that there is no God; therefore, there is a God.

Zero-Risk Bias

Since 2000, terrorist attacks against the United States or Americans abroad have killed about 3,250 people, the vast majority of them on 9/11. Divided across a population of roughly 300 million, that works out to an annual risk on the order of one in a million for any given American.

The Transportation Security Administration consumes about $5.6 billion a year. Its job is to reduce the chance of terrorist attacks on transportation infrastructure, primarily aviation, to zero. Let's assume it is completely effective in that mission. Against roughly 325 deaths per year (the decade's 3,250 spread evenly), that works out to a cost of about $17 million per life saved.

Perhaps that's a completely reasonable price to pay to save a human life. From a logical point of view, though, you have to consider what else $5.6 billion a year might accomplish. In the United States, about 420,000 people die in car accidents over a ten-year period. If the same money could eliminate 100% of the risk of death from aviation terrorism, or 10% of the risk of death in car accidents, which risk would you choose to attack?
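A rough back-of-the-envelope calculation, using only the round numbers quoted above and annualizing both death tolls over the decade, makes the comparison stark:

```python
# Back-of-the-envelope comparison using the post's round figures,
# annualized over a ten-year period. These are illustrative numbers,
# not official statistics.

ANNUAL_BUDGET = 5.6e9                    # TSA budget per year

terror_deaths_per_year = 3_250 / 10      # ~325 terrorism deaths per year
car_deaths_per_year = 420_000 / 10       # ~42,000 car-accident deaths per year

scenarios = {
    "Eliminate 100% of aviation-terrorism deaths": terror_deaths_per_year * 1.00,
    "Eliminate 10% of car-accident deaths": car_deaths_per_year * 0.10,
}

for label, lives_saved_per_year in scenarios.items():
    cost_per_life = ANNUAL_BUDGET / lives_saved_per_year
    print(f"{label}: ~{lives_saved_per_year:,.0f} lives/year, "
          f"about ${cost_per_life:,.0f} per life saved")
```

On these assumptions, the car-accident option saves roughly thirteen times as many lives per dollar.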

Common sense argues for the 10% reduction in car accident deaths, but the zero-risk bias argues the opposite: it's the preference for completely eliminating one risk, however small, over making a larger reduction in another. It values the certainty of zero residual risk over the greater overall benefit.

There are other arguments that can be made in support of anti-terrorist activities, but the zero-risk bias is also at work here, and it leads to faulty decisions.


______________________________________________



With this installment, our long march through the wilds of cognitive bias comes to an end. I deeply appreciate the many insightful comments you’ve provided.


And now for something completely different…