Tuesday, December 27, 2011

A Cute Angle (Fallacies/Red Herrings Part 19)


Part 19 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week, the Genetic Fallacy.

Genetic fallacy

A guy I went to high school with used to claim that calling someone “cute” was an insult, because “cute” originally meant “bowlegged.” According to the OED, that’s wrong. “Cute” is an aphetic (initial vowel dropped) version of “acute,” and originally referred to someone sharp, clever, or quick-witted. A secondary meaning lists “cute” as a synonym of cur, a worthless dog.

Of course, that’s not the sense in which we most commonly use the word today. If I refer to someone as “cute,” I don’t mean bowlegged, clever, or dog-like — I’m usually referring to someone’s physical appearance. (There’s a sarcastic version — “Are you being cute?” — that refers back to the original meaning, but tone of voice usually makes it clear which meaning we intend.)

The genetic fallacy is the argument that because a word or phrase originally meant something, it must still carry that meaning today, regardless of how usage has evolved. The legend that a “rule of thumb” describes the maximum thickness of a stick with which it was permissible for a man to beat his wife has been debunked repeatedly, but even if it were true, that’s not how we use the term today.

Genetic fallacies aren’t always about language. Australia may have been settled in part by British criminals (as was Georgia), but any conclusion about today’s Australians can’t rest on the legal status of the colony’s founders.

Tuesday, December 20, 2011

We Stand on the Shoulders of Pygmies (Fallacies/Red Herrings Part 18)


Part 18 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly.

Chronological snobbery

We stand, Newton observed, on the shoulders of giants, but intellectually we tend to treat them as pygmies. The practice, for example, of using leeches in medicine isn’t just medieval, it’s positively ancient, with citations going back 2,500 years. When leeching went out of fashion (the late 19th century), it was obvious in retrospect how stupid these ancients were. After all, in those days they still believed the earth was flat.

The red herring of chronological snobbery is the argument that because A is an old argument, dating back to when people believed the obviously false B, A must therefore also be false. The fact that some ancients (though fewer than you’d suppose) believed the earth was flat doesn’t in itself constitute a valid argument against any other ancient idea. If you want to discredit A, you have to show it’s false: the proof that B is false may be valid, but it’s utterly beside the point.

Leeches, after all, came back into medical fashion in the 1980s. It turns out that leeches are helpful in the aftermath of microsurgeries, promoting healing by allowing fresh, oxygenated blood to reach the area. The fallacy of chronological snobbery would have led investigators away from looking at a clearly outmoded idea.

Tuesday, December 13, 2011

Anything You *Don’t* Say May Be Held Against You! (Red Herrings, Part 17)


Part 17 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week we cover the argument from silence.

Argumentum e Silentio

If I refuse to answer the question, what conclusions can you draw? Unlike most red herring fallacies, this one depends on the circumstances. In pure deductive reasoning, a conclusion drawn from another person’s silence is automatically fallacious. In abductive reasoning (inference to the most likely explanation), however, it is sometimes quite reasonable to draw meaning from the absence of communication.

If I claim to be an expert speaker of German, but won’t tell you the proper German phrase for happy birthday, which is more probable: (a) I know, but won’t tell you out of spite, or (b) I really don’t know, and my claims of expertise are overblown? Common sense suggests (b) is more likely than (a) — after all, it would be so easy for me to just tell you and move on. On the other hand, if I claim to know my wife’s password but refuse to tell you, it’s unreasonable to conclude that because I won’t tell you, I don’t know.

A fallacious use of the argument from silence is to shift the burden of proof. Bertrand Russell wrote that if he claimed a teapot were orbiting the Sun somewhere between the Earth and Mars, it would be unreasonable to expect others to believe him on the grounds they couldn’t prove him wrong. In other words, your silence in being unable to disprove an argument doesn’t constitute proof that the argument was right all along.

Historians sometimes use the argument from silence in drawing conclusions about what one group of people knew about another, on the grounds that some facts are so natural that their omission legitimately implies ignorance.

In American jurisprudence, the right to silence has the effect of barring the argument from silence, although there are some subtle ways to work it in.

Tuesday, December 6, 2011

Impure Motives (Part 16 of Fallacies/Red Herrings)


A short installment this week. Part 16 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week we cover the appeal to motive.

Appeal to Motive

Scientist A says that man-made climate change is a real threat. Scientist A is a candidate for tenure at a well-known university, and the head of his department believes the same thing. This means that Scientist A’s motives are impure, and therefore his argument is false.

This is the appeal to motive, another subcategory of the argumentum ad hominem. Its special feature is that it’s only necessary to show that there is a possibility of a motive, however small. What’s missing is proof that (a) the motive actually exists, (b) if it does exist, it actually played a role in formulating the argument and conclusion, and (c) even if it did, that any other proof or evidence offered is thereby invalid.

It’s related to general claims of conflict of interest. In the run-up to the Supreme Court hearings on the Affordable Care Act, accusations of conflict of interest have been leveled at three justices, two conservative and one liberal. Showing the potential for a conflict of interest isn’t the same thing as demonstrating that the votes of these justices are necessarily corrupt. In this particular instance, it’s more likely that the justices’ predispositions predate the events and actions leading to the charge.

Tuesday, November 29, 2011

Repugnicans and Libtards (Fallacies, Part 15)


Returning now to those thrilling days of yesteryear, the fifteenth installment of my series on argumentative fallacies continues our list of red herrings — responses to arguments that distract from the argument rather than address it directly. This week we cover the abusive fallacy, the appeal to equality, and the appeal to accomplishment.

Abusive Fallacy

Repugnicans and libtards — who could possibly take seriously anything they have to say? The abusive fallacy is an extreme form of argumentum ad hominem in which name-calling overcomes every other part of the discussion. The objective is to smear the individual and group so completely that anything they have to say is discredited.

When someone surrenders to the abusive fallacy, any pretense of rational discussion goes out the window. If Occupy Wall Street protestors are “dirty, smelly hippies,” there’s no reason to address the substance of any of their arguments. Similarly, if there’s nothing to the Tea Party but astroturf racists, then nothing they say need be taken seriously either.

Appeal to Equality

What does “equal rights” mean? After all, people aren’t “equal” in most measurable senses. We aren’t all of the same height, or the same age, or the same weight, or the same IQ, or the same income, or the same education. Though I subscribe fully to the moral concept of equal rights, the logical issues of equality are more complex. For example, I believe in equality of marriage rights, but I don’t accept that a fetus should be considered legally equal to a human being. I believe in freedom, but I’m willing for society to impose imprisonment or other penalties on people who commit certain offenses. Is this logically inconsistent?

No. The charge of inconsistency rests on a logical fallacy known as the “appeal to equality”: citing “equality” as proof that Person A should be treated the same as Person B is, by itself, insufficient to make the case.

The argument that gays and lesbians should be permitted to marry, or that a fetus deserves the civil rights of a person, requires additional reasoning. This is important, because we all — properly — make distinctions. The murderer does not have the same rights as a non-murderer; a child does not have the same rights as an adult.

It’s not inconsistent or logically inappropriate to make judgments and distinctions; in fact, it’s required.

Appeal to Accomplishment

In 1970, the Nobel Prize-winning chemist Linus Pauling published Vitamin C and the Common Cold, in which he claimed that very large doses of Vitamin C could prevent and relieve colds, among other health benefits. It’s probably fair to say that if I had written that book, no one would have taken it seriously.

The appeal to accomplishment is the logical fallacy that the accomplishments of the arguer serve as evidence in favor of his or her claim, whether or not the claim is necessarily related to the area of accomplishment.

While it’s reasonable to take a close look at a proposition because Expert A claims it’s true, it’s important not to confuse that with the belief that Expert A is necessarily right.

Tuesday, November 22, 2011

Wikipedia Mon Amour

The Problem With Wikipedia (xkcd), Randall Munroe
http://xkcd.com/214/

My late uncle Jack Killheffer was science editor of the Encyclopedia Britannica in the 1970s and 1980s. I remember his office in downtown Chicago as a sea of papers. Piles of documents filled most of the floor space. I’ve never been a clean desk person, but his desk was an archeological dig. He was a chain smoker; his ashtray was the size of a small dinner plate and resembled a fireplace that hadn’t been cleaned regularly.

I thought he had the world’s coolest job.

I’ve always liked encyclopedias. Before Uncle Jack joined Britannica, we had an Encyclopedia Americana. I browsed through the volumes randomly throughout my childhood. I seldom forget what I read; I can still trot out all sorts of odd information gleaned randomly from encyclopedias.

The oldest surviving encyclopedia is Pliny the Elder’s Naturalis Historia. He hadn’t quite finished proofing it when he died in the eruption of Vesuvius in 79 AD. Given the Greek root of the word (ἐγκύκλιος παιδεία, “general education”), it’s clear his was not the first, and he certainly wasn’t the last. De Nuptiis Mercurii et Philologiae ("The Wedding of Mercury and Philologia"), first of the medieval encyclopedias, came out in either the 4th or 5th century.

The Arabic renaissance produced the Encyclopedia of the Brethren of Purity; the Chinese in the 11th century released the Four Great Books of Song (Book 4, The Prime Tortoise of the Record Bureau, contained 9.4 million Chinese characters in 1,000 volumes).

The invention of printing triggered an explosion, including Chambers' Cyclopaedia, or Universal Dictionary of Arts and Sciences (1728), and the Encyclopédie of Diderot and D'Alembert (1751). Of the great encyclopedias of the 18th century, the Britannica is the oldest survivor, dating to 1768.

I’m proud to note that the first encyclopedia published in the United States was Dobson’s Encyclopedia (1788-1797), published by Philadelphia printer Thomas Dobson (no relation, alas). It was, for the most part, a rip-off of the 3rd edition of the Britannica, with various adjustments made to correct a British bias. Washington, Jefferson, Burr, and Hamilton all owned copies.

Although encyclopedias have a reputation for being objective, thorough, and reliable, accusations (some solid, some less so) of unfairness and inaccuracy have been leveled at just about all of them at one time or another. That’s unsurprising. In our long discussions of cognitive biases in earlier posts, we learned that misinformation and misperception are fundamental parts of the human condition.

Uncle Jack told me stories about the sensitive political negotiations that took place in pure science entries. People are passionate about facts and interpretations, and both are subject to argument. When I worked for the National Air and Space Museum (NASM), there were similar issues. The Smithsonian’s official position on the relative achievements of Wilbur and Orville Wright versus Smithsonian secretary and pioneer aviation figure Samuel Pierpont Langley is shaped in part by the terms of a contract between Orville Wright and the Smithsonian that contains the following:
"Neither the Smithsonian Institution or its successors, nor any museum or other agency, bureau or facilities administered for the United States of America by the Smithsonian Institution or its successors shall publish or permit to be displayed a statement or label in connection with or in respect of any aircraft model or design of earlier date than the Wright Aeroplane of 1903, claiming in effect that such aircraft was capable of carrying a man under its own power in controlled flight."
There’s been some minor controversy over the years as to whether NASM is unfairly taking sides, but the curators I knew assured me that as far as they were concerned, the facts of the matter lined up just fine with the language of the contract. The Langley claims should have been repudiated. The Wrights were first to fly.

Working in original-source history helped me develop a sense of how much of what we know is the result of a messy and imperfect process. We muddle our way to knowledge, and there’s nothing inherently wrong with that. We don’t have too many other options. Our knowledge not only isn’t perfect, it can’t be.

For that reason, I’ve often felt that the criticisms leveled against Wikipedia for inaccuracy and bias are excessive — though that doesn’t mean they’re wrong. All encyclopedias are crowdsourced; Wikipedia differs only in that the crowd isn’t paid. Well, not in money, anyway. There are many rewards in writing for an encyclopedia, not least the imprimatur of authenticity and accuracy conveyed by the brand name.

We all know that Wikipedia doesn’t count as a final authority — if you can’t confirm what you say from a more reliable source, no one will take you seriously. But for quick reference, an overview, a list of sources, and some basic preliminary data, it’s unbeatable. I probably use it 8-10 times every single day — four times for this article alone. If my factual need is trivial and extreme accuracy not necessary, that’s enough. When I need more, I dig deeper.

That’s true not only of Wikipedia, of course, but of any other source of information. All data — and even more so its interpretation — is suspect. Only by consulting multiple sources and striving for awareness of one’s own cognitive biases is it possible to arrive at some reasonable approximation of truth.

Wikipedia culture deservedly throws suspicion on its contributors. A neutral point of view is essential, but Wikipedia frowns most heavily on people receiving money for contributing to Wikipedia articles, ignoring many other sources of bias. There’s a large community of Wikipedia editors-for-hire (I know several of them myself), but the Wikipedia culture forces them to hide their conflicts rather than disclose them. Wikipedia vigilantes have been known to vandalize pages when a contributor is accused of taking money, but that doesn’t correct the problem; it makes it worse.

Bias is unavoidable, but the best cure for bias is sunlight. Wikipedia is large enough and important enough that it’s legitimate for people to earn money from it. There’s nothing new here; encyclopedia contributors pre-Wikipedia expected remuneration as a matter of course. It takes significant time, effort, and work to write and edit a good article. Volunteerism, as wonderful as it is, can only take you so far.

It’s important for bias and conflict of interest to be revealed, but not to be punished. The process of peer review and vigorous debate should be aimed not at expelling the biased, but rather toward greater accuracy, completeness, and consideration of all points of view.

Wikipedia is running one of its periodic fundraising campaigns now, and in the same way I contribute to public radio, I usually contribute to Wikipedia; I use it enough. I urge you to do the same. At the same time, I tend to think Wikipedia would do well to consider running Google-style ads; when done correctly, they add value to the search rather than corrupt it.

There’s nothing wrong with making money in the encyclopedia business. The most important thing is to get the information right.

Tuesday, November 15, 2011

The Ol' Yeller Maneuver (Managing Impossible Projects, Part 6 of 6)

The following series is adapted from a keynote I delivered at the Washington, DC, chapter of the Project Management Institute back in August. Parts also come from my book Creative Project Management (with Ted Leemann), published by McGraw-Hill. 

If You Build It, Will They Come?

Earlier, we discussed the story of the infamous automated baggage handling system at Denver International Airport (DIA), which burned through $250 million before being abandoned as unworkable.

There’s nothing inherently impossible about the concept of an automated baggage handling system, though obviously the implementation is tougher than it appears. No, this is the kind of project in which impossibility is situational: a function of the constraints. While we’ve focused on the Triple Constraints because of their universal application in project management, individual projects have other constraints as well.

The airlines themselves, oddly, had little initial involvement in the airport planning. This gave them substantial leverage later in the process. “If you build it, they will come” often carries a hefty price tag. In order to keep its costs down, United Airlines needed the baggage transfer system to take no longer than 45 minutes to route luggage among its flights.

In 1992, the automated baggage handling system was shoehorned into existing construction in what amounted to a “Hail Mary” play. In terms of project scope, the engineering involved amounted to a great leap forward from third-generation to sixth-generation technology.

Performance, obviously, was the project driver, with budget unavoidably the weak constraint. Significant cost and schedule overruns were guaranteed, and to a large extent acceptable — as long as performance goals were achieved.

So far, we have a very challenging project, but there’s no reason for a project manager to propose killing it. It’s not operationally impossible, and the value of closing the gap justifies a very high level of effort.

The Second Frog

The BAE project team officially recognized these key risks:
  • Very large scale of the project. 
  • Enormous complexity. 
  • Newness of the technology. 
  • Large number of entities to be served by the system. 
  • The high degree of technical and project definition uncertainty. 
The most important risk, however, was not mentioned: the complex stakeholder environment. The initial project was simply to serve United. DIA management expanded the contract to cover all terminals. DIA rejected the BAE proposal to build a 50,000 square foot prototype. Scheduling issues with other construction activities caused huge conflict.

There’s the old joke about the two frogs who fell into pots of water. One pot held hot water, and that frog immediately jumped out. The other pot was warming slowly, so the other frog felt no urgency about escaping until it was too late.

BAE was the second frog.

Politically Impossible

Because the project was not impossible from an engineering perspective, the fact that it became operationally impossible because of the constraints of the stakeholder environment tended to escape notice until it was too late.

On the other hand, political problems aren’t exactly unheard of. Project managers are supposed to perform a stakeholder analysis. This isn’t just about figuring out your customers — it’s about analyzing the political landscape.

The earlier we identify a risk or problem, the more options are available. If you accompany the sales team when bidding on a job, don’t confine yourself to a study of the technical issues. As project manager, you’re going to have to spend your days dealing with the people, and you can’t tell the players without a scorecard. If you detect political dangers, they need to be part of your risk analysis for the job. This needs to affect pricing and schedule, not just for your sake as project manager, but for the sake of the entire job.

If you get into the job and find that these issues are getting out of control, you likely don’t have the power to get out of the problem by yourself. You need allies, and you need them to figure out the problem for themselves. Most project managers see reporting (no matter how necessary) as something that takes time away from doing the work. Reporting, however, is a strategic tool to lay the information groundwork with your stakeholder community to bring them toward the correct understanding of the real situation.

The Ol' Yeller Maneuver

The best way to kill a project is to help the key stakeholders and decision makers reach the conclusion on their own, rather than you telling them. Remember that “operationally impossible” means you can’t figure out an answer. Leave open the possibility that someone else might have an answer you’ve missed. Sometimes they do have an answer for you. And if they don’t, they’re more likely to agree with your assessment.

Sometimes canceling a project is peaceful, sometimes bloody. This one ended with mutual lawsuits. That’s a powerful argument for acting early when the project is likely to be operationally impossible.

So let’s wrap up. A project is operationally impossible if you can’t do it within the stated constraints. There might be too little time, insufficient or wrong resources, or unrealistic or wrong performance criteria.

Sometimes the constraints can be changed, be made more flexible, or in some cases ignored altogether. If the constraints can’t be changed, perhaps you can work around them or accomplish the project in spite of its barriers. 

If the project’s still impossible — well, earlier I mentioned the idea of an American Movie Classics film festival of great project management movies. Apollo 13 is one candidate…but another is Ol’ Yeller. Sometimes a project manager’s job is to kill his own dog.

If the dog won’t hunt, and can’t be killed, the last solution is self-preservation. While captains are supposed to go down with their ships, we project managers are better off living to fight another day. 

The Three Envelopes

There’s the old joke about the outgoing project manager who left three numbered envelopes in a drawer, and told his successor that those envelopes contained the answers to the next three crises the project would face.

Inside the first envelope was a note that read, “Blame your predecessor.” The new person often has flexibility denied to the person previously holding the bag. You may have better luck challenging project assumptions and constraints.

Inside the second envelope, the note read, “Reorganize the department,” because — let’s face it — shuffling deck chairs on the Titanic is a long, noble tradition.

And in the third envelope, the note read, “Prepare three envelopes.” If in the final analysis the project really is impossible, it’s time to get while the getting is still good.

And that’s how to manage an impossible project.

The End.

Tuesday, November 8, 2011

Getting Around the Constraints (Managing Impossible Projects, Part 5)

The following series is adapted from a keynote I delivered at the Washington, DC, chapter of the Project Management Institute back in August. Parts also come from my book Creative Project Management (with Ted Leemann), published by McGraw-Hill. (Art by Baker and Hill Graphic Design.)



Managing Constraints

Constraints, operationally, are what stand between you and the completion of a successful project. If you think a given project may be impossible, it’s a function of the constraints you perceive. If the constraints (defined as the borders of the perceived box) can be modified, or if parts of them are optical illusions, then you may have new options available. The game has changed.

How can we change the envelope defined by our constraints?  Logic suggests two possibilities. If the constraints are real and have flexibility, you can modify them. If the constraints are imaginary, or have elements in them that are not real, you can get around them.  Multiple strategies exist for attacking each area, but they basically boil down to two: change the constraints or get around them.

(There is no "try.")

Change the Constraints
Analysis. Why is your preferred option best (or sometimes least bad) for the organization? Does your preferred option cause collateral damage elsewhere in the project’s environment? How much of this is political? How do other people view this concern? You have to understand the complete picture to see all the options, and just as importantly, to see all the dangers.
Negotiation. Some constraints are subject to negotiation. If you’re bidding on a contract, there’s a price at which you can’t afford the business. On the other hand, sometimes our organization makes the choice on our behalf. “We’ve already got the contract, this is the scope of work, and this is how much we can spend to get it done.” Probe the constraints to see which are negotiable and which are fixed by circumstances.  
Internally, negotiation is the process of making the business case. If you have force majeure to settle the argument, it’s not really negotiation. In negotiation, forcing is not an option. You can only win if you are able to help other people recognize and accept a victory of their own. 
Problem solving. Sometimes constraints are decided; other times they simply are. When an organization prepares a budget, they necessarily make decisions among desirable objectives. They could give you more (or less) money; they choose not to. But sometimes the money isn’t there. They would choose to give you more money; they can’t. You can argue with decisions; you can’t argue (though many try) with facts. That’s a problem. Some problems can be solved. In the Apollo 13 case we discussed earlier, they needed a particular resource (a filter cover), but there was nothing at hand to do the job. Then someone remembered the astronauts wore socks.
Requirements management.  There is an unfortunate sense in which written requirements too easily turn into holy writ. The purpose of requirements is to define operationally and specifically what the customer wants and wishes to pay for.  There’s always a delicate balance between imposing the detail necessary for control and allowing the flexibility necessary for exceptional achievement. 
Watch out for requirements that have outlived their usefulness, or have even become counterproductive to the mission. A small change in a requirement may be of little consequence to the project’s quality, and still spell the difference between success and failure.
Get Around the Constraints
Creativity.  Here is where positive brainstorming rejoins the flow. Systematic creativity – inspiration on time, on budget, and on spec – seems like a contradiction in terms, but professionals in many areas do it as a matter of course. The secret goes back to Thomas Edison’s famous ratio of one percent inspiration and 99% perspiration: creativity is something you can work at. Artists do rough sketches; writers do rough drafts; lightbulb inventors test filament after filament. It’s a process of discovery. As the old joke goes, Michelangelo created David by taking a big block of marble and chipping away all the pieces that didn’t look like David. 
Exploiting holes.  One of the tricks of structured creativity is understanding that some places are more likely to contain insights than others, and looking there first. The flexibility of the weak constraint is one good source of insight. So is available slack or float on non-critical tasks. Weaknesses and cracks in the structure of constraints may be exploitable.
Different approaches. Insanity, Ross Perot famously observed, is doing the same thing over and over again and expecting different results. Is there a way around your current obstacle if you switch approaches?
Rethink assumptions.  Assumptions can err on the side of optimism or pessimism. Conduct a sensitivity analysis of your assumptions: if a given assumption turns out to be true or false, how much impact will it have on your project? Investigate the assumptions with the most potential impact.

But sometimes a project really needs to die, and the project manager is often the one dispatched to do the dirty deed. There’s a skill to this, as well.


Next Week: The Ol' Yeller Maneuver

Tuesday, November 1, 2011

Orange Ropes (Managing Impossible Projects, Part 4)

The following series is adapted from a keynote I delivered at the Washington, DC, chapter of the Project Management Institute back in August. Parts also come from my book Creative Project Management (with Ted Leemann), published by McGraw-Hill. 




Sometimes in organizations there is a form of relentless optimism, the corporate cheerleader equivalent of “failure is not an option.” I appreciate the motivational value of positive thinking, but there’s a huge and often overlooked brainstorming value in negative thinking. You can’t figure out which constraints may be illusions until you make a list of constraints in the first place.

When people perceive that the job they have been given is impossible, or that the conditions under which they must labor are unfair or insufficient, it’s usually not a good idea to push optimism down their throats — too often, it backfires and makes the situation worse. It’s no use telling people not to think of negative things; it’s like me telling you not to think about the color orange or about an elephant.

(Are you thinking about an orange elephant now?)

Orange Ropes 

Speaking of orange elephants, I don’t know how many of you have ever trained an elephant, but if you want to train an elephant you start with a baby elephant and an orange rope. The color is very important.

You tie the orange rope around the leg of the baby elephant, and fasten that to a stake in the ground. The baby elephant tries to get away, but he can’t break that orange rope.

Over the years, the baby elephant grows to full size, but he’s learned his lesson: you can’t break an orange rope. Now, you can take some flimsy, rotten rope, spray paint it orange, and the elephant will treat it as unbreakable. But if the elephant ever figures it out…he’s free.

Some constraints are fake, and others are real…but they change. Experience can be a wise teacher, or it can blind you to a new and different reality. That’s why I believe that negative brainstorming is a hugely overlooked tool for managing difficult or impossible projects.

Negative Brainstorming

A negative brainstorming process works just like a conventional brainstorming session. Participants offer potential ideas on a specific topic with no criticism or evaluation of ideas or suggestions allowed. The major difference in negative brainstorming is that the specific topic — and the focus of ideas — is negative.

In conventional brainstorming, the focus is on finding creative ways to solve the problem. In negative brainstorming, the focus is on finding all the obstacles, barriers, and events, including internal, external, and self-imposed ones, that could prevent completing the project as currently defined.

Here’s a list of good questions to get a negative brainstorming session started.

  • Why is this project impossible?
  • What are all the things we can’t possibly do?
  • What are all the things others can do that will prevent us from accomplishing this project?
  • What ideas can we think of that absolutely are not worth trying?
  • What’s the worst possible decision we could make right now?
  • What could we do to turn this project into a complete catastrophe?
  • Why are we doomed to fail?

In asking these negative questions, I don’t mean to imply that they are necessarily accurate descriptions of reality. They don’t have to be. What the questions have to do is correspond to the cognitive biases that keep us from finding a solution.

Doomed, But Hopeful

We may not in fact be “doomed to fail,” but a negative brainstorming exercise on “Why are we doomed to fail?” is a powerful way to bring the most serious risks and issues to the surface where our team can deal with them.

Negative questions like these can be used with all sorts of brainstorming techniques. One approach has the participants respond in round-robin style; another is a simple free-for-all in which participants offer ideas as they occur to them. The leader can set a time limit or a target number of ideas before concluding the process. The important thing is to concentrate on finding all the negative possibilities, rather than stopping to solve the barriers as they are identified during the brainstorming phase.

In negative brainstorming, it’s vitally important to encourage participants to offer even the most outrageous possibilities that could negatively impact the project. Our goal is to elevate concerns from the subconscious background into the conscious spotlight of project management, and we can only do that if we recognize what they are in the first place. If people feel criticized for stupid suggestions, the total number of suggestions will go down, including the not-stupid ones. That’s why, as in all brainstorming processes, the initial phase is to gather ideas — not solve problems or criticize specific contributions.

After completing the negative brainstorming session, the evaluation process begins by taking each negative idea in turn and determining (1) if you can overcome the obstacle, (2) if so, how, and (3) if not, what then?

At least some (perhaps most) of the constraints, barriers, and issues you identify will turn out to be both real and solid. That’s completely normal. You are looking for the exceptions. In positive brainstorming, most ideas turn out to be of limited utility, but if you get one winner it can be a game changer.

In negative brainstorming, if most constraints turn out to be solid, but there are exceptions, the project can go from impossible to possible — occasionally even easy — in the blink of an eye.


Tuesday, October 25, 2011

The CHAOS Report (Managing Impossible Projects, Part 3)





Photo: Baggage system, Denver International Airport


The following series is adapted from a keynote I delivered at the Washington, DC, chapter of the Project Management Institute back in August. Parts also come from my book Creative Project Management (with Ted Leemann), published by McGraw-Hill. 



The Standish Group’s annual CHAOS Report, which tracks software project performance, reported these abysmal numbers for 2009:

  • 32% on time, on budget, to spec
  • 44% finished late, over budget, or partially completed
  • 24% failed, cancelled, or abandoned

Why do so many projects fail, either in part or in whole?

Certainly, there are badly managed projects — even some headed by PMPs. But it’s hard to blame a 68% defect rate on poor practitioners alone.

Sometimes the problem is the definition of success. By the logic of project management, the Leaning Tower of Pisa was an abysmal failure: late, over budget, and…well, aren’t buildings supposed to be straight? The Standish Group would clearly label the tower as a failed project, yet if the tower didn’t lean, who today would bother to visit?

The third reason, of course, is that the project was problematic to begin with — arguably or actually impossible. This can happen for a variety of reasons. In the case of Apollo 13, the constraints of time, resources, and performance were established by the situation, not by the will or desire of the project managers or owners. They were what they were — whether they were achievable was a separate issue.

In the Battle of the Bulge, the German Ardennes offensive had already begun. We would be able to get troops to the front to relieve the pressure on beleaguered First Army units, or we wouldn’t. Neither project managers nor project owners necessarily control the constraints that drive their projects.

Sometimes we’re just making a guess when we establish a project’s constraints. The original budget for Denver International Airport was $1.2 billion, and the original deadline was October 1993. The final price came in at $4.8 billion and the actual opening date was February 1995. (The infamous automated baggage system, budgeted at $186 million, wasn’t cancelled until 2005, by which time its construction costs were increasing at a staggering $1 million a day!)

Denver wasn’t an example of engineering or technical failure — no, not even the baggage system. The failure was driven by political considerations, including a nonstop war among several key stakeholders. When customer conflict generates mutually exclusive requirements, “impossible” becomes just another word for nothing left to lose.

In any event, we project managers are hired hands. Sometimes we may do double duty as customers or sponsors of our own projects, but when we put on our PMP hats, we’re here not to decide, but to execute. We are bound by the decisions and choices of others, and sometimes we start the project on the precipice of failure. After all, how many of you get to decide your own timelines, set your own budgets, and establish your own performance requirements?

I didn’t think so.

Still, you don’t want to be too quick to pull the trigger. Let’s imagine that you have a project and your experience tells you it can’t be done. Isn’t delaying the inevitable bad news just going to make the problem worse?

That depends on what you do in between receiving the assignment and giving the answer. Even if your project’s impossible, or at least compromised, there’s still a customer problem needing to be solved. Telling people what they can do and what they can have tends to get a better reception than telling people what they can’t do and what they can’t have.

That’s why the first step in managing a potentially impossible project is analysis. When you analyze an apparently impossible or potentially impossible project, you may learn different things.


  1. You confirm that the project is in fact impossible, and can provide evidence to the customer. You and the customer can begin to figure out what alternatives may exist or how to deal with the consequences of an unsuccessful project.
  2. You confirm that the project as originally proposed is in fact impossible, but are able to find potential changes that will make the project possible, which you can present to the customer.
  3. You confirm that the project as stated is in fact impossible, but are able to offer alternatives and compromises that might satisfy at least some of the customer’s requirements and needs, or close part of the gap.
  4. You can’t confirm that the project is in fact impossible, but you can identify at least some of the risks and challenges you face, which you and the customer can then assess.
  5. You find a creative way around the barrier that made the project impossible, and achieve the original goal.

Even if your analysis leads to the first outcome (it’s just flat impossible), your situation is still improved by your ability to give a thoughtful reply with supporting evidence, and by your attitude in making a good-faith attempt to solve the problem.

Partial successes (outcomes two, three, and four) are a marked improvement. Even if the project is impossible — or highly risky — as stated, the customer may be able to get a significant portion of what he or she wants. Plus, it’s well known that the first approximation of available constraints may not be the final word. There may be more to draw on. And again, people tend to react better to hearing what they can have, and less well to hearing what they can’t have.

We renovated our house last summer, and the project was on time, exceeded our expectations — and cost about twice as much as we’d planned. We still call it a rousing success, because we never really expected to meet the budget anyway. It was a hope, not a realistic assessment. The Standish Group would call it a failure, but we don’t — and the customer is always right.

The fifth outcome (solve the problem with creativity) is ideal, but often challenging and not always successful.  The best direction to find the creative answer is, paradoxically, to focus on the barriers in the first place.



Next Week: The Power of Negative Thinking!

Tuesday, October 18, 2011

Potentially Impossible (Managing Impossible Projects, Part 2)

The following series is adapted from a keynote I delivered at the Washington, DC, chapter of the Project Management Institute back in August. Parts also come from my book Creative Project Management (with Ted Leemann), published by McGraw-Hill. 


The Most Dangerous Word

At the outset of the project, you may not always know whether the project is possible or not. That’s why the process of managing an impossible — or potentially impossible — project begins at the outset, during the very first stages of project initiation.

Here’s one of Dobson’s Laws of Project Management: The most dangerous word in project management is a premature “yes.”

Premature certainty, whether it’s positive or negative, can backfire. Saying “yes” before you really know what you’ve said “yes” to can result in a world of trouble.

You can also get in a world of trouble by being too quick to say “no.” Even if your experience and wisdom tell you the project’s impossible on the face of it, saying so too quickly will produce a negative reaction. And, frankly, sometimes “no” is just not going to be an acceptable answer.

When you say, “Sorry, that’s impossible,” they think, “You didn’t even try!”

Failure, Alas, Is an Option 


Let’s set the Way-Bac machine for April 14, 1970, just about 56 hours into the flight of Apollo 13. With the spacecraft about 200,000 miles away from Earth, Mission Control asked the crew to turn on the hydrogen and oxygen tank stirring fans. About 93 seconds later there was a loud “bang.” Oxygen tank #2 had exploded.


If American Movie Classics ever ran a series of “Great Project Management Movies,” surely Ron Howard’s Apollo 13 would be a natural candidate. Faced with a potentially disastrous accident, project teams overcome one potentially fatal barrier after another to bring the crew safely back to Earth, guided by flight director Gene Kranz’s mantra, “Failure is not an option.”

But of course failure is an option, and in the case of the Apollo 13 mission, the odds were heavily stacked against a happy outcome— and everybody (including Gene Kranz) had to be well aware of that fact. Within the overall project “get the astronauts home safely,” there were numerous subprojects, including:

  • Develop a power-up sequence that draws fewer than 20 amps
  • Calculate a burn rate to get the reentry angle within tolerance using the sight of the Earth in the capsule window as the sole reference point
  • Design a way to fit the square command module carbon dioxide scrubber filter into the round Lunar Excursion Module (LEM) filter socket.

That last subproject was vital, because the LEM’s carbon dioxide scrubbers were designed to take care of the needs of two people for a day and a half, not three people for three days. And nobody ever imagined that the command module scrubbers would need to be used in the LEM, so they weren’t designed to be compatible. They’re square, and the necessary holes are round. Meanwhile, the carbon dioxide levels are already past 8, and at 15 things become dangerous, and eventually deadly. As the engineers gather in a conference room, boxes of miscellaneous junk — basically everything that’s loose on board the spacecraft — are being dumped on tables.

Managing a Crisis Project

Let’s look at the project management problem.

It’s easy to build a carbon dioxide filter on Earth; there’s a standard specification, a deadline measured in weeks, if not months, and all the resources you need are easy to acquire. In a crisis situation, such as existed aboard Apollo 13, the project looks a little different.

At the beginning of the project, the engineers involved could not know whether the project would turn out to be ultimately impossible. Impossibility could exist in any of the three fundamental constraints.

  • Does the time available to accomplish the project equal or exceed the time necessary? 

In developing a replacement for the Apollo 13 mission’s overloaded carbon dioxide filter, engineers were constrained by the amount of time until the astronauts became too impaired to build what they designed. If the deadline turns out to be too short, then the project is impossible.

  • Are the resources needed to accomplish the project less than or equal to the resources available? 

The project was constrained by what was actually available on the spacecraft. If their resources are short by even one critical component, no matter how small — a 20¢ screw — the project is impossible.

  • Are the performance criteria achievable within the outer boundaries of the other constraints? 

If the improvised filter can’t be made to work long enough for the astronauts to get back to Earth, when they can return to the command module for reentry, then the project is impossible.
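To make those three questions concrete, here is a minimal sketch of the feasibility test they imply. The class, the function, and the Apollo-flavored numbers are my own illustrative inventions, not figures from the mission or from Creative Project Management; real estimates are obviously messier than single numbers.

```python
from dataclasses import dataclass

@dataclass
class ConstraintEstimate:
    """Rough estimates for one crisis project (field names are illustrative)."""
    time_needed_hours: float
    time_available_hours: float
    resources_needed: set
    resources_available: set
    performance_achievable: bool  # can the result work well enough, long enough?

def operationally_possible(p: ConstraintEstimate) -> bool:
    """The project is operationally impossible if any one constraint can't be met."""
    enough_time = p.time_available_hours >= p.time_needed_hours
    enough_resources = p.resources_needed <= p.resources_available  # subset test
    return enough_time and enough_resources and p.performance_achievable

# Illustrative numbers only -- not the actual Apollo 13 engineering data.
scrubber_fix = ConstraintEstimate(
    time_needed_hours=12,
    time_available_hours=24,
    resources_needed={"CM canister", "hose", "duct tape", "sock"},
    resources_available={"CM canister", "hose", "duct tape", "sock", "flight plan cover"},
    performance_achievable=True,
)
print(operationally_possible(scrubber_fix))  # True
```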

As we all know, the Apollo 13 engineers did come up with a workable solution, but that was hardly guaranteed. Had the constraints been slightly different — less time, fewer resources, more challenging performance standards — the outcome would likely have been failure.

But that doesn't mean you shouldn't try.


Next Week: The CHAOS Report!

Tuesday, October 11, 2011

If I Had a Lever (Managing Impossible Projects, Part 1)

The following series is adapted from a keynote I delivered at the Washington, DC, chapter of the Project Management Institute back in August. Parts also come from my book Creative Project Management (with Ted Leemann), published by McGraw-Hill. 



If I Had a Lever...

When we say “nothing’s impossible,” we usually mean that given unlimited time, unlimited resources, and really flexible performance standards, we can do anything. “Give me a lever long enough and a platform to rest it on, and I will move the world,” said Archimedes, but he was obviously not a project manager. Our projects are constrained: the iron triangle of resources, time, and mandatory scope covers only three of the dimensions that restrict our options.

There are many types of impossibility: legal impossibility, scientific impossibility, metaphysical impossibility, and logical impossibility, to name a few. Each has its own definition and its own specific context. Our question is more focused: what does “impossible” mean in the context of project management — and more importantly, in the context of your project?

We define a project as “operationally impossible” if it cannot be accomplished within the boundaries of its mandatory constraints. Of course, seemingly “impossible” projects succeed all the time, and there are a number of proven strategies that work — at least part of the time.


  • Sometimes a team discovers a brilliant critical insight, or is simply smart enough and good enough to achieve what lesser mortals cannot.
  • Sometimes the team gets it done by sheer Herculean effort, working harder and longer than anyone expects. The project succeeds, but sometimes the organization pays a long-term price.
  • Sometimes the team gets it done, but the outcome is compromised. Maybe the project cost more, or took longer, or did less. Sometimes we can point to the corpses of the projects we sacrificed in order to make the current project succeed.
  • And sometimes the team fakes it, slaps a coat of paint on it, and hopes nobody notices that the wheels have fallen off. (Make sure your résumé is up to date first.)


There are, fortunately, some better answers, and in this and the next few blog posts we’ll journey through history to see what lessons we can pick up.

Patton and the Bulge

Let’s set the Way-Bac machine for December 19, 1944, when the Allied High Command is meeting in Verdun to plan its response to the German offensive Wacht am Rhein, popularly known as the Battle of the Bulge.

Elements of the United States First Army, supposedly in a quiet sector of the front, are pinned down in Bastogne. Eisenhower asks the assembled generals how long it will be before Allied forces will be able to relieve the beleaguered Americans at Bastogne. British Field Marshal Bernard Montgomery says it will take weeks.

George Patton, commander of the United States Third Army, jumps in. (Patton, by the way, sounded a lot less like George C. Scott than he did like Ross Perot.)

“I can attack with three divisions in 48 hours,” he says.

The response from the other generals was not polite.

Patton’s boss, General Omar Bradley, was not amused. “Ike wants a realistic estimate, George. You’re in the middle of a fight now. It’s over a hundred miles to Bastogne.”

They were, of course, right to be skeptical. Extricating three divisions from a tough fight and moving them 100 miles in 48 hours?  That’s not just difficult, that’s downright impossible. 

Let’s see why.

A division is an Army unit consisting of approximately 15,000 soldiers, along with everything they need to do their job. Imagine picking up a town of 45,000, along with all the services needed to keep them going, and moving 100 miles in 48 hours…and forget the interstates; there aren’t any. Just for starters, if you don't have a detailed movement plan, you'll end up with the world's biggest traffic jam. 

Armored vehicles are gas-guzzlers, people have to eat, and soldiers need ammunition. That means you'll have to pre-position gas, food and supplies along the route. 

A moving division is more vulnerable than a division on defense. That means you need fighting units to protect moving units, and they need more gas and food and ammunition.

A move of this nature requires a planning staff in the hundreds. In World War II, without cell phones, laptops, and GPS units, orders were typed on mimeograph stencils, duplicated, and hand-carried to unit commanders stretched out over an immense area. Today's technology is far superior, but so are the demands involved.

It takes weeks to pull off an operation like this. It really can’t be done in 48 hours. It’s an impossible project — flat out impossible.

And yet it was done.

But wait a minute. If it was done, then wasn’t it by definition possible?

As with any good magic trick, the key often lies in challenging your assumptions. Eisenhower’s headquarters learned about the German Ardennes offensive late in the game, and that’s why Patton needed to move within 48 hours. But Patton, alone among senior Allied commanders, had anticipated the possibility, deployed his own intelligence resources, identified the threat, and bought himself the extra time he needed. He didn’t do it in 48 hours. He changed the constraints.

There’s an earlier scene in the movie Patton in which he instructs his staff to begin planning the move northward. His staff had several weeks to prepare three different contingency plans. All Patton had to do when the meeting broke up was walk downstairs to his jeep, call his headquarters on the radio, say “Nickle,” and the forces were on their way. (His driver, Sergeant Mims, reportedly said, “I don’t know why they need all them other generals. You and me can run this whole war out of your jeep.”)

If a project can’t be done within its constraints, one obvious approach is to follow Patton’s example, and alter the constraints, but of course, that’s not always an option. Sometimes, the project is inherently likely to fail — but that doesn’t mean you don’t still have to manage it.

Next week — Failure IS An Option!

Tuesday, October 4, 2011

Riddikulus! (Part 14 of Fallacies)

More red herrings, argumentative fallacies that distract from the argument rather than address it directly. This week, the final three appeals to emotion: the appeal to ridicule, the appeal to spite, and wishful thinking.

Reductio Ad Ridiculum

Riddikulus! As all Harry Potter fans know, the way to defeat a boggart is to convert it from an object of terror to an object of mockery. While the spell clearly works, in real life, the appeal to ridicule is a type of red herring fallacy in which the opponent presents the original argument in a way that turns it into a mockery of itself, either by emphasizing the counter-intuitive aspects of the original argument, or by creating a straw man to debunk it.

An example of the first approach is the argument, “If Einstein's theory of relativity is right, that would mean that when I drive my car it gets shorter and more massive the faster I go. That's crazy!” It’s also true. The problem is that the effects are not easily measured at automobile speeds, but only become significant as the object nears the speed of light.
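To see just how unmeasurable the effect is at driving speed, here is a quick back-of-the-envelope sketch; the 70 mph speed and 5-meter car length are illustrative values of my own, not figures from the argument above.

```python
import math

C = 299_792_458.0        # speed of light, m/s

v_car = 31.3             # roughly 70 mph, in m/s (illustrative value)
beta_sq = (v_car / C) ** 2

# Exact Lorentz factor, plus the first-order approximation gamma ~ 1 + (v/c)^2 / 2,
# which is numerically better behaved this close to 1.
gamma = 1.0 / math.sqrt(1.0 - beta_sq)
gamma_minus_one = beta_sq / 2.0                  # about 5e-15 at highway speed

car_length_m = 5.0                               # a typical sedan (illustrative value)
contraction_m = car_length_m * gamma_minus_one   # L - L/gamma ~ L * (gamma - 1)

print(f"gamma - 1     ~ {gamma_minus_one:.1e}")  # ~5.4e-15
print(f"length change ~ {contraction_m:.1e} m")  # ~2.7e-14 m, far less than an atom's width
```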

The second approach misrepresents the argument in order to ridicule it. “If evolution were true, that would mean that all the apes wouldn't be here any more, since they all would have evolved into humans!” That’s ridiculous indeed — but it’s not actually implied or stated in the Theory of Evolution.

Argumentum Ad Odium

The appeal to spite exploits existing bitterness or dislike in its attack. The various attacks on union benefits (such as retirement), particularly for government workers, rely on the negative emotions aimed at the target group as the primary justification for cutting back or cancelling previously agreed-upon benefits. “Why should people enjoy a comfortable retirement with my tax dollars?”

Wishful Thinking

Wishful thinking is based on the premise “I wish P were true/false, therefore P is true/false.” You see this in a lot of superstitious behavior, from chain letters to the belief in UFOs. Personally, I think it would be really cool if aliens did in fact visit Earth — but that doesn’t make it true.

Tuesday, September 27, 2011

Making the Kessel Run in Under 12 Parsecs

From xkcd, by Randall Munroe.

On the OPERA neutrino experiment, the conservative war on relativity, and the operation of science (with a nod to the biggest science boo-boo in Star Wars).

One can understand, if not necessarily agree with, the conservative war on the Theory of Evolution. If you believe that a specific book must be taken literally, and that book states that the Earth was created in seven days, then it is impossible for Darwinian evolution to have taken place. Either the book in question is not literally true, or the theory must be wrong. You can’t have both.

I must admit, though, to being perplexed by the conservative opposition to the Theory of Relativity. I first encountered this when one of my White House speechwriter friends asked me to review an unpublished manuscript ostensibly debunking Einstein. (It was written by a political science major.) The book was filled with bold assertions, dismissal of contrary evidence, and an ongoing hint that relativity was enthroned not because of its merits as a theory, but because of an ongoing conspiracy by what xkcd refers to as the “Science Thought Police.” In other words, it was just like the anti-evolution arguments.

The objection to relativity seems to rest on a semi-religious foundation: the idea that reality is in some sense objective, without uncertainty or variability. There’s an existential threat in the idea that some parts of reality are subjective, variable, or…well, relative. More importantly, the objection goes to the heart of the idea of science itself, the idea that the experimental method, peer review, and testable hypotheses can separate the valid from the invalid. For those whose authority rests on a foundation of “truth,” testability can be quite inconvenient.

Which brings us to the OPERA neutrino experiment, in which the underground Gran Sasso Laboratory measured the velocity of neutrinos arriving from CERN as slightly faster than the speed of light (c). The difference is not great — 60 nanoseconds faster over a distance of 730 kilometers (with an expected error of ±10 nanoseconds) — but any evidence that a particle can exceed the speed of light poses a huge challenge to the foundations of modern theoretical physics. (For the curious, the official paper can be found here.)

Not so fast. The researchers themselves have not made any claims so bold. Before publishing, they looked for experimental errors. They re-ran the experiment. They looked at a variety of potential explanations and controlled for as many variables as they could. By publishing the results, they aren’t making the claim that they’ve refuted Einstein, but rather quite the opposite: they are appealing to the scientific community to repeat the experiment and see if they can discover why the OPERA results are incorrect.

That’s as it should be. Given the overwhelming body of evidence post-Einstein that confirms the principles of relativity, an anomalous result deserves to be treated with skepticism. Extraordinary claims, as they say, require extraordinary proof.

In fact, there’s already a strong experimental case that undercuts the OPERA results. On February 23, 1987, light from a supernova in the Large Magellanic Cloud, one of the two dwarf galaxies that orbit our own Milky Way, reached Earth after traveling approximately 168,000 light years. It was the closest supernova to Earth since Kepler’s Supernova of 1604, and the first since then to be visible to the naked eye.

About three hours before the visible light from SN1987A reached Earth, neutrino bursts were observed at three different neutrino observatories. This doesn’t mean those neutrinos traveled faster than light; the light has to wait for the blast from the collapsing core to reach the stellar surface, while neutrinos zip right through the intervening material. If those neutrinos had been traveling as fast as the OPERA observations indicate (arriving 60 ns early over a 730 km baseline), the neutrino bursts should have arrived nearly four years before the visible light.
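The arithmetic behind that figure is simple enough to check in a few lines of Python (a rough sketch, using the approximate numbers quoted above):

C = 299_792_458.0        # speed of light, m/s
early_s = 60e-9          # OPERA neutrinos arrived about 60 ns early
baseline_m = 730e3       # over a baseline of about 730 km
distance_ly = 168_000    # approximate distance to SN1987A, in light years

light_time_s = baseline_m / C       # light needs ~2.4 ms to cover 730 km
fraction = early_s / light_time_s   # fractional head start: about 2.5e-5

# To first order the head start scales with distance, so over 168,000 light
# years the lead works out in years rather than nanoseconds.
print(fraction * distance_ly)       # about 4.1 (years)

Roughly 4.1 years of lead time, not the three hours actually observed.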

What, then, do the OPERA results mean? One possibility (the likeliest, according to most of the blogging scientists I’ve been reading in the past few days) is that a measurement error did in fact occur. If there’s no measurement error, and it turns out that the neutrinos really do move slightly faster than our current measurement of c, it still won’t “refute” Einstein; our GPS units (which rely on relativistic corrections) still work. Einstein, after all, didn’t “refute” Isaac Newton. Newton’s laws work quite well in the majority of cases. It’s only at extreme speeds and under unusual conditions that relativistic differences matter.

What it will mean, if the results hold up under scrutiny, is that adjustments will be made. The models we use to understand the universe will require modification, becoming more accurate. There’s a small chance we’ll discover some important breakthrough that changes our understanding of the universe in materially significant ways.

In that case, the better parallel will be the 1887 experiments by Albert Michelson and Edward Morley, which compared the speed of light in different directions in an attempt to detect the motion of the Earth with respect to the luminiferous aether. They also didn’t get the results their theory predicted. The discrepancy was small enough at first to be attributed to experimental error, but as the results grew more and more solid, it finally became clear that there was something wrong with the aether hypothesis.

Although Einstein’s theory of relativity did not rest on “science’s most famous failed experiment,” Michelson-Morley did serve as evidence that helped the new theory of a constant speed of light gain widespread acceptance.

Tuesday, September 20, 2011

Dives and Lazarus (Part 13 of Fallacies)

More red herrings, argumentative fallacies that distract from the argument rather than address it directly.

Argumentum ad crumenam

If you’re so damn smart, why ain’t you rich?

The argumentum ad crumenam, or argument to the purse, suggests that the truth of the proposition can be supported by the wealth of the speaker. If you’re so smart, why ain’t you rich? In other words, if you’re rich, you must be smart.

The rebuttal to this argumentative red herring can be made in only two words:

Donald Trump.

Argumentum ad lazarum

The reverse is known as the appeal to poverty. It takes its name from the parable of the rich man and Lazarus (Luke 16:19-31), in which the rich man suffers the torments of Hades while the beggar Lazarus enjoys the delights of heaven.

While there’s significant Biblical support for the comparative virtue of poor versus rich (see Matthew 19:24, “And again I say to you, It is easier for a camel to go through the eye of a needle, than for a rich man to enter into the kingdom of God.”), virtue and logical argument don’t necessarily correlate. If it’s not necessarily true because a rich person says it, it’s no more true if a poor one does.

Tuesday, September 13, 2011

Known by the Company We Keep (Part 12 of Fallacies)

In the next part of our continuing survey of red herrings (responses to arguments that don’t address the actual argument but merely distract from it), we’ll look at the two types of association fallacies: guilt by association and honor by association. Depending on your point of view, they can be one and the same.

Guilt by Association

Association fallacies take the following form: (1) A is a B. (2) A is also a C. (3) Therefore, all Bs are also Cs. (More formally, (∃x ∈ S : φ(x)) → (∀x ∈ S : φ(x)), which means “if there exists any x in the set S so that a property φ is true for x, then for all x in S the property φ must be true.”)

Of course, the conclusion doesn’t follow. The classic rebuttal goes like this:

(1) All dogs have four legs.
(2) My cat has four legs.
(3) Therefore, my cat is a dog.

The PolitiFact Truth-O-Meter recently gave a “Pants On Fire” rating to an August 17 blog post by Texas radio host Dan Cofall, which read in part, “The magic number ‘70’ is the number of members of the 111th Congress who are members of the Democratic Socialists of America (DSA). These are not just politicians who vote left of center; these are card-carrying members of ‘The Democratic Socialists of America.’” (The "70" to which Cofall refers is the membership of the Congressional Progressive Caucus — though I do not believe they actually issue membership cards.)

There are two guilt-by-association attacks here, one direct and one indirect. The indirect attack is in the term “card-carrying,” an echo of McCarthy-era HUAC anti-communist campaigning. The implication goes like this:

(1) Dues-paying members of the Communist Party carry cards.
(2) Members of Group X (Democratic Socialists, ACLU, etc.) carry cards.
(3) Therefore, members of Group X are Communists.

The direct attack takes this form:

(1) The Democratic Socialists of America have a platform with a number of ideas.
(2) Some members of the Congressional Progressive Caucus have ideas that overlap with some items on the DSA agenda.
(3) Therefore, all members of the Congressional Progressive Caucus are "card carrying" members of the Democratic Socialists of America.

Of course, liberal Democrats and Tea Party members also share some specific ideas (they both like the idea of voting, for example), but it hardly follows that all liberal Democrats are Tea Party members, or vice versa.

The Democratic Socialists are somewhat chagrined. “If we had formal political relationships with 70-odd members [of Congress], we would be making a lot more money” from dues. And as far as they’re concerned, the problem with the Congressional Progressive Caucus is that the members aren’t nearly socialist enough; they prefer a third party movement.

Whether it’s “guilt by association” or “honor by association” may depend on your point of view. When Bill O’Reilly said on his January 19, 2005, broadcast, “Hitler would be a card-carrying ACLU member. So would Stalin. Castro probably is. And so would Mao Zedong,” I decided to look at it this way:

(1) Bill O’Reilly and George Bush say bad things about “card carrying” ACLU members.
(2) I think O’Reilly and his fellow-travelers are jackasses.
(3) Therefore, I joined the ACLU…just so I can carry my card.

Tuesday, September 6, 2011

Hunt/Liddy Special Project 1 (Watergate Part 3)


Here’s the third part of my occasional series tracing the 40th anniversary of the Watergate scandal.

As noted previously, a big motive in the Watergate cover-up had nothing to do with the actual burglary, but rather with the previous activities of the White House Plumbers Unit. Their first operation, “Hunt/Liddy Special Project 1,” was part of the Nixon Administration’s response to the leaking of the Pentagon Papers by one of its contributors, Daniel Ellsberg, Ph.D. (Ellsberg has appeared in this blog before, in our discussion of the ambiguity aversion effect, better known as the Ellsberg paradox.)

Ellsberg, a former military analyst employed by the RAND Corporation, was one of 36 members of the Vietnam Study Task Force, established by then-Secretary of Defense Robert McNamara to produce an “encyclopedic history of the Vietnam War.” The report, “United States—Vietnam Relations: 1945-1967,” was so secret that it was kept from President Lyndon Johnson, Secretary of State Dean Rusk, and National Security Advisor Walt Rostow. The final report contained 3,000 pages of historical analysis and 4,000 pages of original government documents, published in 47 volumes. It was classified “Top Secret — Sensitive,” a designation meaning that publication of the study would be embarrassing. The print run was 15 copies.

In October 1969, Ellsberg, who had grown to oppose the Vietnam War, along with Anthony Russo, photocopied the study and showed it to Henry Kissinger, William Fulbright, George McGovern, and others. None was interested. It was not until February 1971 that Ellsberg first discussed the report with New York Times reporter Neil Sheehan. In March, Ellsberg gave Sheehan 43 of the 47 volumes, and the Times began publishing excerpts from the study starting in June 1971. At that time, the nickname “Pentagon Papers” first came into use.

The reaction was much the same as that which followed the WikiLeaks disclosure of State Department cables. While numerous claims of damage to US military and intelligence operations gained headlines, the reality was that the Papers talked about events that had happened years before. Nixon Solicitor General Erwin N. Griswold later called the Papers an example of "massive overclassification" with "no trace of a threat to the national security.”

The Papers effectively became public knowledge when Senator Mike Gravel (D-AK) entered 4,100 pages from the report in the Congressional Record. Richard Nixon originally wasn’t interested in prosecuting Ellsberg or the Times, because the study only embarrassed the Johnson and Kennedy administrations, but Henry Kissinger argued that this would set a negative precedent, and Ellsberg and Russo were prosecuted under the Espionage Act of 1917.

In order to come up with more evidence to discredit Ellsberg, the Plumbers received their first mission, which took place on September 3, 1971. It was a burglary operation, targeting the office of Daniel Ellsberg's Los Angeles psychiatrist, Lewis J. Fielding. The break-in team reported they couldn’t find Ellsberg’s file, but Fielding himself later said that not only was the Ellsberg file in his office, but he had also found it on the floor the morning after the burglary. Someone had clearly gone through it.

John Ehrlichman, Assistant to the President for Domestic Affairs, reported to Nixon, saying (on tape), “We had one little operation. It’s been aborted out in Los Angeles which, I think, is better that you don’t know about.” Later, when the whole story came out, the case against Ellsberg turned into a mistrial because of government misconduct, and all charges were dismissed.

What’s fascinating to me in all this is how unsuccessful the Plumbers Unit actually was. Far from achieving its goal of discrediting Ellsberg, the burglary (and related wiretapping) actually contributed to the dismissal of the case.

You can download the complete Pentagon Papers (including the parts Ellsberg didn't release) from the US National Archives here.

For more on the Gang That Couldn’t Shoot Straight, stay tuned.

Tuesday, August 30, 2011

Queen for a Day (Part 11 of Fallacies)


Fallacies involve incorrect or invalid reasoning. Red herrings are a category of fallacy in which the response to an argument doesn’t address the argument, but rather offers a distraction from it. One class of red herrings consists of appeals to emotion, in which a given feeling is used as the evidence for or against a given proposition.

Argumentum Ad Misericordiam

“Would YOU like to be Queen for a day?”

The forerunner to today’s reality show epidemic, Queen for a Day, premiered as a radio show in 1945, moved to television in 1956, and lasted until 1964. The format involved three different women talking about financial, health, or other emotionally gripping hard times they had recently experienced, and what they most needed to deal with them: medical care, therapeutic equipment, or a major appliance. An applause meter registered the level of sympathy, and the winner had her wish granted, along with other merchandise. (The runners-up also received prizes; no one went away empty handed.)

The appeal to pity (argumentum ad misericordiam) is a red herring fallacy because it doesn’t in itself prove or disprove the proposition at hand. The contestant whose needs are greatest isn’t necessarily the one who takes home the prize; the winner is the one most able to win the audience’s sympathy.

Pleading with the teacher for a better grade because an “F” means you can’t be on the football team doesn’t mean you deserve a better grade — but that’s not to say the appeal to pity isn’t effective, or that it’s automatically wrong to make or change a decision because of pity. As dustman-philosopher Alfred P. Doolittle so artfully argues in Pygmalion:
I ask you, what am I? I'm one of the undeserving poor: that's what I am. Think of what that means to a man. It means that he's up agen middle class morality all the time. If there's anything going, and I put in for a bit of it, it's always the same story: 'You're undeserving; so you can't have it.' But my needs is as great as the most deserving widow's that ever got money out of six different charities in one week for the death of the same husband. I don't need less than a deserving man: I need more. I don't eat less hearty than him; and I drink a lot more. I want a bit of amusement, cause I'm a thinking man. I want cheerfulness and a song and a band when I feel low. Well, they charge me just the same for everything as they charge the deserving. What is middle class morality? Just an excuse for never giving me anything. Therefore, I ask you, as two gentlemen, not to play that game on me. I'm playing straight with you. I ain't pretending to be deserving. I'm undeserving; and I mean to go on being undeserving. I like it; and that's the truth.
He won’t win Queen for a Day, but it’s a fine argument nonetheless.

In the law, appeals to pity are not supposed to be made during the trial (though you can sneak one in if you camouflage it as part of another argument), but they’re completely appropriate during sentencing.

Tuesday, August 23, 2011

Ego Up Ego Down (Part 10 of Fallacies)


Within the world of red herrings, there’s a subcategory of appeals to emotion. (Red herrings are a category of fallacy in which the response to an argument doesn’t address the argument, but rather offers a distraction from it.) Some of the red herrings we’ve covered, such as an appeal to tradition or consequences, fall into this category. Unlike the ones we’ve covered already, these don’t have an official-sounding Latin title, just plain old English.

Appeal to Flattery

You, dear reader, are clearly someone whose interest in creativity, orientation toward substantial accomplishment, and bright shining intelligence I hold in awe.

You’re probably one of a very small number of people in the world capable of grasping my newest project management tool in all its ramifications. In capable hands — so terribly rare — these secrets make you unstoppable. People with only normal intelligence will surely fail.

For only twelve small payments of $19,999.99, you can be one of the select leaders of the project management community of the future. Don't you deserve it?

Call today.

Operators are standing by.

* * *

The appeal to flattery is also known as apple polishing and greasing the wheel. By inflating your ego, the arguer tries to implant the idea that if you don’t agree, it’s a sign of your stupidity, ignorance, cowardice, or some other unpleasant characteristic. Done too openly and too thick, it’s immediately transparent. Delivered more subtly, it can be difficult to resist.

Its opposite cousin, “pride and ego down,” is a formal technique used in military interrogation. Here’s the relevant section from Army Field Manual FM 2-22.3, the official guide for interrogators.

US Army Definition from FM 2-22.3 

8-45. (Interrogation) The emotional-pride and ego-down approach is based on attacking the source's ego or self-image. The source, in defending his ego, reveals information to justify or rationalize his actions. This information may be valuable in answering collection requirements or may give the [human intelligence] HUMINT collector insight into the viability of other approaches. This approach is effective with sources who have displayed weakness or feelings of inferiority. A real or imaginary deficiency voiced about the source, loyalty to his organization, or any other feature can provide a basis for this technique. 

8-46. The HUMINT collector accuses the source of weakness or implies he is unable to do a certain thing. This type of source is also prone to excuses and rationalizations, often shifting the blame to others. An example of this technique is opening the collection effort with the question, "Why did you surrender so easily when you could have escaped by crossing the nearby ford in the river?" The source is likely to provide a basis for further questions or to reveal significant information if he attempts to explain his surrender in order to vindicate himself. He may give an answer such as, "No one could cross the ford because it is mined." 

8-47. The objective is for the HUMINT collector to use the source's sense of pride by attacking his loyalty, intelligence, abilities, leadership qualities, slovenly appearance, or any other perceived weakness. This will usually goad the source into becoming defensive, and he will try to convince the HUMINT collector he is wrong. In his attempt to redeem his pride and explain his actions, the source may provide pertinent information. Possible targets for the emotional-pride and ego-down approach are the source's— 

o Loyalty. 
o Technical competence. 
o Leadership abilities. 
o Soldierly qualities. 
o Appearance. 

8-48. There is a risk associated with this approach. If the emotional-pride and ego-down approach fails, it is difficult for the HUMINT collector to recover and move to another approach without losing his credibility. Also, there is potential for application of the pride and ego approach to cross the line into humiliating and degrading treatment of the detainee. Supervisors should consider the experience level of their subordinates and determine specifically how the interrogator intends to apply the approach technique before approving the interrogation plan.

Often the two techniques are used together. An interrogator may flatter and build up the ego of a subject, only to turn around and belittle him, which often intensifies the subject’s efforts to justify and defend the behavior the interrogator wants to learn about.

Of course, none of this would ever work on you.

You’re way too smart.

Not to mention good looking.

(Moran.)