
Lecture Notes Unit 2


  1. Informal Fallacies: Introduction
  2. Fallacies of Relevance
  3. Fallacies of Weak Induction
  4. Fallacies of Presupposition
  5. Fallacies of Ambiguity and Grammatical Analogy
  6. Rethinking Fallacy Theory
  7. Cognitive Biases: An Introduction
  8. Some Example Cognitive Biases
  9. Understanding and Combating Biases
  10. Rethinking the Heuristics and Biases Model

Or jump to the unit 1 lecture notes or unit 3 lecture notes.

A. Informal Fallacies: Introduction

A fallacy is an identifiable mistake in reasoning which amounts to something more or other than simply making use of an untrue premise. An argument commits a fallacy when the reasoning it employs makes such a mistake.

Fallacies are typically divided into two categories.

A formal fallacy is the kind of logical mistake made by a deductive argument with an invalid form, or by an inductive argument which can be shown to be weak by the rules of probability theory alone.

All pandas are black and white.
All old TV shows are black and white.
Therefore, all pandas are old TV shows.

60% of doctors are men.
60% of men do not have college degrees.
Therefore, 60% of doctors do not have college degrees.
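One way to see that this statistical form is weak is to construct a population in which both premises hold but the conclusion fails badly. Here is a minimal sketch in Python; the numbers are invented purely for illustration:

```python
# Hypothetical population: 100 men and 100 women, 10 doctors total.
# Every doctor happens to hold a college degree.
men = 100
men_with_degrees = 40        # so 60 of the 100 men lack degrees
doctors_men = 6              # male doctors (all among the degree holders)
doctors_women = 4            # female doctors (also degree holders)
doctors = doctors_men + doctors_women
doctors_without_degrees = 0  # every doctor has a degree

print(doctors_men / doctors)              # 0.6 -> 60% of doctors are men
print((men - men_with_degrees) / men)     # 0.6 -> 60% of men lack degrees
print(doctors_without_degrees / doctors)  # 0.0 -> yet no doctor lacks a degree
```

Both premises are true of this population, yet 0% (not 60%) of its doctors lack degrees, so the inference fails.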

Some formal fallacies have names, e.g., “the fallacy of affirming the consequent” (If P then Q; Q; therefore, P) or “the fallacy of denying the antecedent” (If P then Q; not-P; therefore, not-Q). The panda example commits what is called an “undistributed middle” fallacy.
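Because these are formal fallacies, their invalidity can be checked mechanically, without looking at content: just search the truth table for a row where the premises are true and the conclusion false. A quick sketch in Python:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Affirming the consequent: If P then Q; Q; therefore, P.
counterexamples_ac = [(p, q) for p, q in product([True, False], repeat=2)
                      if implies(p, q) and q and not p]
print(counterexamples_ac)   # [(False, True)]: premises true, conclusion false

# Denying the antecedent: If P then Q; not-P; therefore, not-Q.
counterexamples_da = [(p, q) for p, q in product([True, False], repeat=2)
                      if implies(p, q) and (not p) and q]
print(counterexamples_da)   # [(False, True)] again: both forms are invalid
```

Finding even one such row is enough to show the form invalid, which is exactly what makes these fallacies *formal*: no facts about pandas or doctors are needed.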

An informal fallacy is a fallacy that cannot simply be classified as a formal fallacy: the kind of mistake it makes can only be recognized by considering the particular content of the argument or the context in which it arises.


How can informal fallacies exist? Do they exist? Aren’t all bad arguments bad either because of their form, or because of bad premises?

We will nonetheless spend the first half of the unit considering alleged examples of informal fallacies.

The subject dates back to Aristotle’s book Sophistical Refutations (Σοφιστικοὶ Ἔλεγχοι; also often referred to by its Latin title, De Sophisticis Elenchis), part of his Organon. It listed thirteen kinds of supposed fallacies. Since then, nearly all logic textbooks have included lists of informal fallacies, which have been modified and added to over the years.

Aristotle’s work was supposed to be aimed at refuting the “Sophists”, a collection of teachers in Ancient Greece who taught sophistry, a term which today connotes the art of being able to argue in a way that can convince anyone of anything, regardless of the actual truth of the matter or the morality of doing so.

Aristotle’s influence was greatest in the Middle Ages, and hence many of these fallacies still bear the Latin names given to them by scholars of that period.

Although this is not always made clear in introductory courses, the topic of fallacies is controversial and subject to disagreement among experts. Some would argue that it should not even be taught anymore. I do not dismiss this attitude, but my reaction in this course is to invite you to participate in this discussion.

One presupposition I make is that people are rarely wholly irrational. One should not label something as a fallacy too quickly. Even in those examples that really are fallacious, the reasoning is probably very similar to reasoning patterns that are, or could be, rational in a slightly different context.

B. Fallacies of Relevance

As the name implies, these kinds of fallacies involve reaching a conclusion on the basis of reasoning or evidence which is irrelevant to the truth of the conclusion.

1. Arguing Against the Person (Argumentum ad Hominem)

The argument against the person (ad Hominem) fallacy occurs when someone responds to a claim or argument made by someone else by attacking the person making the claim or giving the argument, or his/her motives or circumstances, rather than the claim or argument itself.

This fallacy can take different forms. The abusive form advocates rejecting a position or argument simply by insulting the person advancing it.

Zack Morris is always getting into trouble at school. He makes Principal Belding’s life a living hell. He treats Kelly Kapowski like a sex object. For these reasons, all of Zack Morris’s arguments for why Bayside should enroll more women than men are invalid and can be completely disregarded.

Here we don’t even know what the arguments are: how can we conclude that they are invalid?

Another form is the “tu quoque” (“you as well”) form which rejects an argument or position on the basis of other actions or positions of the person advancing it in an attempt to show hypocrisy or bad faith.

My mother told me never to have sex before marriage, and gave me several reasons why. But I know for a fact that she had sex before marriage—heck, she had me as a teenager, so who cares what she says?

This response ignores the actual reasons, and even the fact that the mother may know better than most after having learned from her own mistakes. And even if someone still behaves in ways that go against their own beliefs, that doesn’t mean that those beliefs are untrue. Someone might continue to smoke despite knowing full well some very good arguments to the effect that one shouldn’t smoke.

The circumstantial form of the ad Hominem fallacy involves rejecting an argument on the basis of the circumstances that tie the arguer to the conclusion, such as possible personal benefits, motives and effects of its acceptance on their situation.

Ignore everything Donald Trump says about taxes. Of course, he thinks lowering taxes on the rich will lead to growth through greater investment. He would think that. He’s one of the people who will be paying less in taxes if his tax proposal is enacted.


Whether or not Trump would himself benefit from the tax cut is not really relevant to the issue of whether or not it will lead to economic growth through increased investment.

As with the other examples we will consider, I invite you to consider how often this kind of reasoning really is fallacious, and whether or not there may be more than meets the eye.

At the very least, there are examples of reasoning similar to ad Hominem reasoning which are not fallacious. For example, if one is evaluating a person rather than an argument for some purpose, details of that person’s character and circumstances can be relevant.

Donald Trump is someone who makes untrue claims disparaging all people of certain faiths and nationalities. He refuses to condemn violence at his rallies directed at his critics. He threatens to sue or even imprison his political rivals. He treats women poorly and even brags about sexual assault. Therefore, Donald Trump does not have the moral character that would make a good President, and you should not vote for him.

Here the insults are not given as a reason not to accept an argument or claim, but are given as a reason not to support a certain person for a certain position.

Something close to tu quoque reasoning might be used to convince someone that they themselves might already be committed to something incompatible with one of their premises, which is a good way of calling that premise into question.

You claimed that male homosexual acts should be outlawed because they are prohibited by the laws of the book of Leviticus in the Bible. The implicit premise of this argument is that everything prohibited in Leviticus should be outlawed. But Leviticus 19:28 prohibits tattoos, and you clearly don’t think they should be outlawed, since you have several yourself.

(Of course, the tattoos themselves don’t prove anything. The arguer may have changed their mind, or have been unaware of the passage. Nonetheless, it may prompt him/her to rethink the implicit premise.)

The exact relationship between our assessment of people, their credibility, and our evaluation of arguments and positions that they endorse, is I think, a really complicated one. I invite you to think hard about this: is there a more charitable way of making sense of what is going on when people commit this fallacy (or others)?

Consider something that happened to me recently.

Some people I know were circulating a link to a 9/11 “Truther” article, which came out in what appeared to be a Scientific Journal published by a Scientific Institute.
Before I got around to reading it, I came across a meta-critique which pointed out, among other things, that one of the authors, although a scientist, had in the past circulated wild-sounding theories which were later debunked, and has published a number of other “unusual” articles, including one claiming that Jesus recently visited North America. I decided after this that the article was not worth my time.

Did I commit an ad Hominem fallacy?

2. Appeal to the People (Argumentum ad Populum)

The appeal to the people (ad Populum) fallacy occurs when someone advocates for a conclusion based on its popularity, the appeal of being more similar to others who believe or accept that conclusion, or by appealing to the emotional pull of being accepted, recognized or valued by others.

This can take a form that is almost the opposite of an abusive ad Hominem: one praises (or alludes to pre-existing admiration for) a certain figure or group who advocates some action or believes something, and suggests that this alone provides a reason to do likewise.

Michael Jordan is the greatest basketball player in history. You want to be like Mike, don’t you? You should buy some Gatorade today!

(As with so many of these examples, the evaluation is complicated. If the argument were specifically directed towards other athletes, and interpreted as implicitly assuming that Gatorade is responsible for Jordan’s athletic prowess, it is not necessarily fallacious. However, most likely, Gatorade meant to appeal to everyone who admired Jordan, not just fellow athletes. Here the appeal is simply one of being similar to someone you admire, fitting in, and so on.)

The fallacy can also take the form of a bandwagon argument: everyone else accepts the conclusion, so you should too!

Free will must exist. Over 90% of people who were polled claimed to believe in free will, and that many people can’t be wrong.

More subtle forms can appeal to people’s vanity, or even their desire to be seen as distinguished or exclusive (i.e., their snobbery).

Lots of people learn to play the piano or guitar. But only a select few will ever learn to play, much less master, a Didgeridoo. You should become one of the few!

All the serious Quidditch players fly a Firebolt broom. You should get one. If you get anything else, you clearly do not take the game very seriously.

As with the others, sometimes there can be borderline cases, or situations where what seems similar to this kind of fallacious reasoning isn’t really fallacious.

The Toyota Corolla is the most widely sold car in America, and Corolla owners often come back to buy again. It must be reliable and cost-efficient. You should consider a Corolla.

If you accept the tenets of our Church, you will be part of a community. That community fosters a sense of belonging and inclusion. Everyone in our congregation will do what he or she can to help you, and we would hope you would do the same for us.

Of the eye witnesses to the crime, 11 claim the perpetrator had blonde hair. Only one thinks the perpetrator had brown hair. So the person we’re looking for is probably blonde.

97% of climate scientists believe that human-caused climate change is real. Hence, it probably is.

3. Appeal to Force (Argumentum ad Baculum)

The appeal to force (ad Baculum) fallacy occurs when someone suggests, implicitly or explicitly, that a certain conclusion should be accepted because failing to accept it would lead to harm.

“Baculum” in Latin means stick or cudgel. The most blatant form of this fallacy occurs when a direct physical threat is made.

Mariah Dillard is clearly the candidate who deserves your support for City Council, and you should make a donation to her campaign. Without her protection, something bad could happen to your little shop. You wouldn’t want that, would you?

The harm could be physical, psychological, financial, or of some other kind.

Sometimes it is thought that this kind of fallacy points to a difference between two kinds of reasons for belief: pragmatic reasons versus logical (or epistemological) reasons. A good argument provides the latter sort, but I might have reasons of the former sort which have nothing to do with the truth or falsity of the conclusion. Often, one could describe a threat as providing a reason to act as if I believe something, even if I don’t really have a reason to believe it.

In practice, it is hard to find too many examples of people who take seriously the logical force of an ad Baculum argument: the apparent reasoning is more of a pretense. Arguably, this is a possible exception:

If you don’t believe in God, you risk the possibility of burning in Hell forever. You don’t want to take that risk. You should believe in God.

Related to the ad Baculum, but on the other side of the spectrum, is wishful thinking. (Hurley does not discuss this.)

The wishful thinking fallacy occurs when a conclusion is accepted as true either because of the benefits of believing it or because the circumstances that would make the conclusion true are desirable.

The local police cannot be guilty of racial profiling. If they are guilty of that, then we cannot trust them, and the security of our entire community would be in jeopardy, and we would be forced to live in fear.

Our next category is similar.

4. Appeal to Pity (Argumentum ad Misericordiam)

The appeal to pity (ad Misericordiam) fallacy occurs when pity is given as a reason to accept or reject a conclusion, on the grounds that accepting or rejecting it might lead to bad consequences for those pitied.

I must qualify for an extension on my homework. If points are taken off for lateness, I might fail the course, and without this course, I will not graduate. I am a good person, and I have worked hard to graduate.

Of course, this is not to say that pity or compassion themselves are a kind of fallacy. It’s fine for them to provide motives for action, or for treating people a certain way. However, that is different from not reaching a certain conclusion because it would be inconvenient for someone if you had a certain belief.

5. Straw Figure Fallacy

The straw figure fallacy (AKA “straw man fallacy”) occurs when the argument given by someone else is distorted, over-simplified, or interpreted uncharitably, in order to make it easier to refute or dismiss.

Pro-life activists have claimed that women should not be allowed to have abortions. Clearly, they believe it should always be up to the government to decide what women do with their bodies. But women should be in control of their own bodies, and so the pro-life position is misguided.

Opponents of capital punishment believe that one should never kill someone, even if they’re guilty of a crime, or about to commit a crime. But obviously, it is OK to kill someone in self-defense, so their objection to the death penalty makes no sense.

Gary saw me wearing sunglasses to the casino the other day, and saw me wearing them to the racetrack today. He then told my wife that he thinks I am a gambler. He must think everyone who wears sunglasses gambles – what a moron!

6. Red Herrings

The red herring fallacy occurs when an unrelated or different topic is raised to deflect attention away from the argument or issue under discussion.

The label comes from the use of bags of herrings (fish) dragged on the ground to lead dogs tracking by scent astray.

Anderson Cooper: You called what you said locker room banter. You described kissing women without consent, grabbing their genitals. That is sexual assault. You bragged that you have sexually assaulted women. Do you understand that?

Donald Trump: No, I didn’t say that at all. I don’t think you understood what was — this was locker room talk. I’m not proud of it. I apologize to my family. I apologize to the American people. Certainly I’m not proud of it. But this is locker room talk.

You know, when we have a world where you have ISIS chopping off heads, where you have — and, frankly, drowning people in steel cages, where you have wars and horrible, horrible sights all over, where you have so many bad things happening, this is like medieval times. We haven’t seen anything like this, the carnage all over the world.

It isn’t exactly clear, however, that this is really a mistake in reasoning. The only mistake would be if someone concluded that the original question had been answered, or issue settled, or something along those lines.

7. Fallacy of Accident (Argumentum ad Dictum Secundum Quid)

The fallacy of accident occurs when a general claim or rule is applied to a situation or circumstance to which the rule was not meant to apply, or which is an exception to it.

People who stab and cut people open intentionally are criminals. Surgeons are people who stab and cut people open intentionally. Thus, surgeons are criminals.

You shouldn’t wear white after Labor Day. You’d better lose those casts and that hospital gown!

8. Missing the Point (Ignoratio Elenchi)

The fallacy of missing the point occurs when certain premises are put forth, and there is a natural conclusion to draw from them, but a different conclusion, one not fully supported by the premises, is drawn instead.

The Ivy League Universities have for too long privileged children of alumni in their admissions decisions. This is unfair to those qualified applicants without family connections, and artificially keeps these schools’ student bodies insular and homogeneous. Therefore, they should ban all children of alumni from even applying.

(A more appropriate conclusion would have been: their admissions offices should take steps to prevent children of alumni from having an unfair advantage.)

C. Fallacies of Weak Induction

In this category we include inductive styles of reasoning that, although similar to strong inductive arguments in certain ways, have features that make them weak.

The list of fallacies largely coincides with the common categories of inductive arguments discussed in our first unit.

1. Appeal to Unqualified Authority (Argumentum ad Verecundiam)

The appeal to unqualified authority occurs when an argument’s premises involve nothing more than the authority or testimony of an individual who is not a reliable or credible source of information with regard to the conclusion.

My dentist claims that the US government is hiding evidence of several contacts they’ve had with extra-terrestrials. Hence, aliens must really have visited Earth.


Not all arguments relying on testimony or evidence are weak. Some things to watch for:

Labeling an argument as committing this fallacy may itself seem like an ad Hominem fallacy, but note that rejecting an argument because its source is unreliable is not the same as rejecting the conclusion of the argument.

Fallacious arguments can have true conclusions, which is important for our next topic.

2. Appeal to Ignorance (Argumentum ad Ignorantiam)

The appeal to ignorance fallacy is committed when an argument concludes that something must be true just because it hasn’t been established as false, or concludes that something is false just because it hasn’t been shown true.

People have been trying to prove that extra-terrestrial life exists for centuries, but no one ever has. Therefore, there is no life anywhere else in the universe.

The fallacy is only committed when there isn’t reason to think that the evidence for something would have been found by the investigation methods used if the claim in question were true.

I went into my bathroom, looked everywhere, and couldn’t find any evidence that there was a Rhinoceros in it. Therefore, there was no Rhinoceros in my bathroom.

3. Hasty Generalization (Converse Accident)

The hasty generalization fallacy occurs when a (hard or soft) generalization is concluded on the basis of a sample of instances of the generalization which is either too small or too unrepresentative to support the generalization.

I ate three Haribo gummi bears I found under the floorboards in my dorm room closet. All three tasted terrible. Probably, all Haribo gummi bears taste terrible.

We asked 15 people at the Amish settlement how many of them would consider buying the next iPhone, and they all said no. It seems clear that almost no one will buy the new iPhone.

Not all arguments for generalizations are weak, and it isn’t just a matter of the sample size. It has a lot to do with whether the sample was chosen in a non-random way, and whether or not other distorting factors are ruled out.
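The role of sample size can be illustrated with a toy simulation; the 50% figure, the sample sizes, and the 0.2 error threshold are invented purely for illustration. Tiny random samples frequently give wildly unrepresentative estimates of a population, while moderately large ones almost never do:

```python
import random

random.seed(0)  # reproducible runs

# A population of 10,000 where exactly half have some property (1 = has it).
population = [1] * 5000 + [0] * 5000

def sample_estimate(n):
    """Estimate the population proportion from a random sample of size n."""
    return sum(random.sample(population, n)) / n

def miss_rate(n, trials=1000):
    """How often a size-n sample's estimate misses the true 0.5 by more than 0.2."""
    return sum(abs(sample_estimate(n) - 0.5) > 0.2
               for _ in range(trials)) / trials

print(miss_rate(3))    # samples of 3 miss badly roughly a quarter of the time
print(miss_rate(100))  # samples of 100 almost never do
```

Note that even a large sample can mislead if it is chosen non-randomly (as in the Amish example above), which is why representativeness matters independently of size.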

We’ll talk more about related issues when we discuss scientific reasoning in our final unit.

4. False Cause

The false cause fallacy is committed when the reasoning behind an argument involves the presupposition that one thing is the cause of another, when there is no good reason to think so.

NBA teams that have the highest paid players tend to win more than NBA teams with lower paid players. If the Celtics want to win next year, they should just raise their current players’ pay across the board.

I bought a house last year. Right after I signed the mortgage, interest rates went down. I’m sure if I take out a car loan next week, interest rates will go down again.

This roulette wheel has landed on black the last 4 times. Next time it’s sure to land on red.
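The roulette example is the classic gambler’s fallacy: independent spins have no memory. A quick simulation can check this; as a simplifying assumption, the sketch below models a fair two-color wheel (ignoring the green zero) and looks at what follows a streak of four blacks:

```python
import random

random.seed(1)  # reproducible runs

# Simulate a simplified fair wheel: 0 = black, 1 = red (no green zero).
spins = [random.randint(0, 1) for _ in range(200_000)]

# Collect the outcome following every run of four consecutive blacks.
after_streak = [spins[i + 4] for i in range(len(spins) - 4)
                if spins[i:i + 4] == [0, 0, 0, 0]]

print(len(after_streak))                      # thousands of four-black streaks occur
print(sum(after_streak) / len(after_streak))  # ~0.5: red is no more likely than before
```

The proportion of reds after a streak stays near one half: past spins give no information about the next one, which is exactly why the inference in the example is weak.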

Some reasons that lead people to think there is a causal link when there isn’t:


5. Slippery Slope

(The slippery slope fallacy can also be considered a kind of a false cause fallacy.)

The slippery slope fallacy occurs when it is claimed that one thing will lead to a series or chain of other events when there is not strong evidence to suppose that the series would in fact take place.

If gay people are allowed to marry, then polyamorous marriage and in-family marriages cannot be far behind. After that, people will be able to marry animals or even objects. The entire institution of marriage will lose all meaning. We had better not legalize gay marriage.

Of course, some slopes are slippery, and some chain reactions do happen, so not all reasoning involving them is fallacious.

6. Weak Analogy

The fallacy of weak analogy occurs when a conclusion is reached based on an analogy or similarity between two things, but the analogy or similarity is not close enough to support the conclusion.

The Samsung Galaxy and Google Nexus cell phones are very similar. Both come with rear and front facing cameras, 5G and wifi internet connections, high resolution displays and an Android-based operating system. Therefore, they probably use the same default background image as well.

The basic form of an analogy is that X and Y share features A, B, C, etc.; hence, they probably share feature D as well. This is only likely to be strong either if there is a connection between features A, B, C, and D, or if there is a common explanation for why X and Y share those features which would also explain why they would likely share D.

D. Fallacies of Presupposition

These fallacies have in common that they involve reasoning or argumentation where an unwarranted or unjustified assumption or presumption is being made.

1. False Dichotomy

The false dichotomy fallacy is committed when the premises of the argument, explicitly or implicitly, involve the assumption of a stark either/or statement where it is likely that neither alternative is true, or a third alternative is possible.


This fallacy is sometimes also called the “false dilemma fallacy”, “false disjunction fallacy”, “false bifurcation fallacy” or “either/or fallacy”.

It is not altogether clear that this should even count as a fallacy under our original definition of that term, since it seems merely to involve using a false premise. However, such dichotomized thinking is a common and recurrent problem in reasoning in general. This fallacy is also exploited by advertisers or fast-talkers to lead people to conclusions they might not otherwise accept. Therefore, it deserves its own name.

We must either declare war on ISIS and fully deploy ground troops, or do nothing and let ISIS take over the Middle East. Clearly, we cannot simply do nothing and let ISIS continue its expansion, and so we should declare war now.

Sometimes the unjustified either/or premise is left implicit.

You clearly have no interest at all in helping your fellow human beings. After all, you did not donate any money to the Salvation Army this month.

2. Complex Question Fallacy

The complex question fallacy is committed when a reaction to a question (or other query or prompt) which involves multiple parts or presuppositions is misinterpreted as reacting to or accepting more (or different) parts or presuppositions in the query than was intended.

This fallacy can be avoided by making presuppositions explicit and asking each part of a question separately.

I asked you whether it was Wikipedia that you copied your essay from. You said “no”, which means you must have copied it from somewhere else!

[The fallacy would not be possible had I asked instead: (1) Did you copy your essay? (2) If so, did you copy it from Wikipedia or somewhere else?]

3. Suppressed Evidence Fallacy

We saw in unit 1 that (otherwise) cogent inductive arguments can be defeated: that is, they can be made worse by the addition of relevant premises.

The suppressed evidence fallacy is committed when available additional information or premises that would weaken the force of an argument are suppressed, left out, or ignored.

Musa should be named Library Employee of the Month. When the fire broke out in the historical archive room, Musa was the first on the scene. He called the Fire Department right away, and even before they arrived, was able to save some of our most valuable documents. He is a hero.
[Suppressed information: Musa started the fire when he was smoking in the archive room against policy.]

Surgical procedures are usually tax deductible. This deduction claimed here represents the cost of a surgical procedure, so it is allowed. [Suppressed information: it was cosmetic surgery for your cat.]

This falls under fallacies of presupposition since in evaluating an inductive argument we typically take it as a presupposition that there is no additional relevant information.

This fallacy is quite common, and quite often used to deceive and manipulate, in commercial contexts, political contexts, and beyond.

As a lawyer, Hillary Clinton helped a man she herself later admitted was probably guilty of child rape go free when she represented him at his trial in 1975. She clearly has no concern for the welfare of children and the need to hold pedophiles accountable! [Suppressed evidence: Clinton was assigned the case by a judge and her request not to handle the case was denied. As the legal representative of a client, she is bound to serve that client’s interests to the best of her ability.]

4. Begging the Question (Petitio Principii)

An argument begs the question when the reasoning it uses is circular, or if it uses or presupposes a premise which one has little rational reason to accept without already accepting the conclusion.


(This phrase is now commonly used to mean something else; don’t be misled by this.) The adjectival version is “question-begging”.

There are two common forms.

In one, a crucial premise is left out, and making that premise explicit makes it obvious that it is unlikely to be accepted by those who doubt the truth of the conclusion.

In the other, there is a chain of sub-arguments forming a loop. (If you drew an argument map, it would actually go in a circle.)

The inheritance tax should be eliminated, because all theft is and should be illegal.
[The implicit premise here is that inheritance tax is a form of theft, which is unlikely to be accepted by a proponent of it.]

Clearly God does not exist. If an all-powerful, all-loving being existed, we would see the influence of its power and love throughout all of nature. Sure, some theists believe God’s power and love show themselves in the natural world. But clearly the wonders of nature are all the result of natural evolutionary processes which God had no part in: after all, there is good reason to think there is no God!


What, really, is the difference between good reasoning and begging the question? What makes begging the question fallacious?

At first, it seems obvious that circular reasoning is defective. But consider other natural ways of defining it: an argument begs the question when the truth of its premises contains, or requires, the truth of the conclusion or it would be impossible for the premises to be true without the conclusion also being true.

But that is precisely the definition of a valid (deductive) argument!

Consider this rather egregious instance of begging the question:

The US has the most demolition derbies of any country in the world. Therefore, the US has the most demolition derbies of any country in the world.

The premise is true. (I think.) The form …

P.
Therefore, P.

… is truth-preserving: it’s not possible for the premise to be true while the conclusion is false. Hence the argument is valid. It’s both valid and factually correct, so the argument is sound!


Is the problem with begging the question epistemological?

Tempting suggestion: An argument or process of reasoning can only produce knowledge if the premises are already known. In circular reasoning, it would be impossible to know the premises without already knowing the conclusion.

Worry: Wouldn’t this lead to an infinite regress? How could someone ever know anything?


Is the problem with begging the question rhetorical or dialogical?

Tempting suggestion: In an actual dialogue or interchange between people, circular reasoning can never be convincing: someone will be convinced of the conclusion only given prior acceptance of the premises.

Worry: Isn’t circular reasoning fallacious even outside of the context of a dialogue or conversation?

What else could explain the difference between valid reasoning and question-begging reasoning?

E. Fallacies of Ambiguity and Grammatical Analogy

A reminder from our last unit: a piece of language is ambiguous when it has more than one possible meaning. Fallacies of ambiguity are those that arise when ambiguities in an argument are not eliminated.

1. Equivocation

The equivocation fallacy occurs when the reasoning behind an argument is unsound because the same ambiguous word or phrase is used with two different meanings.


A word whose meaning is unclear in a certain context due to ambiguity is said to be equivocal. The opposite of equivocal is univocal. The verb form is “to equivocate”.

I gave an example like this in unit 1.

The Parks Department installed a baseball diamond. Diamonds are a kind of jewel. Hence, the Parks Department installed a jewel.

That argument is kind of silly. It’s hard to imagine anyone making such an obvious mistake. However, fallacies of this sort do occur. Consider:

Many scientists advance the theory of evolution. But people only propose theories when they don’t know the truth and are just trying to speculate about what might be true. It follows that the theory of evolution is mere speculation, and not something scientists have any solid evidence to back up.


This argument equivocates on the word “theory”.

It‘s true that in ordinary conversations we sometimes speak of untested speculations as “theories”. But in science, “theories” are merely distinguished from data: theories are whatever is used to explain the data as opposed to the data themselves. These are still called theories no matter how much confirmation the data provide for them, or whether or not any rival theories are still being sought. In that sense, there is also a theory of gravity.

Sometimes a phrase made up of multiple words has different interpretations.

My toddler is just learning to play basketball, and is really excited about it. He’s a good boy: he does all his chores and is always kind and respectful. To sum up: he plays basketball, and he’s good. Therefore, he’s a good basketball player.

[If “good basketball player” is interpreted to mean someone good at playing basketball in the conclusion rather than someone who both happens to be a good person and happens to play basketball, then we have an equivocation fallacy.]

2. Amphiboly

The amphiboly fallacy occurs when the reasoning behind an argument is unsound because a statement which has an ambiguous grammatical form is misinterpreted.

Here the problem is not with any individual word or phrase, but with the way they are combined in the sentence, which allows for different readings.

Jones pleaded guilty to reckless driving in the courthouse yesterday. Anyone who drives recklessly inside a building should have their license taken away immediately! Someone take away Jones's license!

Officer Flynn said she saw an act of sexual intercourse taking place between two parked cars in the parking lot. Apparently, Fords and Chryslers have learned to copulate.

All the girls at the teenage pregnancy support group said they had unprotected sex with someone. Well, that someone sure has a lot of explaining to do!
[The grammar of the sentence allows us to interpret this as someone in particular where what is meant is each had sex with someone or other. This is arguably on the border between equivocation and amphiboly.]

One of the reasons logicians like to invent special symbolic logical languages is that these languages are structured in a way to eliminate grammatical ambiguity.

Sometimes if one is limited to everyday language, it can be hard even to describe the ambiguity in a non-ambiguous way.

What do you think of the following argument? (It is adapted from the writings of 18th century philosopher George Berkeley.)

It is impossible for anything to exist independently from a mind. If something cannot be conceived of without contradiction, then it is impossible. It is impossible to conceive of something existing independently from any mind: that is, independently of being conceived of or perceived. Just try to imagine a tree that exists independently of any conceivers or perceivers: you cannot. If you think of a tree out on the quad when no one is there, there is still a mind conceiving of it, or thinking about it: yours!

Philosophers disagree about what, if anything, is wrong with this argument. But I think it commits an amphiboly fallacy.

I find it difficult to put into ordinary English, but I’ll try.

It is the difference between:

  1. It is impossible that: there is something, x, such that: I conceive of x and no one conceives of x. (True)
    In symbolic logic: ¬◇(∃x)(Ckx ∧ ¬(∃y)Cyx)


  2. It is impossible that: I conceive that: there is something x such that: no one conceives of x. (False)
    In symbolic logic: ¬◇Ck⌜(∃x)¬(∃y)Cyx⌝

I don’t expect you to understand the logical formulas—just a sales pitch for our symbolic logic courses if you find this interesting.

3. Fallacies of Grammatical Analogy: The Core Idea


We have two more fallacies to cover, but before doing so we first need to examine a particular kind of confusion or ambiguity.

When we talk about complex wholes of parts, or when we talk about collections, there's a difference between a generalization which makes a claim about all or most of the members/parts of the collection or whole, and a statement about the collection or whole itself.

Unfortunately the grammar of these kinds of statements is often very similar or analogous.

1. Deer have four legs.
2. Deer live in every state of the US.

The first sentence (1.) is a generalization about the individual deer. They are what have four legs. The entire collection of deer has a lot more than that. It would be clearer if a word like “all” or “every” or “most” were included.

The second (2.) is about the collection. No one deer lives in all 50 states.

A statement makes a distributive predication if it makes a claim about all or some of the individual members or parts of a group or whole.

A statement makes a collective predication if it makes a claim about a group or whole itself.

Indeed, the analogy can be so close that there can appear to be contradictions!

1. The even numbers are finite.
2. The even numbers are infinite.

Strange as it sounds, these are both true, when properly interpreted. Each individual even number is finite. However, there are infinitely many even numbers, and so the collection of them is infinitely large.

In other words, if we interpret 1 as distributive, but 2 as collective, both are true. (Not otherwise.)

Our last two fallacies involve confusions based on this kind of grammatical analogy.

4. Fallacy of Composition

The fallacy of composition occurs when an attribute or characteristic of the parts of a whole or the members of a group is wrongly thought to apply to the whole or group collectively.

Tigers eat a lot more meat per day than humans. Hence, much more of the meat produced around the globe annually goes to feeding tigers than to feeding humans.

The weight limit for this elevator is 1000 pounds. Thankfully, no one here weighs nearly that much. So it should be safe for all 30 of us to ride it at once.

Every prime number is finite. Hence, there are only finitely many prime numbers.
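The contrast behind these examples, and behind the elevator example in particular, can be mirrored in code. Here is a loose sketch in Python (the analogy and the passenger weights are my own illustration, not part of standard fallacy theory): a distributive predication quantifies over the members, while a collective predication applies to the collection itself.

```python
# Hypothetical passenger weights in pounds (illustrative numbers only).
weights = [400, 380, 390]

# Distributive predication: EACH passenger is under the 1000-pound limit.
each_under_limit = all(w < 1000 for w in weights)

# Collective predication: the GROUP as a whole is under the limit.
group_under_limit = sum(weights) < 1000

# The composition fallacy infers the second from the first; here the
# inference fails: every individual is under the limit, but the group
# (1170 pounds total) is not.
print(each_under_limit, group_under_limit)  # → True False
```

The point of the sketch is just that the two predications are computed in structurally different ways, so the truth of one never guarantees the truth of the other.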


This fallacy is not the same as hasty generalization. The conclusions of hasty generalizations involve distributive predications where there isn’t adequate reason to think the attribute is distributed widely enough to make the generalization true.

On the other hand, the conclusion of a composition fallacy is a collective predication, and may be false even if the attribute in question is distributed to absolutely everything in the collection, as in our examples.

There are certain attributes or characteristics that do automatically apply collectively if they apply distributively. For these, the inference is not fallacious. (One can take this fact to be an implicit premise.)

Each one of my baseball cards is in my basement. Therefore, my baseball card collection is in my basement.

5. Fallacy of Division

The fallacy of division occurs when an attribute or characteristic of a whole or of a group is wrongly thought to apply to the parts of the whole or members of the group.

This can be thought of as the reverse of composition.

The person the American citizens vote for on November 8th will become the next President. Cordelia is an American citizen. Hence, the person she votes for on November 8th will become the next President.

People living in cities make more money in an average year than people living in rural areas. Therefore, my friend George, who is a barista in New York City, must make more money in an average year than Zach Galifianakis, who owns a farm in rural North Carolina.

This fallacy should not be confused with the fallacy of accident, for the same reasons the composition fallacy should not be confused with hasty generalization.

F. Rethinking Fallacy Theory

There is currently a debate about whether or not traditional fallacy theory (going back to Aristotle’s work) should continue to be taught in logic or critical thinking courses, and if so, whether or not it needs to change or be presented very differently.

1. Is fallacy theory necessary? Is it redundant?


Earlier, we saw that it’s not even clear what an informal fallacy is: isn’t learning the difference between valid and invalid forms (for deductive logic), the proper use of probability theory (for inductive logic), and techniques for criticizing and evaluating individual premises all that is needed?

In response to this: if these represent common patterns of bad reasoning, knowing about them might help someone recognize them in their own inferences or others’ arguments.

But how common are these really?

People who stab and cut people open intentionally are criminals. Surgeons are people who stab and cut people open intentionally. Thus, surgeons are criminals.

The Parks Department installed a baseball diamond. Diamonds are a kind of jewel. Hence, the Parks Department installed a jewel.

[Does anyone actually make mistakes like this?]

2. Are the things identified in fallacies of relevance always really completely irrelevant?

Does it depend on the whole reasoning context, and what is at stake there? Might it be involved with assessing things at a different level?

Zack Morris is always getting into trouble at school. He makes Principal Belding’s life a living hell. He treats Kelly Kapowski like a sex object. For these reasons, all of Zack Morris’s arguments for why Bayside should enroll more women than men are invalid and can be completely disregarded.

I made up this example to illustrate abusive ad Hominem reasoning when I first taught logic in 1997. At the time, I thought it was a clear-cut fallacy. Now I’m not so sure. If Zack is a chauvinist who treats women badly, it is not hard to imagine him giving spurious, half-baked arguments that might result in his being surrounded by more women he can harass. What are the chances he’d consider the question objectively and seriously?


But isn’t that still ad Hominem circumstantial then? How can we evaluate his arguments without knowing what they are? True, only a non-truth-preserving form can make them invalid; only false premises can make them factually incorrect, etc. But if you consider the context in which the above passage might be said, maybe the practical conclusion is that Zack and others like him are prone to giving arguments like that, which may well be true.

Consider self-criticism like this:

My math professor asked us to prove the Lindenbaum Extension Lemma as homework before class today. I finished my proof at 4am last night. But I bet it is no good: I was so tired I couldn’t think straight!

Only a mistake in the logic or math behind the lemma could make your proof no good. But if you know from experience that you (and others) make mistakes in logic or math when you’re tired, this might still be a decent inductive argument.

Sometimes we’re reasoning about reasoning. Sometimes it’s splitting hairs to distinguish between the conclusion that something is false, and that it is not rational, which can affect the evaluation of whether or not these kinds of arguments are fallacious.

People believe in God because they psychologically long for a father figure. Therefore, their belief is probably false.

[As it stands, this is fallacious, but if the conclusion were instead “their belief is irrational”, or “their belief is not caused by factors requiring its truth”, it wouldn’t obviously be. Maybe in the context in which this is given, someone was arguing that God’s actions are the cause of belief, and this undermines that argument, etc. Out of context, labeling this as fallacious does seem a bit quick.]

Apple is the only tech company hiring in my area, but they don’t like to hire people who prefer Android to iOS. I should get myself to like iOS better.
[Is this really an appeal to force, or should the conclusion be interpreted differently?]

3. Is there any point to labeling weak inductive arguments as fallacies?

Some appeals to authority are cogent. Some analogies are strong. Isn’t it enough just to discuss what makes the good examples good? Wouldn’t that by itself be enough to make it obvious when those good features are not found?

Do we run the risk, for example, of making people too suspicious of trusting the words of others by teaching the “appeal to unqualified authority” fallacy?

U2 lead singer Bono has pointed out that many Third World countries are suffering greatly just from the need to make even minimal interest payments on the vast debts they owe to countries like the US and Great Britain. Bono has advocated that industrialized nations consider wholesale debt relief and cancellation for these countries. We should all support Bono’s cause.

[Bono’s “first job” is not a Third World economist. But does this mean he can’t have any reliable knowledge on this subject?]

Of course our reaction to such an argument needn’t be either immediate acceptance, or immediate rejection. In the case of inductive arguments especially, it must always be remembered that strength comes in degrees.

4. Does thinking in terms of fallacies encourage us to see mistakes in things that aren’t even meant as arguments?

With Hillary Clinton as our President, we’ll be strong and united! We’ll be stronger together, just like her motto says. Vote Clinton/Kaine on November 8th!

[Viewed in the light of fallacy theory, this seems either to beg the question, or represents at best some kind of ad populum appeal to emotions. But is it even meant to be an argument? Maybe it’s just meant as inspiration, or to provide emotional motivation or support.]

5. Does focusing on fallacious reasoning make us too uncharitable? Too quick to dismiss what might be right about an argument or in a train of thought?

Many of the examples given when teaching fallacies are over-simplified artificial examples.

These can distort how we approach arguments similar to them but more sophisticated.

Every law is created by a law-maker or law-giver. The laws of nature are laws, and so they must have had a law-maker or law-giver, who could only be God. Therefore, God exists.


I’ve seen this or similar examples used to illustrate the equivocation fallacy in more than one textbook. The idea is that the word “law” means something different when one thinks of laws passed by a legislative branch of government and when one thinks of laws of nature.

But this is too simplistic a reading; too uncharitable. It is itself a straw figure fallacy. If someone is tempted to give an argument along these lines, they know full well there are different kinds of laws. They are making an analogy, or suggesting that nothing like the laws of nature could come into being by accident, but only through a choice made by an intelligent being. They may be wrong about that, the analogy may not be strong enough, etc. There may even be some amount of begging the question here, but I hardly think this is a matter of equivocation.

6. How can the teaching of fallacies be improved? What information would it be useful to know?


Some thoughts:

I do think studying fallacies can be valuable to an extent. If nothing else, when an interpretation makes someone’s reasoning come out fallacious, one could take that as evidence that one has misunderstood them. Compare:

“That’s ad Hominem, you dolt!”

“I’m sorry. Maybe I didn’t understand what you meant. Surely, you aren’t rejecting this argument merely because you dislike the person giving it. Can you clarify?”

Have you found studying fallacies worthwhile?

G. Cognitive Biases: An Introduction

1. Relationship to Fallacy Theory


The study of cognitive biases examines something very similar to fallacies; indeed, things like “the gambler’s fallacy”, “wishful thinking”, “hasty generalization/stereotyping”, or “reliance on unqualified authority” are sometimes classified as cognitive biases as well.

It comes, however, from a different tradition: empirical psychology rather than logic or philosophy.

2. Cognitive Biases Defined

A cognitive bias is an automatic, unconscious, influence on cognition or thinking which can regularly lead to errors or deviations from ideal rationality.


Sometimes these are called implicit biases, but they are implicit in a stronger sense than what is involved with “implicit premises”; those susceptible to them are typically not aware of them at all.

We are not here talking about conscious or explicit biases, which can also occur.

Outside of academia, the word strongly connotes a prejudice against certain groups of people: racial bias, gender bias, ageism, etc. The way we use this word includes these biases, but is broader.

3. Heuristics

There are different theories about how to explain or understand the causes of cognitive biases. Perhaps the most influential explanation involves heuristics.

A heuristic is a cognitive strategy, or shortcut, which is used to make information processing or another reasoning task simpler or more efficient to complete.


Like other shortcuts, heuristics can lead us to miss important things, or expose us to dangers, like oversimplification or distortion of the real issues.

Why do we make use of heuristics?

Consider especially the situations people (and the animals we evolved from) were in much of the time: decisions were often life-or-death.

It is safer to conclude that a plant is poisonous if it makes you sick once than it is to carry out a controlled experiment.

4. Fast and Slow Thinking


This way of thinking comes from Daniel Kahneman’s Thinking, Fast and Slow; Kahneman is one of the pioneers of the study of cognitive biases, and a past winner of the Nobel Prize in economics.

System 1 or fast thinking employs quick, immediate reactions based on processes that are mostly unconscious and difficult to verbally explain. This includes our “gut reactions” and “intuitions”.

System 2 or slow thinking involves conscious, deliberate consideration of the factors and possibilities involved with an issue, and actively considering each separately. This could be called “working through” the question or problem.

I suspect this way of characterizing things is itself an oversimplification or heuristic. (I’m not even sure Kahneman would deny this.) Nevertheless, let us consider this further.


Look at the girl on the right. She’s clearly upset. Your knowledge that she is comes from System 1 thinking. You likely did not go through an “argument” in your mind such as: Her face is scrunched. Her arms are crossed. Most people who have both scrunched faces and crossed arms are upset. Therefore, she is upset.

Perhaps you would offer that argument as an explanation if you were asked about it, but that would clearly be a rationalization of something you already believed automatically and without argument.

17 × 24 − 10 = ?

Contrast that with the math problem above. Unless you’re a math savant, you cannot answer this with System 1 thinking. (You would, however, e.g., with “2 + 2 = ?”.) You must carefully work at it, perhaps by doing long multiplication on paper. (Or give up and use a calculator.)


Assuming you went through the calculation yourself, that calculation is the reason you believe that the answer is 398. It’s not some kind of rationalization.

Both systems are required for most tasks of any complexity. Becoming good at chess, for example, requires both.

Kahneman believes that cognitive biases result from sub-optimal interplay between these systems, especially over-reliance on System 1 and neglect of System 2.

H. Some Example Cognitive Biases

Lists of cognitive biases often have hundreds of items. It is controversial whether or not there are really that many; perhaps some of them are really different results of the same underlying mechanism as others. I’m choosing a group I find to be representative of the whole, and interesting, but my own selections may even be biased.

1. Anchoring effect

The anchoring effect is the tendency to rely too much on, or weigh too heavily, the first piece of information suggested as relevant to a decision or judgment, especially when it takes the form of a numeric value.

Two groups of people were asked to estimate the age Mahatma Gandhi was when he died. The first group was first asked whether it was before or after age 9. The second group was first asked whether it was before or after age 140. (Both of these questions are easily answered and shouldn’t have influenced anything.)

The first group guessed 50 on average. The second group guessed 67 on average. (Strack and Mussweiler 1997)

In your homework set at the beginning of this unit, I performed a similar experiment. I asked some of you to estimate Gandhi’s age after being told it was either before or after 40, and I asked some of you to estimate Gandhi’s age after being told it was either before or after 90. Those in the first group gave an average of 63; those in the second group gave an average answer of 82.

Metaphorically, the first piece of information is an “anchor” on one’s reasoning, which people find it difficult to move too far away from.

In the Gandhi case, a charitable explanation might be that the initial question implied that there was a dispute about his age at his death, and this is misinterpreted as providing information about others’ beliefs which cannot be dismissed.

However, the effect has been found even in situations where no such charitable interpretation is possible.

Participants were asked first to spin a roulette wheel, which (unbeknownst to them) always landed on either 10 or 65. They were then asked to estimate the percentage of African nations in the United Nations.

Those who spun 10 gave answers 25% lower on average than those who spun 65. (Tversky and Kahneman 1974)

Practical consequences


Despite the advice of some, when negotiating a salary or house payment, those making the first offer have an advantage.

Similarly, when making a purchase, consider what you might have paid for it had you known nothing about the initial asking price. Merchants will mark things up then offer “big discounts” on the sticker price; don’t let the sticker price be an anchor for you.

2. Availability bias

Availability bias is the tendency to estimate the probability of something or the frequency of an event based on how easy it is to recall or remember instances.

Participants were asked whether a random word taken from English was more likely to have a “k” as its first letter, or a “k” as its third letter. Participants answered that it was more likely to have “k” as a first letter, after coming up with more examples of such words. In fact, there are three times as many words with “k” as the third letter, although it takes more cognitive effort to think of examples. (Tversky and Kahneman 1973)

(When repeated in this class, 60% estimated that words with “k” as their first letter are more common. It has been slightly higher in past years.)

Studies show that people routinely overestimate the likelihood of unusual events which are commonly and routinely covered in media reports as opposed to those that are not (homicides, shark attacks, lightning strikes vs. common diseases, workplace accidents). (E.g., Brinol, Petty and Tormala 2006.)

Practical consequences

This is especially dangerous when an affect heuristic, one that involves appeal to current emotions, is involved. Things that provoke emotions, or are vivid or sensationalistic, stick better in our memories. We can also more easily remember things right after we are reminded of examples of them. This bias can therefore lead us to overestimate the likelihood of such things and put too much emphasis on them.

Some participants on an airplane were asked how much they would pay for insurance covering “death due to terrorist attack”. Others were asked about insurance covering “death for any reason”. The first group was willing to pay more on average, despite the fact that the insurance they would be getting is much less valuable. (Kahneman 2006)

When I taught this course three years ago, right before the election, I asked half the students how much they’d pay for $10,000 coverage for property loss due to post-election rioting and looting. I asked the other half about the same amount of coverage for property loss for any reason. Those in the first group were willing to pay about 3× more on average than those in the second group, despite the fact that the first kind of insurance is less valuable.

This year, I asked half the course about insurance covering up to $10,000 of losses from rioting and vandalism related to the 2020 election, and half the course about insurance for up to $10,000 of any kind of property loss. The first group gave answers about $200 higher on average.

This is perhaps in part why some politicians like to scare us with talk of terrorism, financial collapse, and so on. They are influencing what kinds of images are most easily “available” to our minds when considering outcomes.

It also helps explain why people tend to find anecdotal evidence more compelling than statistical evidence.

3. Confirmation bias

Confirmation bias (also called myside bias) is the tendency to accept or weigh more heavily evidence which confirms one’s prior beliefs and/or dismiss or weigh less heavily evidence against those beliefs.

This bias is likely familiar to anyone who has ever had a political or religious discussion with anyone for any reason. It is related to wishful thinking and the notion of “self-fulfilling prophecy”.

During the 2004 election, participants were shown apparently contradictory statements, and were told they were made either by (Republican candidate) George W. Bush, by (Democratic candidate) John Kerry, or by another, non-political figure. They were then asked if the statements were reasonable. Democratic-leaning participants were much more likely to judge them reasonable if told they were made by Kerry, and much less likely if told they came from Bush. The opposite reaction was observed for Republican-leaning participants. (Westen, Blagov, Harenski, Kilts, and Hamann 2006)

Participants were asked to consider and evaluate a number of research projects involving the efficacy of the death penalty/capital punishment. Those who had been in favor of the death penalty before the study were much more likely to criticize those studies which showed little efficacy, and endorse studies showing large efficacy. Those opposed to the death penalty were much more likely to do the opposite. (Baron 2000)

In this course, I first asked you to tell me your prior attitude about imposing stricter gun control laws, and then asked you how likely you thought it was that a certain controversial published study in favor of the efficacy of such restrictions used sound scientific methodology. Those who identified themselves as “strongly opposed” or “weakly opposed” to stricter gun control laws on average estimated that it was between “extremely unlikely” and “somewhat unlikely” that the study used sound methodology (the first and second of five options). (This sample size was quite small, however.) Those who identified themselves as “strongly in favor” of more gun control restrictions on average estimated that it was “somewhat likely” that it did (the fourth of five options). This is despite the fact that, presumably, no one had more knowledge of the details of the study than anyone else.


This bias is made worse by selective exposure, which involves actively seeking out information to support one’s current positions, and avoiding information that does not. This effect may be particularly strong in the Internet era.

This bias is quite robust, and even affects things like our memories. Moreover, it can persist even after the beliefs in question have been discredited.

Practical consequences

The practical disadvantages are, I think, somewhat obvious. They are especially pernicious in creating polarized groups, as we are currently experiencing in American politics. They can even lead to widespread distrust of whole sources of information, as with accusations of “media bias”.

In 1982, students were shown the same news clip covering the Sabra and Shatila massacre of Palestinian refugees in Lebanon, allegedly abetted by the Israeli army. Students who identified themselves as pro-Israeli regarded the clip as having made more anti-Israel claims and fewer pro-Israel claims than students who identified themselves as pro-Palestinian. Both sides predicted that a “neutral observer” would be influenced against their position by the clip. (Vallone, Ross and Lepper 1985)

Of course, media can be biased—in fact, everyone is biased—but it is unlikely that the biases in media line up perfectly against your own present positions.

4. Conjunction fallacy

Although usually studied along with cognitive biases, the conjunction fallacy is technically a formal fallacy, logically speaking. Hence the name.

The conjunction fallacy occurs when a conjunction or combination of two things is thought to be more probable than one of the things on its own, especially when the other thing fits together within a perceived pattern or expected narrative.

According to the laws of probability, the probability of P and Q together equals the probability of P times the probability of Q given P. Since probability values range from 0 to 1, the probability of P and Q can never be higher than that of P by itself. Hence, this is always an error.
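A quick way to see why the conjunction can never be more probable: anyone counted for “P and Q” is also counted for “P”. A minimal simulation sketch in Python (the 5% and 60% figures are invented purely for illustration):

```python
import random

random.seed(0)

# Simulate 100,000 hypothetical people; each is a bank teller with
# probability 0.05 and a feminist with probability 0.6 (made-up numbers).
N = 100_000
tellers = 0
teller_and_feminist = 0
for _ in range(N):
    is_teller = random.random() < 0.05
    is_feminist = random.random() < 0.6
    if is_teller:
        tellers += 1
        if is_feminist:
            teller_and_feminist += 1

# Every "teller and feminist" was also counted as a teller, so the
# conjunction's frequency can never exceed the single conjunct's.
print(teller_and_feminist <= tellers)  # → True
```

Whatever probabilities are plugged in, the inequality holds, because the conjunction picks out a subset of the cases the single conjunct picks out.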

In the famous “Linda experiment”, subjects were told:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Subjects were asked to rank the probability of a number of things, including:
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
It is not possible for (2) to be more likely than (1), but 89% of participants rated (2) more likely than (1). (Tversky and Kahneman 1983)

In a very similar experiment in this class, 86% of you rated (2) more likely than (1).

The exact interpretation of this bias is controversial. Some things to consider:

Practical consequences

There is some reason to think that this bias is less prevalent in real-life or high-stakes situations.

However, Tversky and Kahneman also found (in 1983) that policy experts thought that “The Soviet Union will invade Poland and the US will cut off diplomatic relations with the Soviet Union” was more likely than simply “the US will cut off diplomatic relations with the Soviet Union”. Similar results have been replicated more recently. It is again unclear how to interpret these results.

5. Framing effects

A framing effect is the tendency for people to respond to or process information differently depending on the way it is presented.

This is a very wide category of effects, resulting in a wide variety of biases.

In one study, half the participants were asked to choose between two possible medical responses to a hypothetical disease which threatens the lives of 600 people:

Option (A): 200 people will be saved.
Option (B): There is 1/3 probability of saving all 600, but a 2/3 probability that no one will be saved.

Presented with those options, 72% preferred option A. However, the other half were asked to make the equivalent choice:

Option (C): 400 people will die.
Option (D): There is a 1/3 probability no one will die, and a 2/3 probability that all 600 will die.

In that case, 78% preferred option D. (Tversky and Kahneman 1981)

(In this class, this year, almost no effect was seen, with around 50% choosing option (A) either way it was asked. Last year, however, when asked the question in terms of “saving”, about 70% chose the first option, and when asked in terms of “dying” the split was about 50/50.)
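It is worth checking that the two framings really are equivalent in expected outcome. A quick arithmetic sketch:

```python
# Expected number of survivors (out of 600) under each option.
total = 600

option_a = 200           # "200 people will be saved"
option_b = 600 / 3       # (1/3)·600 + (2/3)·0, framed in terms of saving
option_c = total - 400   # "400 people will die" leaves 200 survivors
option_d = 600 / 3       # the same gamble, framed in terms of dying

# All four options carry the same expected number of survivors, so any
# systematic preference reversal between the framings is a pure framing
# effect, not a difference in the options themselves.
print(option_a, option_b, option_c, option_d)  # → 200 200.0 200 200.0
```

(A and C are certain outcomes while B and D are gambles, so risk attitudes could explain preferring one column over the other; what they cannot explain is flipping between A and D depending only on the wording.)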

In another study, when students were asked how happy they were in general, and then later asked how many dates they had been on in the past month, there was no statistically significant correlation in the results.

When asked first how many dates they had been on in the past month, and then asked how happy they were in general, there was an extremely strong correlation between their answers. (Kahneman and Krueger 2006)

Practical consequences

This shows that it does matter, e.g., whether we call something an act of terrorism or just a crime, call a group of protestors a movement or a mob, describe a change to the tax code as tax relief or a tax break, and so on.

Again, it shows we should pay attention to how information is presented to us, how it is being framed.

6. Fundamental attribution error

The fundamental attribution error is the tendency to attribute one’s own behavior, especially one’s own failings, to situational and temporary factors, but attribute others’ behavior, especially failings, to internal and unchanging factors about them as individuals.

Your friend was late for your party because she is inconsiderate. But you were late for class because traffic prevented you from getting here on time.

When that guy cut you off in traffic it was because he’s a self-centered jerk; but when you accidentally cut someone off last week it was because your vision was blocked by that truck.

The classic study on this is Jones and Harris 1967.


Practical consequences

Like some other biases, this one is self-serving: we tend to protect our own images of ourselves. Criticism often feels personal because, compared to our own understanding of ourselves, it is.

This can be particularly nasty when combined with in-group biases and the like: we blame the actions of people from outside groups on their “otherness” but never ourselves based on similar factors.

Beware of others who might blame you for things outside of your control, but be careful also not to blame others for things that might have been outside of theirs.

7. Hindsight bias

Hindsight bias (or outcome bias) is the inclination, after an event has occurred, to estimate the earlier expected probability or predictability of that event as higher than it actually was.

Before President Nixon’s historic 1972 visit to China, people were polled about what they expected the outcome to be. Afterwards, the participants were asked to recall or reconstruct their past predictions. The actual outcomes of the trip were judged as much more likely in the second poll. (Fischhoff and Beyth 1975)

In another study, participants were asked to assess whether or not a doctor had committed malpractice. The events leading up to the doctor’s decisions were described, as were the outcomes. Otherwise identical events and decisions were labeled as malpractice much more often when the outcome was bad. (Hugh and Tracy 2002)

Practical consequences

We need to be careful about not judging past decisions on the basis of their outcomes, or by using knowledge that came to light later.

This bias is clearly related to our next one, if we consider it as a kind of overconfidence about what one’s own thoughts would have been in the past.

8. Overconfidence effect

The overconfidence effect is the tendency to estimate one’s own abilities, knowledge, and cognitive accuracy as higher than they in fact are, especially when one feels confident.

Svenson (1981) found that 93% of drivers estimated that they were better than average drivers.

In this class, 88% of you estimated yourselves as average or better drivers, though many self-reported as average. Not a single one of you self-reported as being the lowest category (among the worst drivers).

After being asked to spell certain words, those who predicted 100% accuracy for their results averaged around 80% accuracy. Those who predicted 70% accuracy averaged around 60% accuracy. (The discrepancy was lower for those with less confidence in their levels of accuracy.) (Adams and Adams 1960)

Exactly how this bias operates seems to differ for different kinds of tasks and abilities.

Practical consequences


It is best to gain some experience with something before attempting to assess one’s own abilities.

The Dunning-Kruger effect (Kruger and Dunning 1999) refers to the finding that very unskilled individuals typically lack the competence to assess their own abilities accurately, whereas more realistic self-assessments come with greater skill. Sometimes the most skilled individuals are the most doubtful about their own skills, as they are best able to identify even minor flaws. How to interpret these results, however, remains somewhat controversial.

9. Sunk cost effect

The sunk cost effect is the tendency to maintain a strategy or policy into which one has already invested significant resources or effort even when a better or less costly alternative is available.

Suppose you go to a movie, and halfway through you find that you hate it. However, you’ve already spent the money, so you sit there and suffer through it anyway.

The British and French governments began investing in a supersonic transatlantic aircraft, the “Concorde”, in the 1950s. They had already invested the equivalent of billions of dollars in taxpayer money by the time the first flights became possible in the early 1970s. It soon became apparent that the flights would never be cost-effective. However, since so much money had already been invested, they continued to keep the project afloat, losing more money in the process, until 2003.

The sunk cost effect is clearly related to confirmation bias, and our tendency to rationalize our past decisions and attitudes, even when evidence is presented against their wisdom.

Practical consequences

We do not like to admit failure, but it is often better to do so. Otherwise, this can keep us in bad relationships, miserable careers, unpromising endeavors, and so on.

Do you think the biases we have discussed above relate to things like gender bias, racial bias, ethnic bias, age bias, and so on? How so?

I. Understanding and Combating Biases


Some things to keep in mind:

It is not practical, and probably not even desirable, to attempt to completely forgo the use of heuristics and “System 1” thinking.

I asked some of you to solve this problem.

Suppose there are four index cards. Each one has a letter on one side, and a numeral on the other. They are lying on a table, and you can only see one side of each. Some have letters showing, some numerals, like so:
E 3 8 T
Which do you need to turn over in order to determine whether or not the rule holds, “If the letter on a card is a vowel, the numeral on the other side is even”?
(A) Only “E”
(B) Only “E” and “3”.
(C) Only “E” and “8”.
(D) All four.

Only 18% of you got the right answer, which is worse than what one would expect from random guessing. In controlled experiments of a similar task, the success rate is even lower: around 10%. (Wason 1977)

Compare this problem, which I asked others in the class:

It is your job to enforce this rule at your company picnic: “If someone is drinking alcohol, (s)he must be 21 or older.” You are doing your best, but you have incomplete information about the following people:

Erica is drinking wine, but you don’t know how old she is.
Menesh is 19, but you don’t know what he’s drinking.
Rosa is 23, but you don’t know what she’s drinking.
Scott is drinking water, but you don’t know how old he is.

Among the above, who do you need to find out more about to make sure the rule is being enforced?

(A) Just Erica.
(B) Just Erica and Menesh.
(C) Just Erica and Rosa.
(D) All four.

Logically, these problems are exactly the same, and they have the same answer: (B). Yet, the second seems much easier, and indeed 79% of those in this class who were asked this version of the question got it right. Why? Psychologists do not agree on this. (Cosmides and Tooby 1992; Davies, Fetzer and Foster 1995).

Perhaps, however, the second version is easier because we have “System 1 instincts” that can be put to use there, whereas the first can only be approached with System 2 thinking.
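One way to see why (B) is the answer in both versions is to check, for each card, whether any possible hidden side could falsify the rule; only those cards need to be turned over. The sketch below is a hypothetical helper (not part of the original notes), assuming each card has a letter on one side and a numeral on the other:

```python
# A card must be turned over exactly when some possible hidden side could
# falsify the rule "if the letter is a vowel, the numeral on the other
# side is even".
VOWELS = set("AEIOU")

def could_falsify(visible: str, hidden: str) -> bool:
    """True if this letter/numeral pairing violates the rule."""
    letter = visible if visible.isalpha() else hidden
    numeral = hidden if visible.isalpha() else visible
    return letter in VOWELS and int(numeral) % 2 == 1

def must_turn(visible: str) -> bool:
    # If the visible side is a letter, the hidden side is some numeral;
    # if the visible side is a numeral, the hidden side is some letter.
    hiddens = "0123456789" if visible.isalpha() else "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    return any(could_falsify(visible, h) for h in hiddens)

print([card for card in ["E", "3", "8", "T"] if must_turn(card)])  # ['E', '3']
```

“E” could hide an odd numeral, and “3” could hide a vowel, so both must be checked; “8” cannot falsify the rule no matter what letter it hides, and “T” is not a vowel, so neither matters. The same logic picks out Erica and Menesh in the picnic version.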


Some suggestions for avoiding bias:

  1. Slow down; plan ahead and make sure you have enough time to study an issue thoroughly. Do not cooperate with anyone unreasonably putting time pressure on your decisions.
  2. Learn about and study biases, and how they affect issues and decisions similar to those in the situation you are in. Knowing about biases helps, but does not help much unless you consider the application to the particular situation.
  3. Formalize the problem; consider it in terms of abstract values and variable parameters, but only if doing so doesn’t introduce the possibility of new errors. (In other words, don’t do so unless you are technically competent in the methodology used, like Bayes’ theorem or symbolic logic.) If available, computer algorithms could be used.
  4. Do not trust your “feelings” of confidence that you are not biased on the issue. Don’t ignore your gut feelings or “intuitions” entirely, but consider where they might be coming from.
  5. Do what you can to hide, even from yourself, irrelevant aspects of the data or information you are considering. (Compare, e.g., the anonymous review process used by academic journals.)
  6. Even if the issue is an emotionally significant one, stay calm and relaxed. Stress is your enemy.
  7. Content people do better than unhappy people, but active frowning might also help. (It is a less automatic process than smiling, and it is theorized that this tends to activate System 2.)
  8. Consider alternative perspectives, and actively seek out opinions different from yours and try to think charitably about arguments in favor of them.
  9. Get feedback from others.
  10. After reaching a conclusion, force yourself to reach another; i.e., force yourself to give yourself a “second opinion”. Then compare the two.
  11. Fight bias with biases if need be. For example, tell yourself a vivid anecdote about someone who succumbed to bias in the situation you’re in to make that danger highly “available” to your memory.

Can you think of other techniques that might be successful in combating bias? Should we consider the biases separately?

J. Rethinking the Heuristics and Biases Model

Although it has attracted a great deal of attention from psychologists and others, the heuristics and biases theory championed by Tversky, Kahneman, and others is not without its critics.

Some loosely worded concerns:

1. Are we really that dumb?


The heuristics and biases approach seems to paint a bleak picture of human rationality. But aren’t there rational justifications for many of these patterns?

Isn’t some level of overconfidence necessary to have the motivation to try new things? Isn’t some level of sunk-cost determination necessary for persevering through a pattern of failures and learning from them? Doesn’t it make sense that readily available items in memory can trump more distant concerns?

And even if the heuristics can lead us into error sometimes, aren’t their successes more numerous than the errors?

Surely the evolutionary basis of these patterns of cognition had some survival value for our ancestors, at the very least.

Indeed, the very notion of experimental evidence of irrationality is suspect.
Is there a difference between evidence that someone is irrational, and evidence that you’ve misunderstood them?

2. Do the experiments relied on here actually reflect “real life” cognition in other environments?


The experiments we’ve discussed can feel artificial, almost like abstract puzzles or word games.

Moreover, the process of taking an experimental test or being asked hypothetical questions is very artificial. (How many of you would actually have given me the insurance money afterwards if I had asked for it?)

We have already seen how removal from concrete and relatable reasoning situations can deprive us of certain cognitive tools.

Are people really giving correct answers, only to different questions from those the experimenters thought they were asking? [Response: But isn’t the failure of understanding possibly just another name for the bias?]

3. Where does the model of “ideal rationality” which biases lead us away from come from?


If cognitive psychology teaches us how we naturally do think, how can our thinking be said to be imperfect or wrong?

Could the models that come from formal logic or probability theory themselves be slanted, or problematic, too? Is there one single system we can know by some method, philosophical or otherwise, to be “ideal rationality”? Aren’t the theories given so far still controversial?

What makes an error an error?

© 2024 Kevin C. Klement