Confirmation bias


Confirmation bias is the tendency to search for, interpret, favor and recall information in a way that confirms or supports one's prior beliefs or values. People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, emotionally charged issues and deeply entrenched beliefs.
Biased search for information, biased interpretation of this information and biased memory recall have been invoked to explain four specific effects:
  1. attitude polarization
  2. belief perseverance
  3. the irrational primacy effect
  4. illusory correlation.
A series of psychological experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work reinterpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives. Explanations for the observed biases include wishful thinking and the limited human capacity to process information. Another proposal is that people show confirmation bias because they are pragmatically assessing the costs of being wrong rather than investigating in a neutral, scientific way.
Flawed decisions due to confirmation bias have been found in a wide range of political, organizational, financial and scientific contexts. These biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. For example, confirmation bias produces systematic errors in scientific research based on inductive reasoning. Similarly, a police detective may identify a suspect early in an investigation, but then may only seek confirming rather than disconfirming evidence. A medical practitioner may prematurely focus on a particular disorder early in a diagnostic session, and then seek only confirming evidence. In social media, confirmation bias is amplified by the use of filter bubbles and "algorithmic editing", which display to individuals only information they are likely to agree with, while excluding opposing views.

Definition and context

The term confirmation bias, previously used as a "catch-all phrase", was refined by English psychologist Peter Wason as "a preference for information that is consistent with a hypothesis rather than information which opposes it."
Confirmation biases are effects in information processing. They differ from what is sometimes called the behavioral confirmation effect, commonly known as self-fulfilling prophecy, in which a person's expectations influence their own behavior, bringing about the expected result.
Some psychologists restrict the term "confirmation bias" to selective collection of evidence that supports what one already believes while ignoring or rejecting evidence that supports a different conclusion. Others apply the term more broadly to the tendency to preserve one's existing beliefs when searching for evidence, interpreting it, or recalling it from memory. Confirmation bias is a result of automatic, unintentional strategies rather than deliberate deception.

Types

Biased search for information

Experiments have found repeatedly that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with their current hypothesis. Rather than searching through all the relevant evidence, they phrase questions to receive an affirmative answer that supports their theory. They look for the consequences that they would expect if their hypothesis were true, rather than what would happen if it were false. For example, someone using yes/no questions to find a number they suspect is the number 3 might ask, "Is it an odd number?" People prefer this type of question, called a "positive test", even when a negative test such as "Is it an even number?" would yield exactly the same information. However, this does not mean that people seek tests that guarantee a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic ones.
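The claim that the positive and negative tests yield exactly the same information can be checked directly. The following minimal sketch in Python is a hypothetical illustration, not part of the original experiments; it assumes the hidden number lies between 1 and 10 and shows that "Is it an odd number?" and "Is it an even number?" split the candidates into the same two groups, so either answer rules out exactly the same possibilities.

    # Hypothetical sketch: a positive test ("Is it odd?") and a negative test
    # ("Is it even?") carry the same information about a hidden number.
    candidates = list(range(1, 11))  # assume the hidden number is 1..10

    def split(cands, question):
        """Partition the candidates by the yes/no answer to a question."""
        yes = [n for n in cands if question(n)]
        no = [n for n in cands if not question(n)]
        return yes, no

    positive_yes, positive_no = split(candidates, lambda n: n % 2 == 1)  # "Is it odd?"
    negative_yes, negative_no = split(candidates, lambda n: n % 2 == 0)  # "Is it even?"

    # The two questions induce the same partition with the labels swapped:
    # a "yes" to one rules out exactly the numbers a "no" to the other would.
    assert set(positive_yes) == set(negative_no)
    assert set(positive_no) == set(negative_yes)
    print("Both questions narrow the candidates identically.")

Because both questions produce the same partition of the possibilities, preferring the positive phrasing is not irrational in itself; the bias arises from how such questions are combined with biased interpretation and memory, as discussed below.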
The preference for positive tests in itself is not a bias, since positive tests can be highly informative. However, in combination with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true. In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior. Thus any search for evidence in favor of a hypothesis is likely to succeed. One illustration of this is the way the phrasing of a question can significantly change the answer. For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"
Even a small change in a question's wording can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case. Participants read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take them away for long periods of time. When asked, "Which parent should have custody of the child?" the majority of participants chose Parent B, looking mainly for positive attributes. However, when asked, "Which parent should be denied custody of the child?" they looked for negative attributes and the majority answered that Parent B should be denied custody, implying that Parent A should have custody.
Similar studies have demonstrated how people engage in a biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests. In an initial experiment, participants rated another person on the introversion–extroversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them. A later version of the experiment gave the participants less presumptive questions to choose from, such as, "Do you shy away from social interactions?" Participants preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.
Personality traits influence and interact with biased search processes. Individuals vary in their ability to defend their attitudes from external attacks in relation to selective exposure, which occurs when individuals search for information that is consistent, rather than inconsistent, with their personal beliefs. An experiment examined the extent to which individuals could refute arguments that contradicted their personal beliefs. People with high confidence levels more readily seek out information that contradicts their personal position in order to form an argument; this can take the form of oppositional news consumption, in which individuals seek out opposing partisan news in order to counterargue it. Individuals with low confidence levels do not seek out contradictory information and prefer information that supports their personal position. People generate and evaluate evidence in arguments that are biased towards their own beliefs and opinions. In general, heightened confidence levels decrease preference for information that supports individuals' personal beliefs.
Another experiment gave participants a complex rule-discovery task that involved moving objects simulated by a computer. Objects on the computer screen followed specific laws, which the participants had to figure out; to test their hypotheses, they could "fire" objects across the screen. Despite making many attempts over a ten-hour session, none of the participants figured out the rules of the system. They typically attempted to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing objective evidence that refuted their working hypotheses, they frequently continued doing the same tests. Some of the participants were taught proper hypothesis-testing, but these instructions had almost no effect.

Biased interpretation of information

Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.
A team at Stanford University conducted an experiment involving participants who felt strongly about capital punishment, with half in favor and half against it. Each participant read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the participants were asked whether their opinions had changed. Then, they read a more detailed account of each study's procedure and had to rate whether the research was well-conducted and convincing. In fact, the studies were fictional. Half the participants were told that one kind of study supported the deterrent effect and the other undermined it, while for other participants the conclusions were swapped.
The participants, whether supporters or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Participants described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways. Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, "The research didn't cover a long enough period of time," while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented." The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as "disconfirmation bias", has been supported by other experiments.
Another study of biased interpretation occurred during the 2004 U.S. presidential election and involved participants who reported having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether each individual's statements were inconsistent. There were strong differences in these evaluations, with participants much more likely to interpret statements from the candidate they opposed as contradictory.
In this experiment, the participants made their judgments while in a magnetic resonance imaging scanner which monitored their brain activity. As participants evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the participants were actively reducing the cognitive dissonance induced by reading about their favored candidate's irrational or hypocritical behavior.
Biases in belief interpretation persist regardless of intelligence level. Participants in an experiment took the SAT test to assess their intelligence levels. They then read information regarding safety concerns for vehicles, and the experimenters manipulated the national origin of the car. American participants indicated whether the car should be banned on a six-point scale, where one indicated "definitely yes" and six indicated "definitely no". Participants first evaluated whether they would allow a dangerous German car on American streets and a dangerous American car on German streets. Participants believed that the dangerous German car on American streets should be banned more quickly than the dangerous American car on German streets. There was no difference across intelligence levels in the rate at which participants would ban a car.
Biased interpretation is not restricted to emotionally significant topics. In another experiment, participants were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.