What is it called when people seeking information tend to prefer sources that support rather than conflict with their existing opinions?

Confirmation bias is defined as “seeking or interpreting evidence in ways that are preferential to existing beliefs, expectations, or hypotheses” (Nickerson, 1998, p. 175).

From: Understanding Female Offenders, 2021

Emerging Issues and Future Directions

Caleb W. Lack, Jacques Rousseau, in Comprehensive Clinical Psychology (Second Edition), 2022

11.04.4.1.1 Confirmation Bias

Confirmation bias is one of the most frequently encountered, frustrating, and yet understandable biases (Nickerson, 1998). It is the tendency of individuals to favor information that confirms their beliefs or ideas and to discount information that does not. This means that, when confronted with new information, we tend to do one of two things. If the information confirms what we already believe, our natural instinct is to accept it as true, accurate, and unbiased. We accept it unreservedly and are happy to have been shown it. Even if it has some problems, we forgive and forget them and quickly incorporate the new information into our beliefs and schemas. We are also more likely to recall this information later, to buttress our belief during an argument. If, on the other hand, the newly encountered information contradicts what we already believe, we have a very different, though equally natural, response. We immediately become highly critical and defensive, nitpicking any possible flaw in the information, even though the same flaw would be overlooked if the information confirmed our beliefs. The information also fades quickly from memory, so that later we may not even recall having been exposed to it.

As an example, suppose you believe that corporal punishment, such as spanking, is an effective way to discipline a child who is acting out. When you see a family member spank a child who isn't listening, and the child then does what they were told, your brain latches onto that and you say to yourself, “I knew it works!” But later, scrolling through your preferred social media feed, you see that a friend has shared a meta-analysis spanning five decades of research which concludes that the more children are spanked, the more likely they are to be defiant toward their parents and to show increases in antisocial and aggressive behavior, mental health problems, and cognitive difficulties (Gershoff and Grogan-Kaylor, 2016). Since that doesn't fit with your already formed belief, you are likely to discount it in some way (e.g., “I was hit and I turned out just fine!” or “They must have ignored all the studies that support spanking in their meta-analysis!”).

In many ways, the confirmation bias undergirds the entire reason why scientific methodology needed to be developed in the first place. We naturally try to find information that supports and proves our beliefs, which can, in turn, lead to the wholesale discounting or ignoring of contradictory evidence. Science, in contrast, actively tries to disprove ideas. The scientific method allows for increased confidence in our findings and makes scientists less prone to the confirmation bias (at least, theoretically speaking and in their scientific work). But humans do not naturally think in a scientific manner, which helps make pop and pseudo-psychology so much easier to understand and absorb. And, once believed, it can be very difficult to shift someone's ideas (Ahluwalia, 2000; Nyhan and Reifler, 2010). But how do we get to that belief in the first place?


URL: https://www.sciencedirect.com/science/article/pii/B9780128186978000522

Constructing a Victim Profile

Brent E. Turvey, Jodi Freeman, in Forensic Victimology (Second Edition), 2014

1 Forensic Examiners Must Strive Diligently to Avoid Bias

Dr. Paul Kirk wrote of forensic examination, “Physical evidence cannot be wrong; it cannot be perjured; it cannot be wholly absent. Only in its interpretation can there be error” (Kirk and Thornton, 1970, p. 4). With this simple observation, Kirk was referring to the influences of examiner ignorance, imprecision, and bias on the reconstruction of physical evidence and its meaning. The evidence is always there, waiting to be understood. The forensic examiner is the imprecise lens through which a form of understanding comes.

Specifically, there are at least two kinds of bias that objective forensic examiners need to be aware of (and strive to mitigate) in their casework: observer effects and confirmation bias.

Observer effects are present when the results of a forensic examination are distorted by the employment context and mental state of forensic examiners, including subconscious expectations and desires imposed by their employers, supervisors, and workmates (Cooley and Turvey, 2011; Dror, Charlton, and Peron, 2006; Risinger et al., 2002). Observer effects are governed by fundamental principles of cognitive psychology: subconscious needs and expectations, which are heavily influenced by external pressures, work to shape both examiner perception and interpretation. As the term subconscious implies, this happens without the awareness of the forensic examiner. In the context of a forensic examination, this includes a distortion of what is recognized as evidence, what is collected, what is examined, and how it is interpreted.

Confirmation bias may be described as the conscious or unconscious tendency to affirm particular theories, opinions, outcomes, or findings. It is a specific kind of bias in which information and evidence are screened to include those things that confirm a desired position, while the examiner actively ignores, fails to seek, or undervalues anything that contradicts that position. It commonly manifests as looking only for particular kinds of evidence that support a desired case theory (i.e., suspect guilt or innocence) and actively explaining away evidence or findings that are undesirable. This includes the selection of evidence to examine by persons advocating a particular theory, or by persons interested in keeping investigative costs low.

CASE EXAMPLE

Consider, for example, the case of 30-year-old Brandon Headley of Prattville, Alabama. Though Headley was the victim of a homicidal shooting, questions were raised at trial regarding his culpability. The shooter, Erik Scoggins, claimed that he was acting in self-defense (see Figure 4-2). A determination of this issue rested in no small part on victimology, in the form of evidence related to drug use, as reported in Roney (2013):

The victim in an Autauga County homicide case had used marijuana shortly before he was shot in March 2011, documents from the state crime lab showed.

Erik John Scoggins, 30, of Autauga County faces murder charges in the shooting death of Brandon Headley, 30, also of Autauga County. Scoggins has told authorities he was acting in self-defense the night of March 18, 2011, after Headley lunged at him while brandishing a large socket wrench.

The trial entered its second day Wednesday. A blood screen performed after Headley’s death came back positive for the presence of marijuana, Justin Sanders of the Alabama Department of Forensic Sciences testified. Sanders told the court Headley would have used the marijuana 40 minutes to 4½ hours before his death. On direct examination, Assistant District Attorney Jessica Sanders asked Justin Sanders how marijuana would affect the behavior and mental condition of the user.

Justin Sanders said some people would display a relaxed demeanor and others might show paranoid behavior. Justin Sanders said he could not determine from the blood test whether Headley was under the influence of the drug when he died.

Scoggins admits shooting Headley after he went to Headley’s home on Viking Lane about midnight that night to speak with his ex-girlfriend, Rachel Avant.

In a taped interview with sheriff’s office investigators played earlier in the day, Scoggins said the men got into a verbal altercation in Headley’s yard.

Scoggins told investigators he convinced Avant to leave and go to her home so he could retrieve some of his things from the residence. Scoggins told investigators that when he drove away, Headley followed in his vehicle and Avant followed behind Headley in her vehicle.

As he stopped at the intersection of Autauga County 57, Scoggins said Headley approached his truck holding a wrench. Scoggins said he had stepped out of the truck to wave Avant around, when Headley lunged at him brandishing the bar. That’s when Scoggins shot him once in the chest with a 40-caliber handgun, Scoggins told investigators.


FIGURE 4-2. Erik Scoggins was arrested and tried for the shooting death of Brandon Headley in Prattville, Alabama. At trial, the victim’s use of marijuana was used to suggest opposing theories by attorneys looking to make their case.

In the Headley case, evidence of victim drug use at or near the time of the shooting could be interpreted a number of ways, but the jury needs to have the information regardless. This helps them make the most informed decision possible. The prosecution would use this evidence and related testimony to argue that the victim was likely relaxed at the time of the shooting, suggesting that the shooter was the aggressor. The defense would use it to argue that the victim was possibly acting paranoid and erratic, suggesting that the shooter was acting in self-defense. Both sides see the evidence through the lens of their theories and look for that which confirms them. Without further testimony from those familiar with the victim’s typical reactions to marijuana, the toxicological information by itself could actually be misleading.

Wrestling with confirmation bias is extremely difficult, often because it is institutional. Many forensic examiners work in systems in which they are rewarded with praise and promotion for successfully advocating their side, when true science is about anything other than successfully advocating any one side (Turvey, 2013). Consequently, the majority of forensic examiners suffering from confirmation bias have no idea what it is or that it is even a problem.

What all forensic examiners must understand is that their value to the justice system lies in their adherence to the scientific method, and that this demands as much objectivity and intellectual honesty as can be brought to bear. Success in the forensic community must be measured by the diligent elimination of possibilities through the scientific method and peer review, not through securing convictions. In other words, the objective forensic examiner is not out to get the bad guy, or to prove that there is a bad guy, but rather to help determine what actually happened and under what circumstances, using only the most reliable evidence.

Therefore, a strict adherence to scientific methodology is more important than any actual outcome, given that scientific examination seeks reliable knowledge and not merely the confirmation of preconceived ideas. In an objective approach, outcomes are less important than the reliability of the methods of inquiry that are being used.


URL: https://www.sciencedirect.com/science/article/pii/B9780124080843000041

Core Network Principles

Warren W. Tryon, in Cognitive Neuroscience and Psychotherapy, 2014

Confirmation Bias

Illusory correlation is also driven by confirmation bias, another defective heuristic that operates outside of awareness (Baron, 2000). Confirmation bias refers to our tendency to let subsequent information confirm our first impressions (Baron, 2000). Hence, the same subsequent information can confirm different points of view depending on what our first impression was. Alternatively stated, we are preferentially sensitive to, and cherry-pick, facts that justify the decisions we make and the hypotheses we favor, and are similarly insensitive to facts that either fail to support or outright contradict those decisions and hypotheses. And, best of all, this operates continuously and unconsciously, outside of our awareness. This heuristic has been called the Positive Test Strategy and is illustrated next.

Snyder and Cantor (1979) described a fictitious person named Jane. To one group, Jane was described as an extravert; to another group, she was described as an introvert. A couple of days later, half the participants were asked to evaluate Jane for an extraverted job (real estate broker) and half were asked to evaluate her for an introverted job (librarian). Evaluations for the real estate job contained more references to Jane’s extraversion, whereas evaluations for the librarian job contained more references to her introversion. This finding implies the use of a positive test strategy when trying to remember things about Jane. This cognitive heuristic is also caused by the neural network property of preferring consonance and coherence over dissonance, which we will discuss as our Principle 7.


URL: https://www.sciencedirect.com/science/article/pii/B978012420071500003X

Self-uncertainty and group identification: Consequences for social identity, group behavior, intergroup relations, and society

Michael A. Hogg, in Advances in Experimental Social Psychology, 2021

7.1 Communication, influence, and identity confirmation

People can obtain this information from “manifestos,” descriptive texts, and social media platforms. However, they (also) typically rely on direct or indirect observation of and communication with fellow ingroup members, particularly those whom they feel embody the group best and can be trusted most (e.g., Belavadi & Hogg, 2019). People can spend substantial time engaged in what can be called “norm talk”—communicating directly or indirectly, mainly with fellow ingroup members, about “who we are,” in order to be sure of the group's defining and prescriptive identity attributes (Hogg & Giles, 2012).

This identity knowledge needs to be subjectively true, to the extent that it is grounded in perceived agreement among ingroup members and represents a shared reality or worldview that is not entirely discordant with actual reality (see Hogg & Rinella, 2018). Overall, and particularly under uncertainty, people prefer information that confirms their beliefs about their ingroup's identity and thus their own identity. People have a strong confirmation bias (e.g., Wason, 1960)—they avoid or discredit information and information sources that do not confirm who they think they are (e.g., Frimer, Skitka, & Motyl, 2017).

Paul Simon and Art Garfunkel elegantly capture confirmation bias in the lyrics of their classic 1970 song “The Boxer”: “… a man hears only what he wants to hear, and disregards the rest”; and more recently, McKay Coppins, a staff writer for The Atlantic, gained access to a 2020 MAGA (Make America Great Again) rally in Mississippi and spoke to some Donald Trump supporters. He writes:

A 34-year-old maintenance worker, who had an American flag wrapped around his head, observed that Trump … had said things no other politicians would say. When I asked him if it mattered whether those things were true, he thought for a moment before answering. “He tells you what you want to hear” …. “And I don't know if it's true or not—but it sounds good, so fuck it” (Coppins, 2020, p. 39).

Social media and the internet greatly facilitate confirmation bias, enabling people to “safely” and easily inhabit identity-sustaining echo chambers that are impervious to alternative shared realities, worldviews, and identities (Barberá, Jost, Nagler, Tucker, & Bonneau, 2015; Colleoni, Rozza, & Arvidsson, 2014; Peters, Morton, & Haslam, 2010).


URL: https://www.sciencedirect.com/science/article/pii/S0065260121000150

What Does It Mean to be Biased

Ulrike Hahn, Adam J.L. Harris, in Psychology of Learning and Motivation, 2014

2 When is a Bias a Bias?

2.1 Understanding Bias: Scope, Sources, and Systematicity

We begin our example-based discussion with a very general bias which, if robust, would provide direct evidence of motivated reasoning, namely “wishful thinking.” Under this header, researchers (mostly in the field of judgment and decision-making) group evidence for systematic overestimation in the perceived probability of outcomes that are somehow viewed as desirable, as opposed to undesirable.

In actual fact, robust evidence for such a biasing effect of utilities or values on judgments of probability has been hard to come by, despite decades of interest, and the phenomenon has been dubbed “the elusive wishful thinking effect” (Bar-Hillel & Budescu, 1995). Research on wishful thinking in probability judgment has generally failed to find evidence of wishful thinking under well-controlled laboratory conditions (for results and critical discussion of previous research see, e.g., Bar-Hillel & Budescu, 1995; Bar-Hillel, Budescu, & Amar, 2008; Harris, Corner, & Hahn, 2009). There have been observations of the “wishful thinking effect” outside the laboratory (e.g., Babad & Katz, 1991; Simmons & Massey, 2012). These, however, seem well explained as “an unbiased evaluation of a biased body of evidence” (Bar-Hillel & Budescu, 1995, p. 100; see also Gordon, Franklin, & Beck, 2005; Kunda, 1990; Morlock, 1967; Radzevick & Moore, 2008; Slovic, 1966). For example, Bar-Hillel et al. (2008) observed potential evidence of wishful thinking in the prediction of results in the 2002 and 2006 football World Cups. However, further investigation showed that these results were more parsimoniously explained as resulting from a salience effect than from a “magical wishful thinking effect” (Bar-Hillel et al., 2008, p. 282). Specifically, they seemed to stem from a shift in focus that biases information accumulation and not from any direct biasing effect of desirability. Hence, there is little evidence for a general “I wish for, therefore I believe…” relationship (Bar-Hillel et al., 2008, p. 283) between desirability and estimates of probability. Krizan and Windschitl's (2007) review concludes that while there are circumstances that can lead to desirability indirectly influencing probability estimates through a number of potential mediators, there is little evidence that desirability directly biases estimates of probability.

What is at issue here is the systematicity of the putative bias—the difficulty of establishing the presence of the bias across a range of circumstances. The range of contexts in which a systematic deviation between true and estimated value will be observed depends directly on the underlying process that gives rise to that mismatch. Bar-Hillel and Budescu's (1995) contrast between “an unbiased evaluation of a biased body of evidence” and a “magical wishful thinking effect” reflects Macdougall's (1906) distinction between “primary” and “secondary bias,” namely a contrast between selective information uptake and a judgmental distortion of information so acquired.

Both may, in principle, give rise to systematic deviations between (expected) estimate and true value; however, judgmental distortion is more pernicious in that it will produce the expected deviation much more reliably. This follows readily from the fact that selective uptake of information cannot, by definition, guarantee the content of that information. Selectivity in where to look may have some degree of correlation with content, and hence lead to a selective (and truth distorting) evidential basis. However, that relationship must be less than perfect, simply because information uptake on the basis of the content of the evidence itself would require processing of that content, and thus fall under “judgmental distortion” (as a decision to neglect information already “acquired”).
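To make that distinction concrete, here is a minimal Monte Carlo sketch (Python; every number, including the source slant and the discounting weight, is invented purely for illustration and is not drawn from the studies cited above). It contrasts an unbiased judge, a judge with "primary" bias (selective uptake from sources that only tend to carry favorable evidence), and a judge with "secondary" bias (a distorted evaluation of a representative sample):

    import random

    random.seed(1)
    TRUE_P = 0.5          # true proportion of hypothesis-favorable evidence in the world
    N_ITEMS = 20          # evidence items consulted per judgment
    N_JUDGES = 10_000     # simulated judges per condition

    def sample(p):
        return [random.random() < p for _ in range(N_ITEMS)]

    def unbiased():
        s = sample(TRUE_P)
        return sum(s) / N_ITEMS

    def primary_bias(source_slant=0.6):
        # Selective uptake: consult "friendly" sources whose items are favorable
        # with probability 0.6 rather than 0.5.  Selection shifts the expected
        # content but cannot guarantee it on any given occasion.
        s = sample(source_slant)
        return sum(s) / N_ITEMS

    def secondary_bias(discount=0.5):
        # Judgmental distortion: sample representatively, then give each
        # unfavorable item only half the weight of a favorable one.
        s = sample(TRUE_P)
        fav, unfav = sum(s), len(s) - sum(s)
        return fav / (fav + discount * unfav)

    for name, judge in [("unbiased", unbiased),
                        ("primary bias (selective uptake)", primary_bias),
                        ("secondary bias (distorted evaluation)", secondary_bias)]:
        estimates = [judge() for _ in range(N_JUDGES)]
        mean = sum(estimates) / N_JUDGES
        above = sum(e > TRUE_P for e in estimates) / N_JUDGES
        print(f"{name:38s} mean estimate = {mean:.3f}   P(estimate > truth) = {above:.2f}")

Under these toy settings the selective-uptake judge overestimates on most runs but not all, because the biased search cannot guarantee what the sampled evidence will say, whereas the distorting judge lands above the truth far more reliably, which is the asymmetry described above.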

In fact, selective attention to some sources over others can have a systematic effect on information content only where sources and content are systematically aligned and can be identified in advance.

Nevertheless, selectivity in search may lead to measurable decrements in accuracy if it means that information search does not maximize the expected value of information. In other words, even though a search strategy cannot guarantee the content of my beliefs (because there is no way of knowing whether the evidence, once obtained, will actually favor or disfavor my preferred hypothesis), my beliefs may systematically be less accurate because I have not obtained the evidence that could be expected to be most informative.

This is the idea behind Wason's (1960) confirmation bias. Though the term “confirmation bias,” as noted, now includes phenomena that do not concern information search (see earlier, Fischhoff & Beyth-Marom, 1983), but rather information evaluation (e.g., a potential tendency to reinterpret or discredit information that goes against a current belief, e.g., Lord et al., 1979; Nisbett & Ross, 1980; Ross & Lepper, 1980), Wason's original meaning concerns information acquisition. In that context, Klayman and Ha (1989) point out that it is essential to distinguish two notions of “seeking confirmation”:

1. examining instances most expected to verify, rather than falsify, the (currently) preferred hypothesis.

2. examining instances that—if the currently preferred hypothesis is true—will fall under its scope.

Concerning the first sense, “disconfirmation” is more powerful in deterministic environments, because a single counter-example will rule out a hypothesis, whereas confirming evidence is not sufficient to establish the truth of an inductively derived hypothesis. This logic, which underlies Popper's (1959) call for falsificationist strategies in science, does not, however, apply in probabilistic environments where feedback is noisy. Here, the optimal strategy is to select information so as to maximize its expected value (see e.g., Edwards, 1965; and, on the general issue in the context of science, see e.g., Howson & Urbach, 1996). In neither the deterministic nor the probabilistic case, however, is it necessarily wrong to seek confirmation in the second sense—that is, in the form of a positive test strategy. Though such a strategy led to poorer performance in Wason's (1960) study, this is not generally the case and, for many (and realistic) hypotheses and environments, a positive test strategy is, in fact, more effective (see also Oaksford & Chater, 1994). This both limits the accuracy costs of any “confirmation bias” and makes a link with “motivated reasoning” questionable.
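A toy illustration of the second sense, in the spirit of Wason's 2-4-6 task (the rules below are invented stand-ins, not Wason's materials):

    # Wason-style rule-discovery sketch: the experimenter's true rule vs. a
    # learner's narrower hypothesis (both rules invented for illustration).
    def true_rule(a, b, c):
        # "any strictly increasing triple"
        return a < b < c

    def hypothesis(a, b, c):
        # learner's guess: "increases by exactly 2 each step"
        return b - a == 2 and c - b == 2

    # Positive tests: triples the learner expects to satisfy the hypothesis.
    positive_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
    # Tests outside the hypothesis's scope: expected to violate it.
    negative_tests = [(1, 2, 3), (2, 4, 8), (5, 10, 20)]

    for kind, tests in [("positive", positive_tests), ("outside-scope", negative_tests)]:
        for triple in tests:
            feedback = true_rule(*triple)                   # experimenter's yes/no
            survives = hypothesis(*triple) == feedback      # prediction matched?
            print(f"{kind:13s} test {triple}: feedback={feedback}, hypothesis survives={survives}")

Because the hypothesis here is nested inside the true rule, every positive test draws "yes" feedback and the hypothesis survives, while only tests outside its scope expose it; if the hypothesized property were rarer than, or only partially overlapping with, the true rule, positive tests could falsify it too, which is why positive testing is often effective.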

Consideration of systematicity and scope of a putative bias consequently necessitates a clear distinction between the different component processes that go into the formation of a judgment and its subsequent report (whether in an experiment or in the real world). Figure 2.4 distinguishes the three main components of a judgment: evidence accumulation; aggregation and evaluation of that evidence to form an internal estimate; and report of that estimate. In the context of wishful thinking, biasing effects of outcome utility (the desirability/undesirability of an outcome) can arise at each of these stages (readers familiar with Funder's (1995) realistic accuracy model of person perception will detect the parallels; likewise, motivated reasoning research distinguishes between motivational effects on information accumulation and memory as opposed to effects of processing, see e.g., Kunda, 1990). Figure 2.4 provides examples of studies concerned with biasing effects of outcome desirability on judgment for each of these component processes. For instance, demonstrations that participants use information about real-world base rates (Dai et al., 2008) or real-world “representativeness” (Mandel, 2008) in judging the probability of events exemplify effects of outcome utility on the information available for the judgment: events that are extremely bad or extremely good are less likely in the real world than ones of moderate desirability, so that outcome utility provides information about frequency of occurrence, which can be used to supplement judgments where participants are uncertain about their estimates.


Figure 2.4. Locating indirect effects of utility (outcome desirability/undesirability) in the probability estimation process. Framed boxes indicate the distinct stages of the judgment formation process. Ovals indicate factors influencing those stages via which outcome utility can come to exert an effect on judgment. Numbers indicate experimental studies providing evidence for a biasing influence of that factor. Note that Dai, Wertenbroch, and Brendl (2008), Mandel (2008), and Harris et al. (2009) all find higher estimates for undesirable outcomes (i.e., “pessimism”).

Figure adapted from Harris et al. (2009).

Confirming our observations about the relative reliability of primary and secondary bias in generating systematic deviations, the different components of the judgment process vary in the extent to which they generally produce “wishful thinking,” and several of the studies listed (see Fig. 2.4) have actually found “anti” wishful thinking effects, whereby undesirable events were perceived to be more likely.

Such mixed, seemingly conflicting, findings are, as we have noted repeatedly, a typical feature of research on biases (see e.g., Table 1 in Krueger & Funder, 2004). However, only when research has established that a deviation is systematic has the existence of a bias been confirmed and only then can the nature of that bias be examined. The example of base rate neglect above illustrated how examination of only a selective range of base rates (just low prior probabilities or just high prior probabilities) would have led to directly conflicting “biases.” The same applies to other putative biases.

In general, names of biases typically imply a putative scope: “wishful thinking” implies that, across a broad range of circumstances, thinking is “wishful.” Likewise, “optimistic bias” (a particular type of wishful thinking; see Sharot, 2012) implies that individuals’ assessments of their future are generally “optimistic.” Researchers have been keen to posit broad-scope biases that subsequently do not seem to hold over the full range of contexts implied by their name. This suggests, first and foremost, that no such bias exists.

To qualify as optimistically biased, for example, participants should demonstrate a tendency to be optimistic across a gamut of judgments, or at least across a particular class of judgments such as probability judgments about future life events (e.g., Weinstein, 1980; in keeping with Weinstein's original work we restrict the term “optimistic bias” to judgments about future life events in the remainder). However, while people typically seem optimistic for rare negative events and common positive events, the same measures show pessimism for common negative events and rare positive events (Chambers et al., 2003; Kruger & Burrus, 2004). Likewise, for the better-than-average effect (e.g., Dunning, Heath, & Suls, 2004; Svenson, 1981), people typically think that they are better than their peers at easy tasks, but worse than their peers at difficult tasks (Kruger, 1999; Moore, 2007), and the false consensus effect (whereby people overestimate the extent to which others share their opinions; Ross, Greene, & House, 1977) is mirrored by the false uniqueness effect (Frable, 1993; Mullen, Dovidio, Johnson, & Copper, 1992; Suls, Wan, & Sanders, 1988).

One (popular) strategy for responding to such conflicting findings is to retain the generality of the bias but to consider it to manifest only in exactly those situations in which it occurs. Circumstances of seemingly contradictory findings then become “moderators,” which require understanding before one can have a full appreciation of the phenomenon under investigation (e.g., Kruger & Savitsky, 2004): in the case of the better-than-average effect, that moderator would be the difficulty of the task.

2.1.1 The Pitfalls of Moderators

Moderators can clearly be very influential in theory development, but they must be theoretically derived. Post hoc moderator claims make a theory unfalsifiable, or at least can make findings pitifully trivial. Consider the result—reported in the Dutch Daily News (August 30th, 2011)—that thinking about meat results in more selfish behavior. As this study has since been retracted—its author Stapel admitting that the data were fabricated—it is likely that this result would not have replicated. After (say) 50 replication attempts, what is the most parsimonious conclusion? One can either conclude that the effect does not truly exist or posit moderators. After enough replication attempts across multiple situations, the latter strategy will come down to specifying moderators such as “the date, time, and experimenter,” none of which could be predicted on the basis of an “interesting” underlying theory.

This example is clearly an extreme one. The moderators proposed for the optimism bias and better-than-average effects are clearly more sensible and more general. It is still, however, the case that these moderators must be theoretically justified. If not, “moderators” may prop up a bias that does not exist, thus obscuring the true underlying explanation (much as in the toy example above). In a recent review of the literature, Shepperd, Klein, Waters, and Weinstein (2013) argue for the general ubiquitousness of unrealistic optimism defined as “a favorable difference between the risk estimate a person makes for him- or herself and the risk estimate suggested by a relevant, objective standard…Unrealistic optimism also includes comparing oneself to others in an unduly favorable manner,” but state that this definition makes “no assumption about why the difference exists. The difference may originate from motivational forces…or from cognitive sources, such as…egocentric thinking” (Shepperd et al., 2013, p. 396).

However, the question of why the difference exists is critical for understanding what is meant by the term unrealistic optimism especially in the presence of findings that clearly appear inconsistent with certain accounts. The finding that rare negative events invoke comparative optimism, while common negative events invoke comparative pessimism seems entirely inconsistent with a motivational account. If people are motivated to see their futures as “rosy,” why should this not be the case for common negative events (or rare positive events) (Chambers, Windschitl, & Suls, 2003; Kruger & Burrus, 2004)? One can say that comparative optimism is moderated by the interaction of event rarity and valence, such that for half the space of possible events pessimism is in fact observed, but would one really want to call this “unrealistic optimism” or an “optimistic bias”? Rather, it seems that a more appropriate explanation is that people focus overly on the self when making comparative judgments (e.g., Chambers et al., 2003; Kruger & Burrus, 2004; see Harris & Hahn, 2009 for an alternative account which can likewise predict this complete pattern of data)—a process that simply has the by-product of optimism under certain situations. It might be that such overfocus on the self gives rise to bias, but through a correct understanding of it one can better predict its implications. Likewise, one is in a better position to judge the potential costs of it.
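A toy numeric rendering of that self-focus account (not Chambers et al.'s or Harris and Hahn's actual models; the weighting below is invented for illustration) shows how over-weighting one's own absolute risk yields optimism for rare events and pessimism for common ones, even when own and average risk are identical:

    def comparative_rating(own_p, others_p, w_comparison=0.3):
        # Toy egocentric judgment: mostly driven by whether the event feels
        # likely for oneself at all (own_p vs. 0.5), only partly by the genuine
        # self-other comparison.  Negative values read as "less likely than the
        # average person" (comparative optimism), positive as pessimism.
        absolute_component = own_p - 0.5
        comparison_component = own_p - others_p
        return (1 - w_comparison) * absolute_component + w_comparison * comparison_component

    # For an average person, own risk equals the population base rate.
    for label, base_rate in [("rare negative event (5% base rate)", 0.05),
                             ("common negative event (70% base rate)", 0.70)]:
        rating = comparative_rating(base_rate, base_rate)
        verdict = "comparative optimism" if rating < 0 else "comparative pessimism"
        print(f"{label}: rating = {rating:+.2f} -> {verdict}")

An unbiased comparative judgment would be zero in both cases, since own and average risk are equal; the sign flips here purely because the absolute likelihood of the event dominates the rating.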

In summary, when bias is understood in a statistical sense as a property of an expectation, demonstration of deviation across a range of values is essential to establishing the existence of a bias in the first place, let alone understanding its nature. Conflicting findings across a range of values (e.g., rare vs. common events in the case of optimism) suggest an initial misconception of the bias, and any search for moderators must take care to avoid perpetuating that misconception by—unjustifiedly—splitting up into distinct circumstances one common underlying phenomenon (i.e., one bias) which has different effects in different circumstances (for other examples on the better-than-average/worse-than-average effect, see e.g., Benoit & Dubra, 2011; Galesic, Olsson, & Rieskamp, 2012; Kruger, 1999; Kruger, Windschitl, Burrus, Fessel, & Chambers, 2008; Moore & Healy, 2008; Moore & Small, 2007; Roy, Liersch, & Broomell, 2013; on the false uniqueness/false consensus effect see Galesic, Olsson, & Rieskamp, 2013; more generally, see also Hilbert, 2012).
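In that statistical sense, the underlying definition can be written out explicitly (standard estimation notation, not tied to any particular study above):

    \[
    \operatorname{Bias}(\hat{\theta}) \;=\; \mathbb{E}[\hat{\theta}] - \theta
    \]

A bias exists only if the estimate \(\hat{\theta}\) deviates from the true value \(\theta\) in expectation over the range of circumstances to which the bias's name lays claim; a deviation confined to one slice of that range does not establish it.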


URL: https://www.sciencedirect.com/science/article/pii/B9780128002834000022

The Psychology of Learning and Motivation

Klaus Fiedler, in Psychology of Learning and Motivation, 2012

6.3 Sample-Size Neglect in Hypothesis Testing

One intriguing consequence of self-induced differences in sample size is a confirmation bias in hypothesis testing. When asked to test the hypothesis that girls are superior in language and that boys are superior in science, teachers would engage in positive testing strategies (Klayman & Ha, 1987): they would mostly sample from the targets that are the focus of the hypothesis. As a consequence, smart girls in language and smart boys in science are rated more positively, owing to the enhanced sample size, than girls in science and boys in language, whose equally high achievement is visible only in smaller samples.

The causal factor that drives this repeatedly demonstrated bias (cf. Fiedler et al., 2002b; Fiedler, Freytag, & Unkelbach, 2007; Fiedler, Walther, & Nickel, 1999) is in fact n, or myopia for n, rather than common gender stereotypes. Thus, if the hypothesis points in a stereotype-inconsistent direction, calling for a test of whether girls excel in science and boys in language, most participants would still engage in positive testing and solicit larger samples from, and provide more positive ratings of, girls in science and boys in language. Similarly, when participants are exposed to a stimulus series that entails negative testing (i.e., a small rate of observations about the hypothesis target), a reversal is obtained: reduced samples yield more regressive, less pronounced judgments (Fiedler et al., 1999), highlighting the causal role of n.

More generally, the MM (metacognitive myopia) approach offers an alternative account for a variety of so-called confirmation biases (Klayman & Ha, 1987; Nickerson, 1998). Hypothesis testers – in everyday life as in science – sample more observations about a focal hypothesis H_focal than about alternative hypotheses H_alt. Provided that at least some evidence can be found to support any hypothesis, the unequal n gives a learning advantage to H_focal. No processing bias or motivated bias is necessary. If each observation has the same impact on memory, unequal n will bias subsequent judgments toward the focal hypothesis.
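A minimal simulation in the spirit of this account (Python; not Fiedler's actual model; the true rate, the sample sizes, and the regressive estimator are all invented for illustration) shows how identical evidence quality plus unequal n yields a more favorable judgment of the focal target:

    import random
    random.seed(0)

    TRUE_RATE = 0.75              # identical true rate of "smart" observations for both targets
    N_FOCAL, N_NONFOCAL = 24, 6   # positive testing yields more observations about the focal target
    N_RUNS = 10_000

    def judged_rate(n, prior_strength=2):
        # Each observation leaves the same memory trace; the judgment is a
        # regressive estimate (Laplace-style pull toward 0.5), so small samples
        # stay closer to 0.5 than large ones.
        hits = sum(random.random() < TRUE_RATE for _ in range(n))
        return (hits + prior_strength / 2) / (n + prior_strength)

    focal = [judged_rate(N_FOCAL) for _ in range(N_RUNS)]
    nonfocal = [judged_rate(N_NONFOCAL) for _ in range(N_RUNS)]
    print(f"focal target    (n={N_FOCAL}): mean judged rate = {sum(focal) / N_RUNS:.3f}")
    print(f"nonfocal target (n={N_NONFOCAL}):  mean judged rate = {sum(nonfocal) / N_RUNS:.3f}")

The two targets have exactly the same true rate; the focal target simply looks better because its larger sample pulls the regressive estimate further from the neutral midpoint, which is the learning advantage of unequal n described above.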

MM prevents judges from monitoring and controlling for n differences, which reflect their own information-search strategies. Meta-cognitively, they should ignore n for two reasons. First, if the task calls for estimations rather than choices, they should not engage in a Bayesian competition over whether H_focal or H_alt receives more support but rather try to provide unbiased estimations (e.g., of the confirmation rate for all hypotheses). In this case, the impact of n has to be discounted anyway. Second, even in a competitive hypothesis test or choice, the enhanced n in favor of H_focal does not imply enhanced diagnosticity if it reflects the judge's own search bias toward H_focal, which creates stochastic dependencies in the sample.


URL: https://www.sciencedirect.com/science/article/pii/B9780123942937000017

DRUG-INDUCED INJURY, ACCIDENTAL AND IATROGENIC

A. Aggrawal, in Encyclopedia of Forensic and Legal Medicine, 2005

Confirmation Bias

Paradoxically, an experienced pharmacist is more likely to fall prey to this than a newcomer. Confirmation bias here refers to a person's tendency to extrapolate from what he/she has seen before, without actually seeing what is in front of him/her.

Figure 3 presents an example of confirmation bias. Familiarity with the name of a book can lead many readers to extrapolate from what they have seen and be blind to an inherent mistake; the figure shows a repetition of “of.” A chemist who has been dispensing heparin frequently (and is thus familiar with its name) would tend to read a prescription of Hespan (hetastarch, sodium chloride) as heparin. Cases have occurred in which nurses have infused heparin when they should have infused Hespan.


Figure 3. What is wrong here? Example of confirmation bias.


URL: https://www.sciencedirect.com/science/article/pii/B0123693993001397

Professional Issues

I. Leon Smith, Sandra Greenberg, in Comprehensive Clinical Psychology, 1998

(i) Social cognition and perception (e.g., attribution theory and biases, information integration, confirmation bias, person perception, development of stereotypes, racism).

(ii) Social interaction (e.g., interpersonal relationships, aggression, altruism, attraction).

(iii) Group dynamics and organizational structures (e.g., school systems, gang behavior, family systems, group thinking, cultural behavior, conformity, compliance, obedience, persuasion) and social influences on individual functioning.

(iv) Environmental/ecological psychology (e.g., person–environment fit, crowding, pollution, noise).

(v) Theories of personality that describe behavior and the etiology of atypical behavior. Includes knowledge of limitations in existing theories for understanding the effect of diversity (e.g., age, ethnicity, gender).

(vi) Multicultural and multiethnic diversity (e.g., racial/ethnic minorities, gender, age, disability, sexual orientation, religious groups, between- and within-group differences).

(vii) Theories of identity development of multicultural/multiethnic groups (e.g., acculturation theories, racial/ethnic identity).

(viii) Role that race, ethnicity, gender, sexual orientation, disability, and other cultural differences play in the psychosocial, political, and economic development of individuals/groups.

(ix) Sexual orientation issues (e.g., sexual identity, gay/lesbian/bisexual, family issues).

(x) Psychology of gender (e.g., psychology of women, psychology of men, gender identity development).

(xi) Disability and rehabilitation issues (e.g., inclusion, psychological impact of disability).


URL: https://www.sciencedirect.com/science/article/pii/B0080427073000389

Jury Psychology

B.E. Turvey, J.L. Freeman, in Encyclopedia of Human Behavior (Second Edition), 2012

Belief Perseverance

Belief perseverance is the tendency to prefer and shield personal or preexisting beliefs despite irrefutable evidence that they are incorrect. It is related to confirmation bias, which involves seeking out only those sources of information that support preexisting beliefs or theories and actively neglecting contrary evidence or sources of information. This means that when someone has strongly held personal beliefs, it is likely that he/she will be immune to facts or evidence that disprove those beliefs. Jurors are no different, and it is therefore important for such beliefs to be revealed during the voir dire process when they are relevant to issues at trial.


URL: https://www.sciencedirect.com/science/article/pii/B9780123750006002160

Moral Foundations Theory

Jesse Graham, ... Peter H. Ditto, in Advances in Experimental Social Psychology, 2013

4.2.5 Criterion 5: Evolutionary model demonstrates adaptive advantage

Anti-nativists often criticize evolutionary psychology as a collection of “just-so” stories. And indeed, given the power of the human imagination and the epistemological predations of the confirmation bias, one could invent an evolutionary story for just about any candidate foundation, especially if one is allowed to appeal to the good of the group. But a good evolutionary theory will specify—often with rigorous mathematical models—exactly how a putative feature conferred an adaptive advantage upon individuals (or upon other bearers of the relevant genes), in comparison to members of the same group who lacked that feature. A good evolutionary theory will not casually attribute the adaptive advantage to the group (i.e., appeal to group selection) without a great deal of additional work, for example, showing that the feature confers a very strong advantage upon groups during intergroup competition while conferring only a small disadvantage upon the individual bearer of the trait (see Wilson, 2002; and see Haidt, 2012, chapter 9, on group-selection for groupish virtues). If no clear adaptive advantage can be shown, then that is a mark against foundationhood.

Another important safeguard against “just-so” thinking is to rely upon already-existing evolutionary theories. As we said in Section 1, MFT was inspired by the obvious match between the major evolutionary theories and the major moral phenomena reported by anthropologists. We engaged in no post hoc evolutionary theorizing ourselves. The fifth row of Table 2.4 shows evolutionary theories that spell out the adaptive advantages of certain innate mechanisms which we posit to be among the modules comprising each foundation. For example, the fairness foundation is largely just an elaboration of the psychology described by Trivers (1971) as the evolved psychological mechanisms that motivate people to play “tit for tat.”

In sum, we have offered five criteria for foundationhood. Any moral ability, sensitivity, or tendency that a researcher wants to propose as an expression of an additional moral foundation should meet these criteria. At that point, the researcher will have established that there is something innate and foundational about an aspect of human morality. The only hurdle left to clear to get added to the list of moral foundations is to show that the candidate foundation is distinct from the existing foundations. For example, we do not believe that there is an “equality” foundation, not because we think there is nothing innate about equality, but because we think that equality is already accounted for by our existing foundations. Equality in the distribution of goods and rewards is (we believe) related to the Fairness foundation. Equality is a special case of equity: when all parties contributed equally, then all parties should share equally (Walster, Walster, & Berscheid, 1978). People who take more than their share are cheating. Moral judgments related to political equality—particularly the anger at bullies and dominators who oppress others—may be an expression of the candidate Liberty/oppression foundation. (See Haidt, 2012, chapter 8, for further discussion of equality, equity, and liberty.)


URL: https://www.sciencedirect.com/science/article/pii/B9780124072367000024

What is it called when I filter out information that doesn't support my opinion and I only focus on information that supports my opinion?

Confirmation bias occurs when people filter out facts and opinions that don't coincide with their preconceived notions.

What is the term for when a person only seeks information that affirms their own viewpoint?

Confirmation bias, a phrase coined by English psychologist Peter Wason, is the tendency of people to favor information that confirms or strengthens their beliefs or values, which are difficult to dislodge once affirmed. Confirmation bias is an example of a cognitive bias.

What is information seeking bias?

Information bias is a cognitive bias to seek information when it does not affect action. People can often make better predictions or choices with less information: more information is not always better.
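A minimal decision sketch (Python; the disease probabilities and treatment threshold are invented) of what "information that does not affect action" means: when every possible test result leads to the same choice, the test has zero value of information, yet it may still feel worth seeking.

    # Toy decision sketch (all numbers invented): a test is worthless when the
    # best action is the same no matter what the test would reveal.
    def best_action(p_disease):
        # Treat whenever the probability of disease exceeds a 0.25 threshold.
        return "treat" if p_disease > 0.25 else "wait"

    prior = 0.60
    posteriors = {"test positive": 0.90, "test negative": 0.40}

    print("action without testing:", best_action(prior))
    print("action after each possible result:",
          {result: best_action(p) for result, p in posteriors.items()})
    # Every branch leads to "treat", so the test cannot change the decision:
    # its value of information is zero, yet people often still want it run.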

What are the 3 types of bias?

Three types of bias can be distinguished: information bias, selection bias, and confounding. These three types of bias and their potential solutions are discussed using various examples.