In Part 1, I wrote:
The idea of this first mechanism is simplicity itself: It's very hard for anyone to see anything wrong with whatever they identify with most. OTOH, it's surprisingly easy to think the worst of whatever one happens to despise.
This mechanism is particularly noteworthy because it concerns where people start out, before they have necessarily paid any particular attention to an issue. The other three mechanisms we will consider have to do with how attitudes change. It therefore makes sense to spend the most time on this first mechanism.
In Part 1, I then went on to quote from Kahan's paper:
First is identity-protective cognition. It is much less costly to one's sense of self to believe that behavior one thinks is noble is also societally beneficial, and behavior one thinks base is also societally harmful, than vice versa. This is particularly so where one's valuation of the behavior is connected to one's self-defining group commitments. Group membership supplies individuals not only with material benefits but a range of important nonmaterial ones, including opportunities to acquire status and self-esteem. Forming beliefs at odds with those held by members of an identity-defining group can thus undermine a person's well-being--either by threatening to drive a wedge between that person and other group members, by interfering with important practices within the group, or by impugning the social competence (and thus the esteem-conferring capacity) of a group generally. Accordingly, individuals are motivated, subconsciously, to conform their perceptions of risk to ones that are dominant within their self-defining reference groups.
Kahan then goes on to discuss the "white male effect"--research dealt with at length in another paper, "Culture and Identity-Protective Cognition: Explaining the White Male Effect in Risk Perception", which Kahan co-authored with four others.
The "white male effect" refers to the tendency of white males to regard all manner of societal risk as smaller in magnitude and seriousness than do women and minorities.
Put simply, white men run things in America, and thus are much less interested in hearing about how things may be screwed up. Of course, not all white men are the same. But precisely because they do run things, those who score high on hierarchy or individualism are likely to be especially invested in things as they are. Thus identity protective cognition should be expected to play a particularly strong role in influencing them. This is, of course, quite contrary to their own self-image of superior rationality and freedom from emotion and special interest bias.
As a consequence, Kahan writes:
Consider environmental risk perceptions. Hierarchists are disposed to dismiss claims of environmental risks because those claims implicitly cast blame on societal elites. But white male hierarchists, who acquire status within their way of life by occupying positions of authority within industry and the government, have even more of a stake in resisting these risk claims than do hierarchical women, who acquire status mainly by mastering domestic roles, such as mother and homemaker. In addition, white hierarchical males are likely to display this effect in the most dramatic fashion because of the correlation between being a minority and being an egalitarian.
Along with environmental risks, the study of the white male effect also looked at gun risks and abortion. The following chart shows how race and gender identity affected attitudes towards all three types of risk, with white males consistently perceiving the least risk:
Figure 2: "White Male Effect" on Risk Perceptions, from "Culture and Identity-Protective Cognition: Explaining the White Male Effect in Risk Perception"
Switching lenses to look at worldview effects produced the following picture:
Figure 3: Cultural Worldview Effect on Risk Perceptions, from "Culture and Identity-Protective Cognition: Explaining the White Male Effect in Risk Perception"
In the "White Male" paper, the authors wrote:
As expected, persons who held relatively hierarchical and individualistic outlooks--and particularly both simultaneously--were the least concerned about environmental risks and gun risks, while persons who held relatively egalitarian and communitarian views were most concerned. With regard to abortion risks, in contrast, persons who were both relatively hierarchical and communitarian in their views were most concerned; individuals who had an egalitarian outlook, particularly those who qualified as Egalitarian Individualists, were least worried about the risk of abortion for women's health. This pattern, too, conformed to the anticipated influence of group-grid cultural dispositions.
Combining the two lenses produced the following chart. White males always perceived lower risks, but the degree to which they perceived lower risks varied considerably by worldview and by issue:
Figure 4: Size of "White Male Effect" on Risk Perception Across Cultural Groups, from "Culture and Identity-Protective Cognition: Explaining the White Male Effect in Risk Perception"
Regarding this chart, the authors wrote:
These patterns are suggestive of the hypothesized interaction of the white male effect with culture-specific forms of identity-protective cognition. But for definitive testing, it is necessary to disentangle the influences of demographic characteristics and cultural outlooks through multivariate regression analyses.
They then went on to present the results of these analyses for all three types of threat, which confirmed the hypothesis. I don't want to get too deep into the details here and lose sight of the big picture, but one part of their discussion is worth quoting for what it tells us about the relative strength of different influences. The following is from the discussion of the different regression models used in evaluating gun risks. As is standard practice, the first model tests the smallest number of variables, and each successive model adds more variables to the list. In this case, Model 2 tested gender, race, age, income, education, a taste for risk-taking, community type (urban/suburban/rural), and religion. Model 3 added the group/grid worldviews, with striking results:
Together the worldview measures increased the explanatory power of Model 2 by over 50%. Hierarchy and Individualism have the first and second largest effect sizes, respectively, of all the independent variables. When combined, they explain almost 5 times as much variance as gender, 34 times as much as education, and 17 times as much as residing in a rural environment. They explained 20 times as much as party affiliation and ideology when combined, and 10 times as much as the religious affiliation variables when combined. Again, the results strongly supported the hypothesis that cultural worldviews exert a strong identity-protective influence on cognition.
It's particularly worth noting that the worldviews explain "17 times as much as residing in a rural environment." Considering how regularly we hear about gun ownership and attitudes in rural America, it's certainly worth observing how much more important worldviews are.
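The logic of this kind of nested-model comparison--fit a model on demographics alone, add the worldview measures, and see how much the explained variance grows--can be sketched in a few lines. Everything below is synthetic and hypothetical: the variable names, coefficients, and data are invented for illustration, not taken from the study.

```python
# Hypothetical sketch of a hierarchical (nested) regression comparison.
# Model 2 uses demographic predictors only; Model 3 adds worldview scores.
# All data and effect sizes are synthetic, chosen so that worldviews
# dominate, mirroring the pattern the study reports.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic demographic predictors and worldview scores
gender = rng.integers(0, 2, n).astype(float)
education = rng.normal(0, 1, n)
hierarchy = rng.normal(0, 1, n)
individualism = rng.normal(0, 1, n)

# Synthetic outcome: worldviews weighted far more heavily than demographics
risk_perception = (0.2 * gender + 0.1 * education
                   - 0.8 * hierarchy - 0.6 * individualism
                   + rng.normal(0, 1, n))

def r_squared(predictors, y):
    """Fit OLS via least squares; return the coefficient of determination."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))  # intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_model2 = r_squared([gender, education], risk_perception)
r2_model3 = r_squared([gender, education, hierarchy, individualism],
                      risk_perception)

print(f"Model 2 (demographics only): R^2 = {r2_model2:.3f}")
print(f"Model 3 (+ worldviews):      R^2 = {r2_model3:.3f}")
```

The point of the comparison is the jump in R-squared between the two models, which is how the authors quantify claims like "17 times as much variance as residing in a rural environment."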
Biased Assimilation and Group Polarization
In Part 1, I wrote:
The second mechanism is also simple: we hear what we want, and pay little or no attention to whatever we don't want to hear. As a result, a polarized audience listening to a balanced presentation will come away even more polarized, because each faction hears only that which reinforces their views.
And I went on to quote from Kahan's paper:
Now let's consider biased assimilation and group polarization. This dynamic reflects the role of values on the processing of information. Because individuals are subconsciously motivated to persist in their beliefs, they attend to evidence and arguments in a selective fashion, crediting information that reinforces their beliefs and dismissing as noncredible information that undermines them. As a result, when individuals are exposed to balanced information supporting and challenging their existing beliefs, they become even more extremely committed to their priors. By the same token, when groups of individuals holding opposing beliefs are exposed to balanced information, they don't converge in their views; they polarize.24 We hypothesized that individuals subscribing to competing ways of life would exhibit biased assimilation and polarization when exposed to balanced, competing arguments on a risk they were culturally predisposed to credit or discount.
Here's an example of what that looks like in a relatively rare case where the sample starts off with little or no information, making a "before" and "after" comparison particularly straightforward:
The risks in this case were those posed by nanotechnology--a subject that most Americans know virtually nothing about. As Kahan explained in the accompanying text:
One study we've done to test this hypothesis focused on nanotechnology risks.25 Nanotechnology involves the creation and manipulation of extremely small materials, on the scale of atoms or molecules, which behave in ways very different from larger versions of the same materials. It's a novel science; about 80% of the American public say they have either never heard of it, or have heard only a little. We did an experiment in which we compared the nanotechnology risk perceptions of subjects to whom we supplied balanced risk-benefit information to subjects to whom we supplied no information.
The results confirmed our hypothesis about biased assimilation and polarization. In the no-information condition, individuals of opposing cultural worldviews held relatively uniform risk perceptions. That's not surprising, since the vast majority of them had never heard of nanotechnology. In the information condition, however, hierarchs and egalitarians, and individualists and communitarians, all formed opposing views. In other words, individuals holding these worldviews attended to the balanced information on nanotechnology in a selective fashion that reinforced their cultural predispositions toward environmental and technological risks generally. As a result, they polarized.
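The mechanism Kahan describes--crediting congenial evidence and discounting uncongenial evidence--can be illustrated with a toy simulation. This is not Kahan's model; it is a minimal sketch assuming a simple asymmetric update rule, with invented starting beliefs and an invented discount factor, showing how two agents seeing identical balanced evidence can nonetheless drift apart.

```python
# Toy simulation of biased assimilation: each agent's cultural
# predisposition fixes which side of the evidence it credits fully
# and which side it heavily discounts. All parameters are invented
# for illustration.
import math

def update(belief, evidence_supports, predisposed_yes,
           strength=1.0, discount=0.25):
    """Update a probability belief on one piece of evidence.

    evidence_supports: True if the argument says the risk is real.
    Evidence matching the agent's predisposition gets full weight;
    evidence against it gets only `discount` of that weight.
    """
    confirming = (evidence_supports == predisposed_yes)
    weight = strength if confirming else strength * discount
    shift = weight if evidence_supports else -weight
    log_odds = math.log(belief / (1 - belief)) + shift
    return 1 / (1 + math.exp(-log_odds))

# Two agents with mildly different priors see the same balanced stream:
# ten arguments that the risk is real and ten that it isn't, alternating.
skeptic, believer = 0.45, 0.55
for i in range(20):
    supports = (i % 2 == 0)
    skeptic = update(skeptic, supports, predisposed_yes=False)
    believer = update(believer, supports, predisposed_yes=True)

print(f"skeptic:  {skeptic:.3f}")
print(f"believer: {believer:.3f}")
```

Despite identical, perfectly balanced input, the skeptic ends up near 0 and the believer near 1: the asymmetric weighting alone is enough to turn balanced information into polarization.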
Cultural Credibility Heuristic
In Part 1, I wrote:
The third mechanism is also simple: people trust who they trust. Not who is logical, rational or reasonable to trust.
And went on to quote Kahan:
Next is the cultural credibility heuristic. Most people (in fact, all, if one thinks about it) cannot determine for themselves just how large a disputed risk is, whether of environmental catastrophe from global warming, of human illness from consumption of genetically modified foods, of accidental shootings from gun ownership, etc. They must defer to those whom they find credible to tell them which risk claims and supporting evidence to believe and which to disbelieve. The cultural credibility heuristic refers to the hypothesized tendency of individuals to impute to experts whom they perceive as sharing their values the sorts of qualities--including knowledge, honesty, and shared interest--that make their positions on risk worthy of being credited.
To test this, Kahan and his colleagues constructed composite fake experts embodying each of the four ideal types, presented arguments attributed to them, and contrasted the results with other protocols. The risk subject in this case was the HPV vaccine:
We studied the HPV-vaccine risk perceptions of 1,500 Americans. The sample was divided into three groups. One was supplied no information about the HPV vaccine. Another was furnished balanced information in the form of opposing arguments on whether its benefits outweighed its risks.
The final group was exposed to the same arguments, which in this treatment were attributed to fictional, culturally identifiable experts, who were described as being on the faculties of major universities.... We ended up with four culturally identifiable policy experts whose perceived cultural values located them in the quadrants defined by the intersection of group and grid.
Thus, in the first case, the results were due to the identity-protective-cognition effect; in the second, to the biased assimilation and polarization effect; and in the third, to the cultural credibility heuristic, operating either in the expected direction or contrary to it. As seen below, when the experts weighed in according to expectations, the cultural credibility heuristic was modestly stronger than the biased assimilation and polarization effect, in the same direction. But when the experts weighed in counter to expectations, the effect was even stronger in the opposite direction. This can be seen by comparing the third and fourth dots in each of the four charts below: the first two show hierarchs vs. egalitarians on the left and their differences on the right; the second two show individualists vs. communitarians on the left and their differences on the right:
Cultural-Identity Affirmation

In Part 1, I wrote:
The fourth and final mechanism is also simple: Expanding on the precept that you catch more flies with honey than vinegar, you make people more receptive to potentially threatening information if you first boost their sense of self-worth, so that they are inherently more impervious to threat.
And I quoted Kahan as follows:
The next mechanism, cultural-identity affirmation, also can be seen as a type of "cultural debiasing" strategy. This one is based on self-affirmation, a mechanism which is essentially the mirror image of identity-protective cognition and which has been extensively documented by Geoff Cohen, one of the Cultural Cognition Project members.27 Identity-protective cognition posits that individuals react dismissively to information that is discordant with their values as a type of identity self-defense mechanism. With self-affirmation, individuals experience a stimulus--perhaps being told they scored high on a test, or being required to write a short essay on their best attributes--that makes a worthy trait of theirs salient to them. This affirming experience creates a boost in a person's self-worth and self-esteem that essentially buffers the sense of threat he or she would otherwise experience when confronted with information that challenges beliefs dominant within an important reference group. As a result, individuals react in a more open-minded way to potentially identity-threatening information, and often experience a durable change in their prior beliefs.
Cultural-identity affirmation hypothesizes that you can get the same effect when you communicate information about risk in a way that affirms rather than threatens people's cultural worldview.28
The example Kahan investigated to explore this mechanism related directly to global warming denialism. Two articles were presented to test subjects, both advancing the threat of global warming, but in one case saying that a panel of scientists recommended an "anti-pollution" solution, while in the other it said that the panel recommended a nuclear power solution. The results were striking:
Note that what we're talking about here--as we are throughout this entire series--is simply the perception of risk. This says nothing about people's ideas about what should be done about those risks. In particular, the biggest problems with nuclear power are the weapons spill-over (dirty bombs as well as nuclear devices) and the total lack of life-cycle safety, meaning that nuclear waste could continue to be dangerous for tens of thousands of years--a time-frame at least an order of magnitude longer than that of any human civilization. These entail risks that get us into two entirely different cans of worms.
In Part 3 of this series, we will attempt to take all of the above into account and ask what it tells us about global warming denialism and how to combat it. This includes--but is not limited to--a consideration of what Kahan and his colleagues have to say on the subject.