Public Knowledge, Scientific Literacy, Numeracy, and Perceptions of Climate Change
Summary and Keywords
It is a widely accepted scientific fact that our climate is changing and that this change is caused by human activity. Despite the scientific consensus, many individuals in the United States fail to grasp the extent of the consensus and continue to deny both the existence and cause of climate change; the proportion of the population holding these beliefs has been stable in recent history. Most of the American public also believe they know a lot about climate change although knowledge tests do not always reflect their positive perceptions. There are two frequent hypotheses about public knowledge and climate change beliefs: (a) providing the public with more climate science information, thus making them more knowledgeable, will bring the beliefs of the public closer to those of climate scientists and (b) individuals with greater cognitive ability (e.g., scientific literacy or numeracy) will have climate change beliefs more like those of experts. However, data do not always support this proposed link between knowledge, ability, and beliefs. A better predictor of beliefs in the United States is political identity. For example, compared to liberals, conservatives consistently perceive less risk from climate change and, perhaps as a result, are less likely to hold scientifically accurate climate change beliefs, regardless of their cognitive abilities. And greater knowledge and ability, rather than being related to more accurate climate change beliefs, tend to relate to increased polarization across political identities, such that the difference in beliefs between conservatives and liberals with high cognitive ability is greater than the difference in beliefs between conservatives and liberals with low cognitive ability.
It is a widely accepted scientific fact that climate change is occurring, and if current global warming patterns continue, severe negative consequences will follow (Smith et al., 2009). The Earth may see increases in extreme weather events such as heat waves, floods, and wildfires. Irreversible loss of vulnerable plant and animal species and ecosystems may occur. Large-scale monetary losses, as well as loss of human life, may accompany these changes. These losses will most likely have a disproportionate effect on already disadvantaged populations. Given this information, it is not unreasonable to think that most, if not all, individuals would support attempts to reduce both causes and consequences of climate change. However, in the United States, widespread conflict exists regarding the validity of scientific claims about climate change.
In the face of substantially supported scientific data, why do Americans continue to disregard the consensus of the scientific community and argue in opposition to policies that would mitigate the negative impacts of climate change? One hypothesis is the deficit model, which claims that much of the public lacks the skills or knowledge necessary to fully comprehend climate science information (e.g., Bauer et al., 2007; Miller, 1983). Increased ability and knowledge, the model hypothesizes, would bring the public’s beliefs closer to expert opinion. However, others hypothesize that an individual’s values and beliefs, rather than abilities, are more predictive of how an individual interprets scientific information (e.g., Kahan et al., 2012). This article will focus on how climate change beliefs differ across the American public, specifically looking at how scientific knowledge, climate change knowledge, and numeracy might play a role in belief development and persistence. Additionally, this article examines the ways in which values, such as political ideologies and worldviews, might affect these relationships.
Climate Change Beliefs
A variety of climate change beliefs exist and differ among Americans. Some important measures of belief focus on whether climate change exists, whether there is scientific agreement on its existence, the extent of concern or worry about climate change, and, if it does exist, what causes it (e.g., is it anthropogenic, i.e., human-caused?). This section focuses on how these beliefs are measured, how they have changed over time, and how the manner in which corrective information is presented affects people.
Climate Change Beliefs over Time in the United States
Do People Believe That Our Climate Is Changing?
Beliefs in the existence of climate change have remained relatively constant over the last five years (Figure 1). In 2015, 67%, 18%, and 19% of respondents indicated that global warming was happening, not happening, or that they were unsure, respectively. The two groups giving a response other than “unsure” were similarly confident in their beliefs, with just over half of each group indicating they were “extremely” or “very” sure of their position (Leiserowitz et al., 2015). Additionally, these beliefs varied by geographic region within the United States: in some counties as few as 43% of residents indicated belief in climate change (Trimble County, Kentucky), whereas as many as 80% did in other counties (New York County, New York) (Howe et al., 2015).
Beliefs in Scientific Consensus
Based on a study analyzing 11,944 peer-reviewed climate science articles, the rate of scientific consensus on human-caused climate change currently lies at 97% (Cook et al., 2013). However, individuals tend to underestimate this consensus, with one study finding that the average perceived consensus rating was 67% (van der Linden et al., 2015). In 2015, a different survey asking participants “what percentage of climate scientists think that human-caused global warming is happening?” found that only 12% of Americans correctly rated the extent of this scientific consensus as greater than 90% (Leiserowitz et al., 2015). Additionally, only half of this sample indicated that more than 50% of scientists agree that human-caused global warming is happening. This underestimation has been found consistently in surveys since 2008.
Worry and Concern about Climate Change
Over the last 15 years, general concern about climate change and its possible effects has fluctuated somewhat, with the percentage of people who say they worry “a great deal” or “a fair amount” about global warming moving from a low of 51% in 2004 to a high of 66% in 2008. After 2008, the percentage of people with this concern dropped to the mid-fifties and remained relatively constant for several years. In a 2016 Gallup survey, 64% of the U.S. public indicated this level of worry (Gallup, 2016; Figure 1). Climate-change concern also seems to vary alongside other beliefs: people who most strongly believe that climate change is happening and that it is human-caused tend to be the most concerned, whereas those who do not believe our climate is changing are the least concerned (Maibach et al., 2011).
It is interesting to note the decline in worry about climate change after 2008. Mariconti (2011) hypothesized a relationship between this decline and the 2008 U.S. economic recession. He suggested that individuals were preoccupied with worry about the economy, which reduced worry about climate change. This hypothesis is consistent with individuals having a “finite pool of worry” (Marx et al., 2007), such that an individual’s amount of worry for one risk is negatively correlated with the amount of worry for another risk.
Beliefs about the Causes of Climate Change
In March 2016, when choosing among potential causes of global warming in response to the prompt, “And from what you have heard or read, do you believe increases in the Earth’s temperature over the last century are due more to . . .,” 65% of respondents indicated that it is human-caused, while 31% of respondents indicated that it is naturally caused (Gallup, 2016). These responses have varied over the course of the last 15 years, though 2016’s result is the highest in the last 6 years, with a low of 50% indicating a belief that climate change is human-caused in 2010 (Figure 1).
Do Beliefs About Climate Change Matter?
Scientific consensus beliefs may be important as a precursor to other climate change beliefs according to van der Linden et al.’s (2015) “gateway belief” model. Correlational studies have found that higher scientific consensus beliefs were related to higher certainty that climate change exists and that it is human-caused (e.g., Ding et al., 2011; McCright & Dunlap, 2011). These studies also found that beliefs that climate change exists and that it is human-caused mediated the relation between scientific consensus beliefs and climate-change policy support. Based on these data, the researchers suggested that increases in perceived scientific consensus may lead to increases in other climate change beliefs, which in turn may lead to increased support for climate change-mitigating policies. In addition, at least one study has found experimental evidence for scientific consensus beliefs as a causal precursor to other climate change beliefs (van der Linden et al., 2015). In this study, participants’ estimates of the scientific consensus and other climate change-related beliefs were assessed before and after receiving information about the actual scientific consensus (97%). Results showed that receiving information about the scientific consensus significantly increased individuals’ estimates of that consensus (67% pre-test vs. 80% post-test), which, in turn, significantly increased other beliefs about climate change pre- to post-test, including beliefs that the climate is changing and that it is human-caused; worry about climate change also increased (van der Linden et al., 2015).
Other studies have examined additional beliefs about climate change and their effects on behaviors. In one study, the perceived costs and benefits of performing various types of climate-friendly behaviors were the strongest predictors of displaying those same behaviors (Tobler et al., 2012a). However, concern about climate change and skepticism were also important. Individuals who were more concerned about climate change were more likely to display low-cost climate-friendly behaviors, such as saving electricity, than individuals who were less concerned. Additionally, individuals who were more skeptical that climate change exists were less likely than less skeptical individuals to display indirect climate-friendly behaviors, such as voting for pro-environmental politicians and donating money to pro-environmental organizations. Milfont (2012) also demonstrated that increased concern about climate change related to increased perceived self-efficacy: more concerned respondents were more likely than less concerned respondents to think they had the ability to influence the outcomes of global warming. These results support the notion that beliefs about climate change have an impact on pro-environmental behavior.
Can Climate Change Beliefs Be Changed?
Researchers have recently turned to the question of whether climate change beliefs can be changed. Information presentation formats may matter to the public’s comprehension and use of information about climate change, just as they matter in other domains (e.g., Garcia-Retamero & Cokely, 2013; Peters et al., 2014a). In one study, highlighting the scientific consensus about climate change using a graph and text increased perceptions of the consensus (Lewandowsky et al., 2012). In particular, individuals who received information about the scientific consensus on human-caused climate change rated the consensus as higher (88%) than individuals who did not receive this information (67%).
Myers and her colleagues (2015) further tested information presentation techniques that might best communicate the real consensus of 97%. The study first found that providing numeric (e.g., 97%) versus nonnumeric (e.g., “an overwhelming majority”) information about the level of scientific consensus regarding human-caused climate change resulted in differences in estimated scientific agreement. Those provided numeric information rated the agreement higher (roughly 77%) than those provided nonnumeric information (62%). Additionally, the specificity of the numeric information also affected ratings of scientific agreement, such that more specific information (e.g., 97.5%) resulted in higher ratings (78% consensus) than less specific information (e.g., “more than 9 out of 10”; 68% consensus). Providing simple messages can thus increase scientific consensus estimates (see also van der Linden et al., 2014).
Even Myers et al.’s (2015) most effective message, however, did not result in consensus estimates equal to the actual level of consensus. Instead, individuals appeared to update their beliefs rather than replace them with the new information. For instance, an individual may initially believe that the consensus is about 65%. When told that it is actually 97.5%, that individual may be unwilling to completely change the original estimate, instead settling for a number somewhere in the middle (e.g., Hogarth & Einhorn, 1992). It may thus be important for individuals to hear messages about climate change multiple times, potentially from multiple sources, in order to arrive at accurate beliefs. Additionally, more research is needed on how to present data about climate change in ways that allow the public to better understand and use it in decisions.
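This kind of partial updating can be sketched as a simple anchoring-and-adjustment rule in which the new estimate is a weighted average of the prior estimate and the message. The weight of 0.4 below is hypothetical, chosen only for illustration; it is not a parameter estimated in the studies above.

```python
def update_estimate(prior: float, message: float, weight: float) -> float:
    """Anchoring-and-adjustment style partial updating: move the prior
    only part of the way toward the communicated value."""
    return (1 - weight) * prior + weight * message

# Hypothetical numbers: prior consensus estimate of 65%, message says 97.5%.
# With a weight of 0.4, the new estimate lands between the two, not at 97.5.
once = update_estimate(65.0, 97.5, 0.4)    # ~78.0
# Repeated exposure moves the estimate closer to the message each time.
twice = update_estimate(once, 97.5, 0.4)   # ~85.8
print(once, twice)
```

This is consistent with the suggestion that repeated messages, potentially from multiple sources, may be needed before estimates converge on the actual consensus.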
Some researchers, however, disagree that consensus messages like those above help to increase beliefs about global warming. Using nationally representative survey data, Kahan (2015) analyzed changes in beliefs about human-caused climate change in relation to several studies, released over the last decade, confirming the high rate of consensus among scientists. He found that individuals’ beliefs did not increase following the release of any of these consensus studies, even though most were highly publicized. Kahan hypothesized that the difference between these real-world results and the many studies finding belief increases after consensus messaging lies in the experimental setting: the messaging studies were performed in highly controlled settings and, as a result, may lack external validity, failing to represent how people interact with consensus messaging in the less controlled settings of the real world.
Kahan (2015) further questioned the framing of questions like those reviewed in this section and suggested that framing substantially affects responses in ways that undermine the “gateway belief” model from van der Linden et al. (2015). For instance, consider the following true statements: (a) “The globally averaged surface air temperatures were higher for the first decade of the 21st century (2000–2009) than for the last decade of the 20th century (1990–1999)” and (b) “Climate scientists believe that the globally averaged surface air temperatures were higher . . .” In his study, responses to the first statement (true or false) depended upon political identity, whereas responses to the second statement did not diverge along party lines. He concluded that only the second statement assessed climate knowledge; the first statement reflected respondents’ personal feelings and beliefs regarding climate change rather than their knowledge of the science per se. As a result, it is possible that data supporting the “gateway belief” model simply reflect personal beliefs across a variety of questions, some of which look like but (according to Kahan) are not knowledge questions. Thus, more left-leaning respondents hold one set of internally consistent beliefs that include items that look like knowledge items, whereas more right-leaning respondents hold a different set of internally consistent beliefs. This view is not entirely consistent with the data, however; the scientific consensus questions that form the basis of the “gateway belief” model are more like statement (b) above and specifically concern what scientists believe. As a result, and consistent with Kahan (2015), responses to them do not diverge based on political identity.
More research is needed to understand whether studies in controlled settings replicate what happens in the real world and, if not, why not. What is known as the “deficit model” is incorrect; it is certainly inadequate to simply throw more information at the public to clarify some lack of understanding or misunderstanding. Less well known at this point, however, is whether carefully choosing and consistently using evidence-based approaches to presenting information may make a difference to public beliefs and counter Kahan’s points. Nonetheless, a focus on knowledge alone will not be enough in situations where values are fundamental. Future research is needed to determine whether highly politicized scientific beliefs, such as those regarding climate change, can be changed in the real world using evidence-based messaging techniques that, for example, reduce cognitive effort, draw attention to the most critical information, and help individuals understand the meaning of the data. The politicization of climate change and its potential impact on climate change beliefs are discussed further later in this article.
Knowledge and Ability
At times, experts appear to believe that the public is scientifically illiterate and innumerate, unable to handle technical facts. Presumably as a result, the public relies on simple heuristics (mental shortcuts) to determine what they believe and how they should decide on a variety of issues, including climate change (Kahan et al., 2012; Slovic, 1987, 1999; Fischhoff, 2013). Historically, this lack of understanding has been termed a “public deficit” of knowledge (Bauer et al., 2007), such that an inability to properly evaluate scientific evidence is linked with an inability to develop informed opinions based on the results of scientific studies (Miller, 1983). Individuals do differ considerably in how much they know about science in general and climate change in particular. This section explores how the American public responds on measures of scientific knowledge, including specific climate change knowledge and scientific literacy, as well as numeracy (numeric literacy). It also focuses on how numeracy skills, in particular, are associated with judgment and decision-making processes, including risk perceptions, and on why numeracy may be important in climate change, where it has rarely been considered. Specific ways in which these knowledge and ability indicators relate to climate change are also discussed.
Climate Change Knowledge
Climate change knowledge assesses how much people know or think they know about the science behind climate change. Such knowledge may be important to climate-change belief development because increased understanding about the science may lead individuals to come to scientist-consistent conclusions.
Subjective Climate Change Knowledge
Subjective measures of climate change knowledge are self-reported assessments of how much an individual thinks he or she knows about climate change. In a 2015 Gallup survey, 77% of Americans said they were “very well informed” or “fairly well informed” about climate science, when asked “. . . how well do you feel you understand [global warming]?” (Gallup, 2015). This number has increased in a relatively consistent linear manner over the last few decades, from a low of 53% in 1992 to a high of 84% in 2014.
Objective Measures of Knowledge
Objective indicators of climate-change knowledge also exist and attempt to capture what individuals understand about how the climate system works. These measures can provide communication experts with a baseline of what people know and do not know. Experts can then use this information to figure out what needs to be communicated to the public to increase their understanding of climate science (Fischhoff, 2013). One of these objective indicators asks “. . . which one of the five [models] best represents your understanding of how the climate system works?” Individuals then must choose among the following five options: (a) the “gradual” model, which suggests global warming is a slow process that will eventually produce dangerous effects; (b) the “fragile” model, which suggests that small changes in climate can produce immediate dangerous effects; (c) the “stable” model, which suggests that global warming will not dangerously affect the Earth’s climate; (d) the “random” model, which suggests that the Earth’s climate cannot be predicted; and (e) the “threshold” model, which suggests climate stability within a specific range, though global warming can produce dangerous effects if changes become too large (Leiserowitz et al., 2010). The most accepted model of the Earth’s climate system is the threshold model (Schwartz & Randall, 2003). In 2010, approximately 34% of Americans surveyed correctly chose this model to describe the current state of our climate system (Leiserowitz et al., 2010). A smaller percentage of individuals chose the gradual (24%), fragile (11%), stable (10%), or random (21%) models, indicating that many Americans do not fully understand how our climate system works and how changes might affect it.
In addition to describing the climate system, individuals can also indicate their climate knowledge through their understanding of how global warming works and how it might be mitigated. In 2010 (Leiserowitz et al., 2010), approximately 57% of Americans indicated that they both knew of the greenhouse effect (the idea that gases in the atmosphere trap heat) and could correctly define it. In terms of mitigating global warming, most Americans (75%) could correctly identify that using renewable energy sources instead of fossil fuels could reduce future global warming. However, many Americans also incorrectly said that reducing toxic waste, banning aerosol spray cans, or minimizing holes in the ozone layer could also reduce global warming (67%, 69%, and 43%, respectively). Furthermore, 42% of Americans believe that because science has such a difficult time predicting weather patterns just days in advance, science is also unable to predict future climate change.
It is important to note a potential flaw in these climate-change knowledge measures. Specifically, some of them may be entangled with political ideology, similar to what Kahan (2015) found for some climate change belief measures. If this entanglement exists, it is possible that connections between climate-change knowledge and beliefs reflect political ideology rather than any causal relation between these two variables. As a result, researchers should consider how to construct knowledge questions that are free of political ideology in order to parse out possible unique effects of climate change knowledge on climate change beliefs.
Scientific Literacy
It is possible that general scientific knowledge influences the degree to which an individual is able to understand scientific facts about climate change and, subsequently, his or her beliefs about climate change. This general understanding of science may reflect greater background scientific knowledge, more appreciation for science, and/or a willingness to accept scientific findings. As a result, individuals with higher general science knowledge may be more motivated and able to learn about scientific findings in the climate change domain.
One measure of scientific knowledge is scientific literacy. Measures of scientific literacy were first developed in response to what has previously been referred to as the “public deficit” of knowledge. They were meant to assess improvements in that deficit as educators sought to increase levels of scientific literacy throughout the general public to produce a more informed population that could more readily interact with and understand the scientific community (Miller, 1983). Historically, scientific literacy has been assessed using three distinct measures: (a) a factual knowledge scale, (b) an understanding of scientific inquiry scale, and (c) an ability to distinguish pseudoscience from science scale (NSF Science Indicators, 2016). These scales were proposed by Miller (1983) in the mid-1980s and have since been developed into the scales currently in use today. Because these measures have appeared in national surveys for decades, they provide insight into how scientific literacy levels are changing as well as a snapshot of what people know right now.
The factual knowledge measure asks a series of true-or-false questions about known physical and biological science phenomena. For example, individuals read statements such as “the center of the Earth is very hot” (true) or “all radioactivity is man-made” (false) and mark each statement as true or false. Individuals are scored based on the total number of statements they mark correctly. Total scores on this measure have not changed much over the last several decades (Figure 2), although scores on individual items have varied. In 2014, the average American answered 65% of the scientific fact questions correctly. Scores tend to vary across gender, education level, and number of science and mathematics courses, with higher scores obtained by males, people with more education, and people who have taken more science and math courses. Though the scale is regularly used, critics claim it overemphasizes “textbook knowledge” of scientific ideas and suggest that it may be irrelevant to an individual’s capacity to comprehend new scientific information (Bauer et al., 2007).
The scientific inquiry measure assesses the degree to which individuals understand how science generates and assesses evidence using probability, experiments, and the scientific method, making it a better candidate for predicting comprehension of new scientific information. For example, the probability questions provide background information about a couple who is told they have a 1 in 4 chance of having a child with an inherited illness. Respondents must then answer the following questions: (a) “Does this mean that if the first child has the illness, the next three will not have the illness?” (No), and (b) “Does this mean that each of the couple’s children will have the same risk of suffering from the illness?” (Yes). Sixty-six percent of respondents answered both questions correctly. Additionally, 53% of respondents displayed knowledge of experiments by correctly describing the benefits of using a control group to test a hypothesis. However, only 26% of individuals displayed understanding of the scientific method by sufficiently describing the scientific process using terms like “theories and hypotheses” or “rigorous/systematic comparison” (NSF Science Indicators, 2016). If an individual correctly responds to the probability questions and either the experiment or scientific methods question, they are classified as “understanding scientific inquiry.” The proportion of Americans who attain this level of understanding has varied greatly over the last 15 years, with a high of 46% in 2014 and a low of 32% in 1999. Although this trend could mean a general improvement in performance, the wide variation in scores does not support this claim (Figure 2). Understanding the process of scientific inquiry appears to be important: For individuals to be able to identify credible scientific findings, they must be able to identify when appropriate scientific methods are used.
Building from the ideas contained within the scientific inquiry measure (i.e., the ability to identify when scientific methods are used), the pseudoscience measure assesses the degree to which an individual can distinguish between pseudoscience and science. For example, individuals rated nonscientific areas of study such as extrasensory perception (ESP), magnetic therapy, and astrology as either “not at all scientific,” “sort of scientific,” or “very scientific.” In 2014, 65% of respondents said astrology was “not at all scientific,” indicating an ability to distinguish between science and pseudoscience. This is one of the highest percentages recorded in the last several decades, with the previous high recorded in 2004 (66%) and the lowest percentage in 1979 (50%) (NSF Science Indicators, 2016; see Figure 2).
Overall, scientific literacy appears to be increasing slightly, but survey results suggest that many Americans remain unable to understand basic scientific terms and methods. The importance of this deficit is discussed later in this article.
Numeracy
Being able to understand and use numbers may be important with respect to climate change. Individuals are often provided an onslaught of numeric information that they must use to form accurate perceptions of climate-change causes and risks. Numbers also can be used as goals and measures of progress in climate change. For example, the 2015 climate conference in Paris set a goal of holding the increase in the global average temperature to 2 °C, or 3.6 °F, above preindustrial levels (Sellers, 2016). Although such data are provided to instruct, inform, and give meaning to information about climate change, not all Americans may be sufficiently able to understand or use this information. This inability may matter because climate change data are unfamiliar to many individuals, who thus may be unable to make sense of what the provided numbers mean in climate change contexts.
Objective Numeracy
Objective numeracy is defined as the ability to understand and use probabilistic and other mathematical concepts. Several scales have been developed to measure this ability. One early scale was initially used to measure how well medical patients could understand and use risk information regarding mammography screenings (Schwartz et al., 1997). This scale included three items. One of the more difficult items asked, “In the ACME publishing sweepstakes, the chance of winning a car is 1 in 1,000. What percent of tickets to ACME publishing sweepstakes win a car?” (Answer: 0.1%). In the original study, only approximately 20% of participants responded correctly. The Lipkus scale (Lipkus et al., 2001) expanded the original three-item scale with eight additional items. This scale assesses how well people can use percentages, proportions, and probabilities, and how well they can convert numbers from one of these formats to another. One of the simpler questions asks, “If the chance of getting a disease is 20 out of 100, this would be the same as having a _____% chance of getting the disease” (Answer: 20%). In educated samples, roughly 70% of participants respond correctly. Weller et al. (2013) took items from other scales, including the Lipkus, and developed a more predictive, Rasch-based numeracy scale that covers a broader range of abilities. The scale consists of eight items, including items from the Schwartz et al. (1997) and Lipkus et al. (2001) scales, as well as two CRT (Cognitive Reflection Test) items from Frederick (2005). For example, one question asks, “A bat and a ball cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?” (Answer: $0.05). The intuitive answer is $0.10; however, an individual who thinks more deeply about the question and performs the calculation arrives at the correct answer of $0.05. Roughly 15%–35% of people answer this question correctly, making it one of the more difficult questions (Weller et al., 2013).
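The keyed answers to the items quoted above follow from simple arithmetic, which this sketch verifies directly. The variable names are illustrative only.

```python
# Schwartz et al. (1997) item: a 1-in-1,000 chance expressed as a percent.
sweepstakes_pct = 1 / 1000 * 100             # ~0.1 (%)

# Lipkus et al. (2001) item: 20 out of 100 expressed as a percent.
disease_pct = 20 / 100 * 100                 # ~20.0 (%)

# CRT bat-and-ball item: ball + bat = 1.10 and bat = ball + 1.00,
# so 2 * ball + 1.00 = 1.10, giving ball = (1.10 - 1.00) / 2.
ball = (1.10 - 1.00) / 2                     # ~0.05 dollars; the intuitive
                                             # (wrong) answer is 0.10
print(sweepstakes_pct, disease_pct, round(ball, 2))
```

The bat-and-ball line makes the text's point concrete: the correct answer requires setting up and solving the two constraints rather than reporting the number that first comes to mind.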
Subjective Numeracy
Subjective numeracy is defined as an individual’s perception of his or her own numeric ability and preferences (Peters & Bjalkebring, 2015). One eight-item scale uses four items to measure individuals’ beliefs about their ability to perform mathematical operations and four items to measure their preferences for how numeric information is presented (Fagerlin et al., 2007). One ability item asks, “How good are you at working with fractions?” and respondents rate themselves on a scale from 1 (not at all good) to 6 (extremely good). The mean response was 3.67 in the original sample, a value consistent across many other studies. One preference item asks, “When reading the newspaper, how helpful do you find tables and graphs that are parts of a story?” (1 = not at all, 6 = extremely). The mean response for this item was 3.83. Although subjective numeracy is related to objective measures of numeracy, the correlation between the two is only moderate (r = .46; Peters & Bjalkebring, 2015) because people are not always accurate estimators of their own abilities. In this sense, an individual scoring high in objective numeracy may perceive his or her abilities as low, and vice versa.
Objective Numeracy’s Role in Judgment and Decision Making
Research has demonstrated objective numeracy’s importance in judgment and decision-making processes (e.g., Peters, 2012). For example, individuals lower versus higher in numeracy are more likely to use mental shortcuts, or heuristics, when interacting with numeric information (Peters et al., 2006; Sinayev & Peters, 2015; Toplak et al., 2011). Less numerate individuals are more susceptible to framing effects, meaning their interpretation of information varies depending on how the information is presented (Peters et al., 2006). Additionally, the less numerate are more influenced by nonnumeric information, even when that information is irrelevant to the current decision. For example, less numerate individuals are more likely than the highly numerate to make decisions based on their current mood or emotional state (Peters et al., 2009; Traczyk & Fulawka, 2016). In contrast, individuals higher in numeracy are more likely to perform basic and complex number operations rather than use these heuristics to inform their decisions (Peters & Bjalkebring, 2015). Additionally, more numerate individuals are better able to draw affective meaning from numbers, meaning they better understand the “goodness” or “badness” of a number in certain contexts and, as a result, use that numeric information in judgments and decisions (Peters et al., 2006; Petrova et al., 2013).
Numeracy may relate to beliefs about climate change because the magnitude and timeline of climate-change effects are often presented in probabilistic terms. For instance, one report suggests “that ~20-30% of known plant and animal species are likely to be at increased risk of extinction if increases in global average temperature exceed 1.5 °C to 2.5 °C” (Smith et al., 2009). For many individuals, this probabilistic information is likely difficult to parse, especially because it conveys uncertainty regarding climate change’s potential effects. Numeracy may shape how individuals react to such information and assess the risk. Those higher in numeracy may use the explicit probability, whereas those lower in numeracy may instead rely on their affective reactions to the potential for animal extinction when forming risk perceptions and beliefs about the consequences of climate change, or they may ignore the information entirely because they perceive the probabilities as low and meaningless.
Little research exists on how individuals higher versus lower in numeracy react specifically to numeric climate information. In one study, however, participants were queried about their worry and concern about polar bears (Hart, 2013). Prior to responding, experimental participants read a story that included either descriptive information (“most polar bears”) or numeric information (“12,000 out of 18,000 polar bears”) about the quantity of polar bears that may die; control participants did not read a story. Relative to control, less numerate individuals reported greater worry about polar bears in the experimental conditions than did the highly numerate, consistent with the less numerate responding more emotionally when polar bear deaths were made salient. Comparing the two experimental conditions revealed that less numerate participants worried more when provided the numeric rather than the descriptive passage, whereas worry among those higher in numeracy did not differ between the two formats (Hart, 2013). This result is inconsistent with prior numeracy research, which generally shows that providing numeric information matters more to judgments among the highly numerate (e.g., Peters, 2012; Peters, Hart, Tusler, & Fraenkel, 2014), and more research is needed to understand these findings. It may be that the large quantity and proportion of polar bears at risk were particularly intuitive (e.g., Peters, Slovic, Västfjäll, & Mertz, 2008). Whatever the explanation, it seems likely that climate change messaging must be tailored to individuals at varying levels of numeracy to affect them in meaningful ways.
Subjective Numeracy’s Role in Judgment and Decision Making
Recent research has indicated roles for subjective numeracy in judgment and decision processes that differ from those of objective numeracy (Peters & Bjalkebring, 2015). In particular, individuals who rate themselves lower versus higher in subjective numeracy are less motivated to perform numeric tasks and are more likely to react negatively to numbers, independent of their objective numeracy scores (Peters & Bjalkebring, 2015). If these motivational hypotheses are borne out in future studies, individuals lower in subjective numeracy may prove less likely to seek out and interact with the numeric information often provided by climate scientists.
Can Numeric Understanding Be Improved?
Research, thus far, suggests that numeric understanding can be improved with long-term educational efforts to improve numeracy as well as short-term efforts to improve numeric understanding through careful choices of how information is presented. To begin, numeracy is related to education; individuals with more education tend to be more numerate (Lipkus et al., 2001; Galesic & Garcia-Retamero, 2010). As a result, adult numeracy is likely to increase either through greater formal education in general (Peters et al., 2010) or through focused educational efforts (Park & Brannon, 2014).
However, even the highly educated can be innumerate (Lipkus et al., 2001), indicating the importance of presenting numeric information in ways that are accessible to individuals of all numeric abilities. Peters, Meilleur, and Tompkins (2014b) summarized five strategies from the literature for presenting numeric information effectively, some of which have been tested in climate-change scenarios:
1) Provide numeric information rather than descriptive information (e.g., 97% consensus vs. “most”)
2) Reduce cognitive effort (e.g., use graphs, present less information)
3) Provide evaluative meaning (i.e., the “goodness” or “badness” of the information, such as the negative consequences accompanying a >2 °F increase in global temperatures)
4) Draw attention to important information (e.g., by highlighting it)
5) Set up appropriate systems (i.e., use appropriate information presentation formats for the audience).
Reducing cognitive effort, for example, can be done through the provision of simple visual aids such as pictographs; research has demonstrated that such provision can help those higher and lower in numeracy understand health risks better (Garcia-Retamero et al., 2013; Hess et al., 2011). These findings may be relevant to the presentation of risks in the climate change domain.
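As an illustration of the pictograph idea, a short sketch renders a hypothetical 12-in-100 risk as a 10 × 10 icon grid (the layout and symbols are assumptions for display purposes, not a validated format):

```python
def pictograph(affected, total=100, per_row=10, hit="X", miss="."):
    """Render a text pictograph: `affected` filled icons out of `total`."""
    icons = [hit] * affected + [miss] * (total - affected)
    rows = [" ".join(icons[i:i + per_row]) for i in range(0, total, per_row)]
    return "\n".join(rows)

# A hypothetical risk of 12 in 100: the filled icons make the
# part-to-whole relation visible without any explicit arithmetic.
print(pictograph(12))
```

In practice, graphical icon arrays would replace the text symbols; the point of the format is that the part-to-whole relation is read directly off the display, reducing the computation the reader must perform.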
Like scientific literacy, greater numeracy has the potential to improve individuals’ interactions with and responses to new scientific information, such as that regarding climate change. Increased numeric ability has been hypothesized to lead to more accurate interpretation of numeric information and, thus, to more accurate beliefs regarding the causes and risks of climate change. The next section, however, examines recent research indicating that greater ability may instead fuel motivated cognition.
Motivated Reasoning, Ability, and Beliefs about Climate Change
As previously mentioned, it is commonly believed that increased knowledge about, and ability to understand, scientific findings will lead to beliefs and attitudes about those findings that are more consistent with those of experts. However, knowledge and ability may not be enough to overcome other factors contributing to the development and persistence of climate change beliefs. Individuals, experts and laypeople alike, are often motivated to maintain prior-held beliefs and values across a variety of domains, including climate change (Kunda, 1990). This section examines evidence for and against this hypothesis, specifically the relations (or lack thereof) between climate change beliefs and climate change knowledge, scientific knowledge (literacy), and numeracy, as well as the potential for political ideology or worldview to affect these relations.
Substantial evidence exists for biased information processing, also known as motivated reasoning, among individuals holding strong prior beliefs (see Kunda, 1990, for a review). In one early study, for example, researchers found that people holding strong beliefs about capital punishment interpreted the same information in diverging ways (Lord et al., 1979). All participants in this study were exposed to two empirical arguments, one supporting and one opposing the effectiveness of capital punishment at deterring crime. Individuals who were strongly opposed to the death penalty prior to the experiment rated the argument against capital punishment (and consistent with their own beliefs) as more convincing, whereas individuals who were strong proponents of the death penalty rated the supporting argument as more convincing. Participants also appeared to be more critical of the argument supporting the view opposite their own, indicating the potential for people to discount evidence that might disconfirm their prior-held beliefs. Ultimately, support for the death penalty became even more polarized after participants read the arguments, despite everyone having read the same evidence.
Motivated Reasoning and Non–Climate Science Domains
Polarization between people of disparate initial beliefs has also been found in other politically charged domains, such as healthcare reform in the United States. Nyhan et al. (2013), for example, demonstrated that attempts to correct misinformation about Obamacare death panels were received differently by individuals from different political groups, with the greater differences among those with the highest cognitive ability, assessed in this case as political knowledge. In this study, among individuals higher in political knowledge (but not those lower in political knowledge), people with warm feelings toward Sarah Palin (a Republican political figure) believed more strongly in a political myth about Obamacare death panels after receiving myth-correcting information. Those without warm feelings toward Palin believed less in the myth after receiving corrective information regardless of political knowledge. These results provide evidence that increased knowledge does not necessarily coincide with increases in evidence-based beliefs, and, in some cases, increased knowledge may actually cause individuals to be more likely to critique belief-disconfirming evidence and justify prior-held beliefs.
Motivated reasoning also appears to affect interpretation of numeric information, even in the presence of an objectively correct response (as opposed to a subjective response such as policy support) (Kahan, Peters, Dawson, & Slovic, in press). In a politically neutral medical treatment scenario, individuals with higher numeracy were better than those with lower numeracy at detecting whether use versus nonuse of a skin cream made a skin rash better. However, when the task was reframed to represent the effects of gun control measures on crime (a politically divisive scenario), the impact of numeracy depended on political ideology. Among liberals, individuals higher in numeracy were more likely than less numerate liberals to respond correctly only if the data showed that crime decreased following enactment of gun control measures, a result consistent with liberal beliefs. In contrast, among conservatives, more numerate individuals were more likely than the less numerate to respond correctly only if the data supported the opposite hypothesis, that crime increases following introduction of a gun control measure, a result consistent with conservative beliefs. When changes in crime were inconsistent with the a priori beliefs of either group, numeric ability was of little help in identifying the correct response to what was essentially a math problem. Instead, participants’ a priori beliefs appeared to dominate their numerical reasoning abilities. Overall, when interacting with scientific evidence, particularly in politically or morally charged domains, individuals appear motivated to reach specific conclusions based on prior-held beliefs and values.
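The logic of this covariance-detection task can be made concrete with a short sketch. The counts below follow the pattern of the published stimulus but should be treated as illustrative:

```python
# Hypothetical 2 x 2 outcome table for the skin-cream version of the task.
# The correct strategy compares the *rate* of improvement in each row;
# comparing raw counts (223 vs. 107 improvers) gives the wrong answer.
used_cream = {"improved": 223, "worsened": 75}
no_cream = {"improved": 107, "worsened": 21}

def improvement_rate(row):
    return row["improved"] / (row["improved"] + row["worsened"])

rate_used = improvement_rate(used_cream)  # 223/298, about 0.75
rate_none = improvement_rate(no_cream)    # 107/128, about 0.84
# Despite more raw improvers in the cream group, the rash improved
# more often *without* the cream.
print("cream helped" if rate_used > rate_none else "cream did not help")
# prints "cream did not help"
```

The design works because the intuitive cue (the larger raw count of improvers in the treatment row) conflicts with the normatively correct ratio comparison, so only deliberate numeric reasoning yields the right answer.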
Politicization and Polarization of Climate Change Beliefs
Beliefs about climate change differ across political ideologies and party identification: individuals identifying as conservative or Republican are less likely than liberals or Democrats to believe that climate change is happening and that it is human caused, and are less likely to be worried about its consequences (McCright & Dunlap, 2011). This belief gap appears to have widened over the last 10 years. In 2001, 60% of Democrats believed in climate change compared to 49% of Republicans, a gap of 11 percentage points. By 2010, the gap had increased to 41 percentage points, with 70% of Democrats believing in climate change and only 29% of Republicans professing the same belief. Given the large and apparently increasing political polarization in beliefs about climate change, it is possible that motivated reasoning also contributes to belief development and maintenance with respect to climate change, one of the topics of the next section.
Some evidence exists for the presence of motivated reasoning within the climate change domain based on prior belief systems. In one study, for example, participants read two articles with opposing views about climate change (positive vs. negative view of science supporting climate change existence); consistent with early work on motivated reasoning, interpretation of the articles varied depending upon a priori beliefs about climate change (Corner, Whitmarsh, & Xenias, 2012). People who were initially more skeptical of climate change rated the negative article as more convincing, whereas initially less skeptical participants rated the positive article as more convincing. Importantly, beliefs about climate change did not polarize further following presentation of either article. It is possible that knowledge and ability are the driving factors behind polarization. The following section will discuss this possibility.
Does More Climate Change Knowledge Lead to Greater Concern?
Some evidence about the relation between climate change knowledge and beliefs about climate change appears to run counter to a motivated cognition hypothesis. For example, in one study, greater knowledge of the health consequences of climate change was related to higher probability ratings of serious negative consequences of climate change and higher levels of worry (Sundblad et al., 2007). Additionally, more knowledge of the causes of climate change (i.e., greenhouse gases, burning of fossil fuels) was associated with higher probability ratings of serious negative consequences. These results have been replicated in other studies, which have also found a negative relation between climate change knowledge and climate change skepticism, such that those with more knowledge were more likely to believe in climate change (Tobler et al., 2012b).
Research in high schools has also produced marginal support for a causal relation between knowledge and beliefs about climate change (Flora et al., 2014). Students in these schools were surveyed before and after an environmental education assembly. The survey measured climate science knowledge (using a multiple-choice test), positive engagement with climate change (placing students in one of six groups organized by level of climate change belief, concern, and engagement), and pro-environmental behaviors. These surveys revealed a significant increase in knowledge of climate science post-assembly. Additionally, 38% of the students moved to a more engaged group, 49% stayed the same, and only 13% moved to a less engaged group. These results suggest that many students believed more strongly in, were more concerned about, and were more motivated to mitigate climate change after the assembly, implying that increased knowledge of climate change could cause increased beliefs and concerns about it. Less clear is whether the assembly may have also enlightened students about their peers’ views on the topic; if those views tended to be less skeptical about climate change, on average, one would expect increased beliefs and concerns based on a motivated reasoning argument as well.
Motivated Reasoning and Climate Change Knowledge
However, other studies suggest that the relation between climate change knowledge and concern about climate change is less straightforward, depending on other factors such as trust in scientists and political beliefs. In Malka, Krosnick, and Langer (2009), for example, knowledge was subjectively measured using self-reports (e.g., “How much do you feel you know . . .”); concern about climate change was measured using four items: personal importance, national seriousness (for the United States), global seriousness (for the world), and general seriousness. Malka et al. (2009) found that, similar to other studies, greater self-reported climate change knowledge was associated with more concern about climate change (on the personal importance, national seriousness, and global seriousness measures), but only among individuals who trusted scientists. Among individuals who reported not trusting scientists, the opposite relation was found: greater self-reported climate change knowledge was associated with lower ratings of the overall seriousness of climate change. In addition to trust, political party identification also moderated the relation between climate change knowledge and concern. Their results revealed a positive relation between knowledge and concern among Democrats and Independents but no relation among Republicans.
In contrast, however, Shi et al. (2015) found no such moderation; in their study, the positive relation between specific climate change knowledge and concern was significant independent of political ideology. However, Shi et al.’s study was conducted in Switzerland, where climate change science may be less politicized than in the United States. Regardless, these contrasting results imply that more research is needed to fully understand the impact of motivated reasoning within more knowledgeable versus less knowledgeable groups on climate change beliefs around the world.
Motivated Reasoning, Scientific Knowledge, and Numeracy
Motivated reasoning may also come into play with respect to more general cognitive abilities. Kahan et al. (2012) found that, as numeracy and scientific literacy increased, greater polarization of risk perceptions emerged based on cultural worldviews (see Figure 3). In particular, individuals possessing a hierarchical-individualistic worldview (i.e., beliefs in self-regulation and the idea that authority is and should be linked to social ranking) viewed climate change as less risky than did those possessing an egalitarian-communitarian worldview (i.e., beliefs in collective concern for the individual and a less rigid social ranking system). Moreover, these risk perceptions polarized further as both numeracy and scientific literacy increased, potentially suggesting some form of motivated reasoning about climate change. Kahan et al. (in press) interpreted their data as consistent with the notion that people have a strong desire to remain part of their chosen groups (in this case, based on political ideology) and that individuals with more skills may be better at recognizing and attaining their personal desires for group belonging, even though society may be worse off because consensus on the facts cannot be reached.
Clearly, the politicization of climate change in the United States must be overcome, but strong evidence is not yet available to suggest that increases in knowledge of climate change science specifically, or in scientific knowledge and numeracy more generally, can aid in that endeavor. Using evidence-based techniques to highlight the scientific consensus on climate change may increase perceptions of that consensus regardless of worldview (Myers et al., 2015) and may act as a gateway to increased concern about climate change (van der Linden et al., 2015). If true, then some hope exists for attenuating the political divide in the United States. It remains possible, however, that public opinions may not change even with evidence-based communication (Kahan, 2015); only further research will tell.
It is a widely accepted scientific fact that our climate is changing and that this change is anthropogenic. Despite this scientific consensus, many individuals fail to grasp the extent of the consensus and continue to deny both the existence and cause of climate change; the proportion of the population holding these beliefs has been relatively stable in recent history (approximately 65% and 50%–65%, respectively). Most of the public also believe that they know a lot about climate change, although knowledge tests do not always reflect their positive perceptions.
We examined two frequent hypotheses: (a) that providing the public with more climate science information, thus making them more knowledgeable, will bring the beliefs of the public closer to those of climate scientists and (b) that individuals with greater cognitive ability (e.g., scientific literacy or numeracy) will have climate change beliefs more similar to those of experts. However, the data did not always support this proposed link between knowledge, ability, and beliefs. In particular, this article has reviewed the evidence that climate change knowledge, scientific literacy, and numeracy affect beliefs about climate change. Although increased knowledge can lead to beliefs about climate change more consistent with expert beliefs, results were mixed and sometimes moderated by other factors, such as trust in scientists and political ideology. Additionally, little evidence exists to suggest that higher levels of scientific literacy and numeracy relate to increases in those same beliefs; instead, these cognitive abilities appear to be used to support prior beliefs.
Ultimately, climate change beliefs appear to depend on more than accurate understanding of the issue. Although increased knowledge and ability should affect the degree of understanding, personal values and preferences may play a more central role in the development of attitudes and opinions (Fischhoff, 2013). Throughout this article, evidence was reviewed concerning the ways in which culturally defined worldviews and even trust in science can moderate the relationships between beliefs, knowledge, and ability. Several studies have shown the potential for motivated reasoning to be one of the biggest drivers of climate-change beliefs (and other nonclimate beliefs), such that experts and individuals with the highest cognitive abilities may be the most prone to seeking out and/or “manipulating” information (whether consciously or not) such that it favors their prior values and beliefs. Although these results appear at first disheartening, the evidence also suggests that various presentation methods of risks may increase beliefs across political and cultural groups; however, many of these methods may be short-term fixes. More research is needed to fully understand how to de-politicize climate-change understanding and beliefs.
As previously mentioned, climate education in high schools was effective at increasing individual beliefs about climate change and bringing them closer to scientists’ views (Flora et al., 2014). Because children and young adults may have more malleable worldviews than adults, education strategies may be more effective in these populations at overcoming political divides. Thus, the next generation may be raised in a way that de-politicizes climate change and brings more individuals together in support of climate-change mitigating behavior.
Among more mature adults, some modest evidence points toward self-affirmation as a potential solution. Presentation of climate change information may feel threatening to an individual whose a priori beliefs and values are inconsistent with that information. Self-affirmation could make individuals less defensive and, thus, more open to considering the potential truth of the information. In Sparks et al. (2010), participants either self-affirmed or did not self-affirm prior to reading information about climate change. In the self-affirmation condition, individuals affirmed their sense of their own kindness by listing instances of past compassionate behavior. Those in the no-affirmation condition instead listed their personal preferences, such as their favorite flavor of ice cream. Following this manipulation, participants read potentially threatening passages about climate change and then completed a survey asking about their denial of the severity of climate change outcomes and their denial of their own ability to help mitigate those outcomes. On average, individuals in the self-affirmation condition showed less denial of their own impact on climate change (M = 2.74) compared to individuals in the no-affirmation condition (M = 3.12). However, there was no significant difference between the groups in denial of climate change outcomes, though the pattern of means was in the hypothesized direction. These results offer some support for a protective effect of self-affirmation on interactions with climate change information, though more research is needed to understand the extent to which this theory applies to the climate change domain. It may be especially important to understand how self-affirmation might mitigate the polarizing effect of political identity and worldview on climate change beliefs.
The present article focused on a small but important piece of the climate-change puzzle, namely how individuals who differ in knowledge, scientific literacy, numeracy, and perceptions of climate change perceive and use information about climate-change risks differently. The challenge is not merely to communicate accurate information to the public but to understand how to present that information so that it is used in climate-change decisions.
Bauer, M. W., Allum, N., & Miller, S. (2007). What can we learn from 25 years of PUS survey research? Liberating and expanding the agenda. Public Understanding of Science, 16, 79–95.Find this resource:
Cook, J., Nuccitelli, D., Green, S. A., Richardson, M., Winkler, B., Painting, R., et al. (2013). Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters, 8(2), 024024.Find this resource:
Corner, A., Whitmarsh, L., Xenias, D. (2012). Uncertainty, skepticism and attitudes towards climate change: Biased assimilation and attitude polarization. Climatic Change, 114, 463–478.Find this resource:
Ding, D., Maibach, E. W., Zhao, X., Roser-Renouf, C., & Leiserowitz, A. (2011). Support for climate policy and societal action are linked to perceptions about scientific agreement. Nature Climate Change, 1, 462–466.Find this resource:
Fagerlin, A., Zikmund-Fisher, B. J., Ubel, P. A., Jankovic, A., Derry, H. A., & Smith, D. M. (2007). Development of the subjective numeracy scale. Medical Decision Making, 27, 672–680.Find this resource:
Fischhoff, B. (2013). The sciences of science communication. PNAS, 110(Suppl. 3), 14033–14039.Find this resource:
Flora, J. A., Saphir, M., Lappé, M., Roser-Renouf, C., Maibach, E. W., Leiserowitz, A. A. (2014). Evaluation of a national high school entertainment education program: The alliance for climate education. Climatic Change, 127, 419–434.Find this resource:
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42.Find this resource:
Galesic, M., & Garcia-Retamero, R. (2010). Statistical numeracy for health: A cross-cultural comparison with probabilistic national samples. Archives of Internal Medicine, 1780, 462–468.Find this resource:
Gallup (2015). Climate change: Environment.
Gallup, (2016). U.S. concern about global warming at eight-year high.
Garcia-Retamero, R., & Cokely, E. T. (2013). Communicating health risks with visual aids. Current Directions in Psychological Science, 22(5), 392–399.Find this resource:
Hart, P. S. (2013). The role of numeracy in moderating the influence of statistics in climate change messages. Public Understanding of Science, 22(7), 785–798.Find this resource:
Hess, R., Visschers, V. H. M., Siegrist, M., & Keller, C. (2011). How do people perceive graphical risk communication? The role of subjective numeracy. Journal of Risk Research, 14(1), 47–61.Find this resource:
Hogarth, R. M., & Hillel, J. E. (1992). Order effects in belief updating: The belief-adjustment model. Cognitive Psychology, 24, 1–55.Find this resource:
Howe, P. D., Mildenberger, M., Marlon, J. R., & Leiserowitz, A. (2015). Geographic variation in opinions on climate change at state and local scales in the USA. Nature Climate Change, 5, 596–603.Find this resource:
Kahan, D. M. (2015). Climate-science communication and the measurement problem. Advances in Political Psychology, 36(Suppl. 1), 1–42.Find this resource:
Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. (in press). Motivated numeracy and enlightened self-government. Behavioural Public Policy. Manuscript in progress.Find this resource:
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732–735.Find this resource:
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.Find this resource:
Leiserowitz, A., Maibach, E., Roser-Renouf, C., Feinberg, G., & Rosenthal, S. (2015). Climate change in the American mind: October, 2015. New Haven, CT: Yale Program on Climate Change Communication.Find this resource:
Leiserowitz, A., Smith, N., & Marlon, J. R. (2010). Americans’ knowledge of climate change. Yale University. New Haven, CT: Yale Project on Climate Change Communication. Retrieved from http://environment.yale.edu/climate/files/ClimateChangeKnowledge2010.pdf.Find this resource:
Lewandowsky S., Gignac, G. E., Vaughan, S. (2012). The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change, 3, 399–404.Find this resource:
Lipkus, I. M., Samsa, G., & Rimer, B. K. (2001). General performance on a numeracy scale among highly educated samples. Medical Decision Making, 21, 37–44.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.
Maibach, E. W., Leiserowitz, A., Roser-Renouf, C., & Mertz, C. K. (2011). Identifying like-minded audiences for global warming public engagement campaigns: An audience segmentation analysis and tool development. PLoS ONE, 6(3), 1–9.
Malka, A., Krosnick, J. A., & Langer, G. (2009). The association of knowledge with concern about global warming: Trusted information sources shape public thinking. Risk Analysis, 29(5), 633–647.
Mariconti, C. (2011, March 8). Understanding the disconnect on global warming. Association for Psychological Science.
Marx, S. M., Weber, E. U., Orlove, S. B., Leiserowitz, A., Krantz, D. H., Roncoli, C., & Phillips, J. (2007). Communication and mental processes: Experiential and analytic processing of uncertain climate information. Global Environmental Change, 17, 47–58.
McCright, A. M., & Dunlap, R. E. (2011). The politicization of climate change and polarization in the American public’s views of global warming, 2001–2010. The Sociological Quarterly, 52, 155–194.
Milfont, T. L. (2012). The interplay between knowledge, perceived efficacy, and concern about global warming and climate change: A one-year longitudinal study. Risk Analysis, 32(6), 1003–1020.
Miller, J. D. (1983). Scientific literacy: A conceptual and empirical review. Daedalus, 112(2), 29–48.
Myers, T. A., Maibach, E., Peters, E., & Leiserowitz, A. (2015). Simple messages help set the record straight about scientific agreement on human-caused climate change: The results of two experiments. PLoS ONE, 10(3), e0120985.
National Science Board. (2016). Science and Engineering Indicators 2016. Arlington, VA: National Science Foundation (NSB-2016-1). Retrieved from http://www.nsf.gov/statistics/2016/nsb20161/#/report.
Nisbet, M. C. (2005). The competition for worldviews: Values, information, and public support for stem cell research. International Journal of Public Opinion Research, 17(1), 90–112.
Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The hazards of correcting myths about health care reform. Medical Care, 51(2), 127–132.
Park, J., & Brannon, E. M. (2014). Improving arithmetic performance with number sense training: An investigation of underlying mechanism. Cognition, 133, 188–200.
Peters, E. (2012). Beyond comprehension: The role of numeracy in judgments and decisions. Current Directions in Psychological Science, 21(1), 31–35.
Peters, E., Baker, D. P., Dieckmann, N. F., Leon, J., & Collins, J. (2010). Explaining the effect of education on health: A field study in Ghana. Psychological Science, 21(10), 1369–1376.
Peters, E., & Bjalkebring, P. (2015). Multiple numeric competencies: When a number is not just a number. Journal of Personality and Social Psychology. Advance online publication.
Peters, E., Dieckmann, N. F., Vastfjall, D., Mertz, C. K., Slovic, P., & Hibbard, J. H. (2009). Bringing meaning to numbers: The impact of evaluative categories on decisions. Journal of Experimental Psychology: Applied, 15(3), 213–227.
Peters, E., Hart, S., Tusler, M., & Fraenkel, L. (2014a). Numbers matter to informed patient choices: The effects of age and numeracy. Medical Decision Making, 34(4), 430–442.
Peters, E., Meilleur, L., & Tompkins, M. K. (2014b). Numeracy and the Affordable Care Act: Opportunities and challenges. Appendix A. IOM (Institute of Medicine). In Health Literacy and Numeracy: Workshop Summary. Washington, DC: The National Academies Press.
Peters, E., Slovic, P., Västfjäll, D., & Mertz, C. K. (2008). Intuitive numbers guide decisions. Judgment and Decision Making, 3(8), 619–635.
Peters, E., Vastfjall, D., Slovic, P., Mertz, C. K., Mazzocco, K., & Dickert, S. (2006). Numeracy and decision making. Psychological Science, 17(5), 407–413.
Petrova, D. G., van der Pligt, J., & Garcia-Retamero, R. (2013). Feeling the numbers: On the interplay between risk, affect, and numeracy. Journal of Behavioral Decision Making, 27, 191–199.
Schwartz, L. M., Woloshin, S., Black, W. C., & Welch, H. G. (1997). The role of numeracy in understanding the benefit of screening mammography. Annals of Internal Medicine, 127(11), 966–972.
Schwartz, P., & Randall, D. (2003, October). An abrupt climate change scenario and its implications for United States national security. Pasadena, CA: California Institute of Technology, Jet Propulsion Laboratory.
Sellers, P. J. (2016, January 16). Cancer and climate change. The New York Times.
Shi, J., Visschers, V. H. M., & Siegrist, M. (2015). Public perception of climate change: The importance of knowledge and cultural worldviews. Risk Analysis, 35(12), 2183–2201.
Sinayev, A., & Peters, E. (2015). Cognitive reflection vs. calculation in decision making. Frontiers in Psychology, 6, 1–16.
Slovic, P. (1987). Perception of risk. Science, 236(4799), 280–285.
Slovic, P. (1999). Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield. Risk Analysis, 19(4), 689–701.
Smith, J. B., Schneider, S. H., Oppenheimer, M., Yohe, G. W., Hare, W., Mastrandrea, M. D., et al. (2009). Assessing dangerous climate change through an update of the Intergovernmental Panel on Climate Change (IPCC) “reasons for concern.” PNAS, 106(11), 4133–4137.
Sparks, P., Jessop, D. C., Chapman, J., & Holmes, K. (2010). Pro-environmental actions, climate change, and defensiveness: Do self-affirmations make a difference to people’s motives and beliefs about making a difference? British Journal of Social Psychology, 49, 553–568.
Sundblad, E., Biel, A., & Gärling, T. (2007). Cognitive and affective risk judgments related to climate change. Journal of Environmental Psychology, 27, 97–106.
Tobler, C., Visschers, V. H. M., & Siegrist, M. (2012a). Addressing climate change: Determinants of consumers’ willingness to act and to support policy measures. Journal of Environmental Psychology, 32, 197–207.
Tobler, C., Visschers, V. H. M., & Siegrist, M. (2012b). Consumers’ knowledge about climate change. Climatic Change, 114, 189–209.
Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). The cognitive reflection test as a predictor of performance on heuristics-and-biases tasks. Memory and Cognition, 39, 1275–1289.
Traczyk, J., & Fulawka, K. (2016). Numeracy moderates the influence of task-irrelevant affect on probability weighting. Cognition, 151, 37–41.
van der Linden, S. L., Leiserowitz, A. A., Feinberg, G. D., & Maibach, E. W. (2014). How to communicate the scientific consensus on climate change: Plain facts, pie charts or metaphors? Climatic Change, 126(1–2), 255–262.
van der Linden, S. L., Leiserowitz, A. A., Feinberg, G. D., & Maibach, E. W. (2015). The scientific consensus on climate change as a gateway belief: Experimental evidence. PLoS ONE, 10(2), e0118489.
Weller, J. A., Dieckmann, N. F., Tusler, M., Mertz, C. K., Burns, W. J., & Peters, E. (2013). Development and testing of an abbreviated numeracy scale: A Rasch analysis approach. Journal of Behavioral Decision Making, 26, 198–212.