
Research Methods for Assessing Online Climate Change Communication, Social Media Discussion, and Behavior

Summary and Keywords

In recent years, increased Internet access and new communication technologies have led to the development of online methods for gathering public opinion and behavioral data related to controversial issues like climate change. To help climate-change researchers better adapt to the new era of online-based research, this article reviews prevailing Internet-based research methods and their methodological applications. Online surveys have become more common in the last decade for several reasons, including their relatively low administration cost, the pervasiveness of Internet communication, and declining response rates associated with traditional survey methods. Experiments embedded within online surveys have also become a useful tool for examining the extent to which online communications influence publics' attitudes and behaviors. Other research methods that have gained growing attention from scholars are content analyses of online communication using big data approaches. By mining the seemingly infinite amount of user-generated content extracted from different social media sites, researchers are able to analyze issue awareness, responses to breaking news, and emerging sentiments. This article provides a detailed overview of these Internet-based research methods, including their potential advantages and pitfalls, their applications in the science-communication and climate-change research fields, as well as suggestions for future research.

Keywords: climate change, public opinion, social media, research methods, survey, content analysis

The co-occurrence of the rise of online technologies and growing public awareness of the changing climate offers many opportunities and challenges for climate-change communication researchers. Just as traditional mass media like radio and television opened up avenues for research and development of communication theories in earlier decades, online and social media are increasingly seen as influential in how citizens understand and communicate about prominent social or scientific issues like climate change. Studying these information environments presents a number of challenges that are discussed below, but first it is important to recognize the characteristics of new media and their potential effects on public attitudes and behaviors related to climate change.

The New Science Information Environment

The issue of climate change is gaining prominence on media agendas worldwide, although attention to the issue has varied over time and by region (Brossard, Shanahan, & McComas, 2004; Schmidt, Ivanova, & Schäfer, 2013), and framing of the content has evolved. For instance, U.S. media coverage of climate change has tended to portray the issue as politically polarized, controversial, and uncertain (Zehr, 2000), with media overamplifying scientific disagreement (Dixon, McKeever, Holton, Clarke, & Eosco, 2015). Depictions of scientific uncertainty have also been found in Russian media coverage (Wilson Rowe, 2009), but elsewhere news coverage has tended to focus on the scientific certainty of the issue, including in the U.K. (Carvalho & Burgess, 2005), Sweden, Norway (Olausson, 2010), India (Billett, 2009), Bangladesh (Das, Bacon, & Zaman, 2009), and Mexico (Gordon, Deines, & Havice, 2010). Some research suggests that media discourse around climate change varies globally in part due to countries’ industrialization or dependence on fossil fuels, although most of the research has been conducted in developed nations (Schmidt et al., 2013), which may have a distinct perspective. For instance, news coverage in industrialized nations has tended to present climate change as a geographically distant and impersonal issue that has a greater impact on distant “others” (Leiserowitz, Maibach, Roser-Renouf, Feinberg, & Howe, 2012).

In addition to regional differences, social and political factors have influenced how climate change is communicated about and perceived. The scope and inherent complexity of the issue also may make it difficult for audiences to comprehend it or accept it as personally relevant (Brugger, Dessai, Devine-Wright, Morton, & Pidgeon, 2015). These characteristics and nuances of the societal and scientific implications of climate change require communication researchers’ attention as they evaluate relevant online dialogue and assess the extent to which such discourse influences public beliefs and behaviors.

Over the past few decades, the development of new communication technologies has led to dramatic changes in the media landscape and news consumption patterns (Su, Akin, Brossard, Scheufele, & Xenos, 2015). The Internet is a principal source of climate-change information for many concerned publics and serves as a platform for professional communicators and as a venue for citizens to express beliefs and opinions. The World Bank reported that, in 2014, approximately 40% of the world’s population used the Internet, although the proportions are much higher in industrialized nations. For instance, about 78.1% of citizens in the European Union and 87.4% of Americans use the Internet, but on average just 8.6% of citizens in countries classified as “least developed” go online (The World Bank, 2016). Disparity in Internet access and use worldwide is undoubtedly germane to climate-change communication research, particularly as climate-change policies and effects become more localized.

The rapid emergence and pervasiveness of particular websites, such as social network platforms and other Web 2.0 applications, have also changed how audiences get and share information about issues like climate change. Today, more than half of American adults who go online use two or more of the most popular social media sites, such as Facebook, Twitter, Instagram, Pinterest, or LinkedIn (Pew Research Center, 2015). In the United States, more than 70% of online adults use the social media site Facebook.

Online users turn to social media platforms not only for networking and entertainment (e.g., boyd & Ellison, 2008) but also to gather a variety of information and news, including politics, sports, business, and science and technology (Pew Research Center, 2015). About 47% of Facebook users and 56% of Twitter users report regularly seeing posts related to science and technology on these sites. The number of Americans who name the Internet as their main source of science and technology news now exceeds the number that name television (National Science Board, 2016). In 2016, it was reported that 67% of Americans go online for information about specific scientific issues (National Science Board, 2016).

This rapid uptake of online media has influenced how audiences consume news about science and technology, share scientific information, and follow scientific and technological developments (Brossard, 2013). In the Web 2.0 environment, many new media sources do more than simply disseminate news and information; they also incorporate a social element in which audiences can participate by sharing, commenting, or liking the content. These personal commentaries on the news send cues to other audience members about the associated content and the relative significance of the issue. In some cases, these cues may actually influence attitudes and perceptions about scientific issues (Anderson, Brossard, Scheufele, Xenos, & Ladwig, 2013; Spartz, Su, Griffin, Brossard, & Dunwoody, 2017).

Because there are so many options available, getting news online is a more user-driven experience than getting news via traditional outlets like print newspapers or television news. In legacy media, editors and producers have most of the control over the selection and priority of news content. Online audiences have so many choices that many end up selecting themselves into an “echo chamber” of information that may continue to reinforce preexisting beliefs (Jamieson & Cappella, 2008).

Audiences online can even opt into an echo chamber inadvertently, for example, by way of search engine algorithms and social media feeds that continuously tailor results based on users' search histories or preferences. For instance, Ladwig, Anderson, Brossard, Scheufele, and Shaw (2010) found that Google searches about nanotechnology disproportionately directed individuals toward health-related websites, which might influence users' future searches and further reinforce Google's suggestions and website rankings. News organizations have also become more reliant on algorithms to recommend content to their readers based on their interests or by linking to data from users' social media profiles (Thurman & Schifferes, 2012). These examples illustrate the striking changes that have occurred in the science information environment over the past several decades.

The contemporary science-communication landscape has changed significantly not only for consumers (Brossard, 2013), but also for those researching science communication. In particular, web-based technologies can enable researchers to recruit participants who would otherwise be hard to reach or reluctant to share their opinions. Online media also offer new means to capture and observe public attitudes about emerging and complex issues. In sum, the online communication environment has opened up new avenues for research by way of methodological advancements and the availability of new data sources.

Here, we emphasize two types of emerging research methods: online surveys and automated content analysis. Each method uniquely allows researchers to analyze issue-specific public opinions, attitudes, or behaviors by tapping data sources with relatively low financial cost and effort.

Because of the growth in Internet access and use worldwide, online surveys are increasingly prevalent and have even in some cases replaced traditional methodologies like telephone or pen-and-paper surveys (Baker et al., 2010, 2013). Online surveys also have the advantage of being able to accommodate embedded experiments. Meanwhile, the ubiquity of social media and the profusion of opinions expressed instantaneously and voluntarily across online platforms provide researchers data alternatives that can complement traditional public opinion and survey research.

The following sections review the key characteristics of the two types of online research methods before introducing their applications for assessing online communication patterns and user behaviors related to climate change and other scientific issues. Potential pitfalls are described and strategies for effectively employing these tools in future climate-change studies are proposed.

From Traditional Surveys to Online Surveys

Survey research has historically been one of the most effective ways to reliably measure public opinion about issues, events, or people (Kerlinger & Lee, 1999). Defined as a set of standardized questions asked of a large sample of a population, survey research lends itself to the exploration of relationships between individuals and sociological, psychological, and behavioral phenomena (Kerlinger & Lee, 1999). A survey can be administered in person, by phone, by mail, or online. Typically, a researcher establishes a population of interest and then samples from that population in order to generalize the findings. To obtain reliable and valid data, researchers must collect data systematically, employing the same measures consistently across the sample. Researchers should also use measures that, insofar as they know, tap the attitudes, perceptions, behavioral intentions, and other factors they aim to capture. These criteria are fundamental for collecting data that are generalizable to the population and indicative of what the public actually knows or believes (Scheufele, 2010).

Given the effectiveness of surveys, it is not surprising they are one of the most widely used methods for gauging public attitudes and knowledge about science and technology issues. Overall, survey methods have been largely utilized in climate-change communication to shed light on long-running trends, to delineate underlying mechanisms of opinion formation, and to provide feedback on policy implications for the climate-change community, which includes scientists, policymakers, advocates, and other stakeholders.

Survey research has allowed researchers to track familiarity and concern about anthropogenic climate change over time and throughout the world (Nisbet & Myers, 2007). In July 1986, a U.S. survey first asked participants how familiar they were with global warming, and 39% of respondents reported having heard or read about the "greenhouse effect." Just 20 years later, a Pew study revealed that 91% of Americans had heard about global warming (Nisbet & Myers, 2007). Since the 2000s, United States polls conducted by numerous survey agencies have tracked increasingly polarized public opinion about climate change (Capstick, Whitmarsh, Poortinga, Pidgeon, & Upham, 2015; Dunlap & McCright, 2008). Although the majority of surveys on climate change have been conducted in industrialized regions like the United States, Australia, and Europe, cross-cultural studies allow researchers to observe differences in public opinion about climate change around the world (Lorenzoni, Leiserowitz, de Franca Doria, Poortinga, & Pidgeon, 2006; Lorenzoni & Pidgeon, 2006). Climate change and other postnormal science issues tend to have transnational or global implications (Saloranta, 2001), which makes awareness of cultural differences in public perception especially valuable. Cross-national data can provide insight into cross-national differences in attitudes and values that influence perceptions of climate change (Akin, 2015). Polls have also tracked how predispositions (McCright, Dunlap, & Marquart-Pyatt, 2016), skepticism (Whitmarsh, 2011), and knowledge (Lee, Markowitz, Howe, Ko, & Leiserowitz, 2015) affect concern about climate change worldwide. For instance, researchers indicate that Latin American and African citizens have grown more concerned than citizens in other areas, with 80% or more people in these regions believing climate change to be a serious threat to their lives (Lee et al., 2015).

Emerging Online Survey Research

While telephone surveys administered on landline telephones used to be the norm for survey research, recent declines in the proportion of households with landline numbers require that survey researchers adjust their strategies for reaching representative samples of citizens (Baker et al., 2013). The International Telecommunication Union reports there are 97 mobile phone subscriptions per 100 people worldwide, although this may misrepresent actual mobile penetration, given that some individuals have multiple subscriptions (International Telecommunication Union, 2015). Approximately one fourth of Americans now use a mobile phone only (i.e., they do not have a landline), a proportion that is even higher among some groups, including Hispanics and African Americans (Pew Research Center, 2016). This phenomenon has compelled survey companies to call both cell phones and landline phones to obtain representative samples of citizens (AAPOR, 2016).

There can be logistical and substantive differences between landline and cell phone surveys, however. Surveys conducted on cell phones are costlier, require more time to screen respondents, and can yield different responses compared to landline surveys. For example, cell phone survey respondents may have less privacy when responding and may adjust their answers or may be distracted (Lynn & Kaminska, 2013). Despite these differences, it remains imperative to contact respondents via both landlines and cell phones to obtain a representative sample of the population.

General willingness to participate in telephone surveys has also declined significantly in recent years as individuals receive more unsolicited (e.g., telemarketing) calls and have the ability to screen calls through caller identification (Pew Research Center, 2016). The drop in phone survey response rates has resulted in higher costs for data collection and, along with the increasing prevalence of online and mobile technologies, has made online surveys a desirable and increasingly common alternative to telephone surveys.

Advantages of Online Survey Research

Online surveys offer many advantages to the researcher, including the relative ease of data collection (Couper, 2000), the ability to survey hard-to-reach populations, the curtailing of social desirability biases in responses, and the opportunity to capture additional data. Perhaps the most obvious advantage of online surveys is that researchers can reach a large number of respondents more quickly and with decreased cost in comparison to traditional mail and phone survey methods (Couper, 2000; Kaplowitz, Hadlock, & Levine, 2004; Wright, 2006). Moreover, online research offers researchers the opportunity to access thousands of target respondents who would otherwise be difficult or even impossible to reach through other traditional channels.

Internet-based surveys can save time and financial cost for researchers when reaching a large number of people who share common characteristics in distant locations. Many commercial sites, such as YouGov, Asia Opinions, and Qualtrics, now offer services or software packages that allow individuals to collect data from large numbers of members of the general population, making online survey research easier and faster (Couper, 2000; Wright, 2006). Indeed, given the prevalence of Internet access and the rapid development of user-friendly survey technologies, large-scale surveys no longer need to be administered by large research organizations. For an issue with global impact, such as climate change, web surveys offer the opportunity to easily collect and compare public opinions across different regions.

Some research suggests that self-administered online surveys may elicit more honest responses and reduce the influence of a social desirability bias (Baker et al., 2010). For example, researchers at Pew Research Center (Keeter, 2015) conducted an experiment with their American Trends Panel in which respondents were randomly assigned to receive the same survey by either web or phone. The respondents given the phone survey were more likely to say that certain people, particularly those who tend to be marginalized, such as Hispanics or gays and lesbians, faced "a lot of discrimination," and were also more likely to say they were happy with their family and social life. These findings suggest that respondents tend to give more socially desirable responses when they are interacting with an interviewer. Furthermore, those who responded to the web survey were more likely to express negative sentiments about politicians (Keeter, 2015). What Pew reports is consistent with a meta-analysis of research on response extremity in surveys conducted by telephone compared with other formats, which found that phone surveys yield more extreme positive responses but not more extreme negative responses, supporting the possibility that the presence of an interviewer may discourage respondents from expressing negativity (Ye, Fulton, & Tourangeau, 2011).

It is important for researchers to account for response biases when surveying the public about climate change; for example, individuals surveyed by phone may be more likely to overstate their tendency to behave in a climate-friendly way. Questions related to pro-environmental attitudes and behaviors may elicit socially desirable responses (Bord, Fisher, & O'Connor, 1998), increasing the number of respondents who say they do act, or would act, in environmentally beneficial ways. Alternatively, survey respondents may feel compelled to align their responses with the expectations of their cultural or political identity (Kahan et al., 2012) in the presence of an interviewer (for example, conservative respondents may overstate their denial of climate change because they have identified themselves as politically conservative). Thus, online surveys may effectively reduce social desirability or worldview biases when exploring public opinion about climate change.

Concerns about Online Survey Research

While online surveys are a promising solution to the increasing cost and effort required to reach a large sample by phone, they also come with methodological challenges. Currently, most of the predominant online survey companies rely on nonprobability-based recruitment. Only a few survey panels rely on traditional probability-based methods (e.g., random-digit dialing, abbreviated RDD) to invite respondents to join an online panel (Baker et al., 2010).

Scholars express concerns about the shortage of panels that use probability-based sampling techniques. Nonprobability samples violate the principles of inferential statistics, meaning that responses should not be generalized to the broader population (Baker et al., 2010; for a more complete overview of sampling strategies, advantages, and disadvantages, see Callegaro et al., 2014).

Nonprobability-based online panels are usually composed of individuals who agree to complete the company’s surveys in exchange for a monetary incentive (Baker et al., 2010; Krosnick, Presser, Fealing, & Ruggles, 2012). These opt-in panel participants are therefore not chosen randomly from the population but are instead a self-selected group of individuals. Representativeness of such nonprobability panels is particularly a concern when large segments of a population do not have reliable Internet access, technical literacy, or language proficiency, or are otherwise beyond the reach of nonprobability online panels (Keeter, 2015).

There is also concern in the research community about “professional” survey respondents who more frequently opt in to surveys than other panelists. For instance, one study found that 10% of panelists accounted for more than 80% of survey responses among the ten largest online opt-in survey panels (Krosnick et al., 2012). Research also suggests that there is a high level of nonresponse and significant coverage errors when constructing nonprobability online panels (Baker et al., 2010). In sum, the survey research community continues to express concerns about the quality of nonprobability online panel data and the accuracy of estimating population parameters using such panelists.

Web surveys conducted with probability-based (e.g., RDD) samples have been shown to yield lower overall response rates than telephone surveys, although they tend to have less item nonresponse (Fricker, Galesic, Tourangeau, & Yan, 2005). This potentially biases the representativeness of even probability-based online panels. But many will contend that a well-conducted probability-based sample of online respondents, even with a low response rate, is still superior to a nonprobability sample of online panelists (Brick, 2011).

As outlined earlier, the increasing arduousness of collecting an adequate random sample using a strategy like RDD can be cost- and time-prohibitive for researchers. However, there are currently no comparable, web-based modes for contacting members of the online population. For example, even if it were possible to randomly invite prospective respondents by email, email addresses are not generated or assigned to each member of the public at random. Users create their own email addresses and many people have multiple addresses or share an email account with others, precluding randomized probability sampling through web-based contact. Laws and industry standards also inhibit and discourage sending unsolicited messages to invite people to participate in online surveys (Baker et al., 2010).

That said, some international and national organizations, including the International Organization for Standardization and the Council of American Survey Research Organizations, have established standards, lists of terminologies, and guidelines for online panels that can help survey companies and researchers address some concerns as they continue to develop and refine their sampling techniques for online panels (Callegaro et al., 2014). A recent report prepared by scholars for a subcommittee of the National Science Foundation's advisory committee for the social, behavioral, and economic sciences also advises on the challenges and opportunities facing current online survey research (see Krosnick et al., 2012).

Crowdsourcing

In addition to the development of online panels for survey research, the Internet has also facilitated access to an entirely new pool of subjects for public opinion research and experimentation via "crowdsourcing." Crowdsourcing refers to outsourcing a task through an open invitation to a large group of people (the "crowd") (Mason & Suri, 2011).

One of the largest and most widely known tools used for this purpose is Amazon’s Mechanical Turk (MTurk). MTurk was originally created to outsource small “human intelligence tasks,” things that computers could not do, for relatively little pay. Eventually, researchers realized MTurk could be a source for a large sample of low-cost respondents and they started recruiting MTurk workers to participate in online surveys and experiments. Other similar tools, such as Google Consumer Surveys, have also become popular for surveys and experiments. Because it was the first of its kind, MTurk tends to be the archetype used to compare crowdsourced samples to other types of samples.

Much like some online panels, crowdsourced survey respondents are not obtained through probability-based sampling. Researchers have found evidence that some characteristics of American MTurkers are significantly different from the general U.S. population (Mason & Suri, 2011). One comparison of an MTurk sample to an Internet survey sample for the American National Election Panel Study in 2008–2009 found that MTurkers were significantly less educated, were younger, had lower mean income, and were significantly different in terms of race, marital status, and religion. Another study found that, compared to other face-to-face samples, the MTurk sample had a significantly lower proportion of female respondents (Berinsky, Huber, & Lenz, 2012).

Yet some researchers have noted that MTurkers are more demographically diverse than other commonly used convenience samples, such as American college students (Buhrmester, Kwang, & Gosling, 2011). While these are relatively recent comparisons of different types of samples, they should not be seen as static. Researchers using MTurk or other crowdsourcing tools for surveys should keep in mind that these marketplaces and their workers will likely continue to evolve (Buhrmester et al., 2011), which should be accounted for when making inferences about the data (Mason & Suri, 2011).

Looking Forward: Opportunities for Mobile-Based Surveys

Not only has Internet use dramatically changed communication processes, but the ways people access and use the Internet have also changed rapidly, opening up new opportunities for survey research in the communication field. One of the most prevalent changes is the penetration of mobile technologies and devices. Pew Research Center reports that, as of 2015, 68% of Americans had smartphones and 45% had tablet computers (Anderson, 2015), and many have multiple devices. A 2012 report indicated that 7% of U.S. adults relied solely on their smartphone for Internet access (Smith, 2012).

Researchers distributing web-based surveys should ensure that they are readable and display properly across different types of devices (e.g., considering screen size and operating system) and service providers (Link et al., 2014). Situational factors, such as whether respondents are multitasking or in the company of others, have also been shown to affect responses (de Bruijne & Oudejans, 2015). While online and mobile data collection offer numerous possibilities for employing multimedia and visuals in surveys, researchers should ensure these modes render adequately across devices and platforms. This is particularly important for surveys with complex numeric or pictorial information, which may be the case with surveys on climate change.

Mobile technologies also offer new methods for administering surveys and collecting new types of data about individuals’ beliefs and behaviors. Smartphones can capture users’ location and movement using the Global Positioning System (GPS), which could track individuals’ exposure to environmental conditions in real time (Chaix et al., 2013). Accelerometers (i.e., movement trackers) in many smartphones could capture information about transportation or energy-related behaviors.

In terms of climate change, weather conditions have been linked to public perceptions and variability in concern (e.g., Egan & Mullin, 2012; Li, Johnson, & Zaval, 2011; Zaval, Keenan, Johnson, & Weber, 2014), but exposure to local conditions could be operationalized in different ways. Egan and Mullin (2012) matched U.S. survey responses over several years to the respondents' local temperature variation from normal in the week preceding their participation, using respondents' zip codes. The analyses indicate that temperature variability and extremity had a strong effect on climate-change beliefs, particularly among those who are not strong partisans or highly educated. Zaval et al. (2014) assessed climate-change attitudes based on respondents' perceptions that the temperature deviated from average. These studies incorporated data on local conditions from respondents' recall and manual geocoding, but Internet technologies can record more precise or in-the-moment weather data automatically via GPS or Internet Protocol (IP) address.
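This kind of matching can be scripted once survey responses and local weather records are in tabular form. Below is a minimal sketch in Python, loosely modeled on the Egan and Mullin design rather than reproducing their actual procedure; the input files and column names (zip_code, date, tavg, tavg_normal, resp_belief) are hypothetical placeholders.

```python
# A minimal sketch: attach each respondent's local 7-day temperature anomaly
# (departure from normal) to their survey record, keyed on zip code and date.
# File names and column names are hypothetical placeholders.
import pandas as pd

surveys = pd.read_csv("survey_responses.csv", parse_dates=["date"])    # one row per respondent
weather = pd.read_csv("zip_daily_weather.csv", parse_dates=["date"])   # daily temperatures by zip code

# Departure of the observed daily temperature from the local long-run normal.
weather["temp_anomaly"] = weather["tavg"] - weather["tavg_normal"]

# Average anomaly over the 7 days up to and including each date, per zip code.
weekly = (
    weather.set_index("date")
    .groupby("zip_code")["temp_anomaly"]
    .rolling("7D")
    .mean()
    .reset_index()
    .rename(columns={"temp_anomaly": "anomaly_7day"})
)

# Join the local weather context onto each survey response for later modeling.
merged = surveys.merge(weekly, on=["zip_code", "date"], how="left")
print(merged[["zip_code", "date", "anomaly_7day", "resp_belief"]].head())
```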

The capacity for mobile technologies to capture visual data (i.e., individuals taking photos or videos of local conditions or other climate-relevant circumstances) could also be used to complement traditional survey data. One method, photovoice, is used in participatory action research to encourage community members to photograph their experiences (Wang & Burris, 1997; Wang, Yi, Tao, & Carovano, 1998). Photovoice also incorporates critical reflection in which participants choose, contextualize, and identify themes in the images. This method has been applied to analyze community-level responses to climate change, for instance, in coastal areas in Australia (Baldwin & Chandler, 2010) and Thailand (Bennett & Dearden, 2013). The related photo-elicitation method usually complements qualitative research by either presenting photos to interviewees to elicit responses or asking interviewees to generate an image themselves (Van House, 2006). One photo-elicitation study conducted with farmers in Nova Scotia aimed to capture perceptions about climate change via imagery without having to ask them directly about the issue, in case doing so biased their responses (Sherren & Verstraten, 2013). The capacity to collect imagery or other multimedia will likely become an increasingly common approach in climate-change communication research.

In addition to the data mobile devices can obtain, these technologies can assist in actual survey administration and in recruiting or retaining participants. For example, respondents could be recruited or easily directed to surveys or questionnaires by scanning QR (Quick Response) codes or by SMS ("short message service") or MMS ("multimedia messaging service") (Link et al., 2014). In addition, applications for surveys or other modes of data collection could be designed to leverage the native capabilities of mobile devices (e.g., camera, GPS, microphone), to avoid technical malfunctions that may occur if a respondent is simply directed to the device's Internet browser. Mobile devices, particularly smartphones, can also allow researchers to collect in-the-moment data about behaviors and attitudes through texting (SMS) or MMS (Link et al., 2014). To date, these tools have not been widely used in science communication or climate-change communication. Nevertheless, these methods could effectively record self-reported information, such as an individual's beliefs, attitudes, or intended behaviors, in tandem with actual behaviors and exposure to particular stimuli in climate-change research.

Growth of Experiment-Embedded Online Survey Research

Experiments embedded in online surveys are an increasingly common method used across scholarly fields to evaluate the processes of opinion formation and change (Barabas & Jerit, 2010). Internet and mobile surveys can incorporate new methods of measuring public attitudes alongside experiments, creating a “virtual laboratory” for researchers to establish causal mechanisms within a wide range of hypotheses. With the prevalence of the Internet as an information source (National Science Board, 2016), such experiments can replicate how information or media content is conveyed in the real world, thus giving external validity to an experiment’s findings (Reips, 2002).

Online and mobile surveys can embed visuals, videos, text, or other interactive tools that can be manipulated to test for differences in effects of particular messages. Online experiments can also incorporate the interactive elements and cues that typify social media and unobtrusively capture responses and online behaviors (Strohmaier & Wagner, 2014). For instance, one online experiment found that manipulating incivility in reader comments to a blog post about nanotechnology polarized perceptions of the risks of the technology (Anderson, Brossard, Scheufele, Xenos, & Ladwig, 2013). Users—particularly those who may be more inclined to base their opinions on others’ views or what is expected of them by others—were also shown to use the number of views a video gets on YouTube to shape their perceptions of how concerned other Americans are about climate change (Spartz et al., 2017).

Online experiments are increasingly used to test causal mechanisms in science-communication research. In a climate-science-relevant example, Jamieson and Hardy (2014) conducted an online experiment testing whether a message that leveraged trusted scientific institutions to visualize and analogize Arctic sea ice extent enabled audiences to think more systematically about climate-change effects; the investigators found that such a message minimized the influence of partisan cues. Wong-Parodi, Fischhoff, and Strauss (2014) found that an interactive tool depicting climate-related data could influence knowledge, preferences, and understanding of a salient climate-change decision: whether to move to an area that might experience climate-change-induced coastal flooding.

Conclusion

There are numerous ways that social scientists can use online tools for surveys and experiments, although these methods have limitations when nonprobability-based sampling strategies are used. Because online surveys and experiments are relatively new, much of the work has been pioneering, requiring consideration of the rapidly changing ways in which people communicate with each other and obtain information. This compels researchers to continuously stay up to date on improvements to methodologies and sampling strategies, and to closely monitor the makeup of samples from online panels or crowdsourcing sites like MTurk.

Additionally, while online and mobile tools for survey and experimental research have provided new opportunities for data collection, it is important that researchers adhere to ethical research practices. Privacy concerns and changing norms and policies related to mobile technologies should always be paramount. Researchers must obtain appropriate consent from respondents and protect private information in every possible way in accordance with their institution's Institutional Review Board, if applicable (Link et al., 2014). Discussions about individuals' privacy in terms of their online and mobile behaviors will likely be an ongoing and evolving dialogue.

Content Analysis of Online Media: Emerging Hybrid Methods

Content analysis serves many purposes in communication research. It allows researchers to observe opinion patterns embedded in communication content, to compare frequency of communications, and to identify communication behaviors and intentions of message creators (e.g., Berelson, 1952; Krippendorff, 2012). Human- and computer-based coding are the two most commonly used coding methodologies in content analysis. Today, textual content analyses that do not rely on computer assistance are rare, due in large part to the time-saving advantages of computers and growing availability of text in digital formats (McMillan, 2000). The unprecedented amount of content supplied by big data also exceeds the capacities of traditional content-analysis methods (Wang, Can, Kazemzadeh, Bar, & Narayanan, 2012).

Researchers usually focus on two types of content in communication patterns. Manifest content analysis is the practice of coding content that is categorical and observable, such as word counts and character string identification (Holsti, 1969; Riffe, Lacy, & Fico, 2005). This type of analysis is typically characterized by high reliability but low external validity because interpretation is relatively superficial and therefore has little social significance. Latent content analysis, on the other hand, focuses on in-depth interpretation of the underlying meanings embedded in texts. Human coding is often preferred over computational coding in latent content analysis because it requires understanding complex expressions (Krippendorff, 2012; Sjøvaag & Stavelin, 2012).

Although it minimizes the tedium of manually coding large-scale digital data, computer-based content analysis has historically been limited in its capacity to categorize the subtle meanings in texts, leaving this task to human coders (Su et al., 2017). Sentiment analysis or opinion mining can be viewed as one form of content analysis that classifies sentiments, opinions, and emotions expressed in text-based content (Pang & Lee, 2008; Wang et al., 2012). However, new content-analysis tools have been developed that combine the merits of both computational and manual coding. These hybrid methods allow analysts to employ human and computational intelligence to gain a comprehensive understanding of latent expressions embedded in large amounts of digital text (e.g., Su et al., 2017).

One hybrid technique, the supervised learning method, first uses human coders to categorize a sample of online texts. The program then learns from the manually coded data and labels the rest of the unread documents accordingly (Annett & Kondrak, 2008; Boiy & Moens, 2009; Liu & Zhang, 2012). Many supervised-learning content-analysis programs are on the market and can be purchased for academic or commercial use. Crimson Hexagon's ForSight and DiscoverText programs, for example, require human coders to identify underlying concepts in a sample of randomly collected social media posts and then apply those learned patterns to systematically analyze all available text. These intelligent algorithms demonstrate the potential to combine the latent validity of human-based coding with the reliability and efficiency of computational approaches to data collection and analysis.
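A generic version of this supervised-learning workflow can be sketched with an open-source machine-learning library. The example below uses scikit-learn rather than any commercial product's proprietary algorithm, and the input files, column names, and frame categories are hypothetical placeholders.

```python
# A minimal sketch of hybrid (supervised-learning) content analysis: human coders
# label a sample of posts, a classifier learns from those labels, and the trained
# model categorizes the remaining uncoded posts. File and column names are
# hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

coded = pd.read_csv("hand_coded_sample.csv")   # columns: text, frame (e.g., "risk", "economy", "morality")
uncoded = pd.read_csv("uncoded_posts.csv")     # column: text

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # represent each post by word/bigram weights
    LogisticRegression(max_iter=1000),
)

# Check how well the machine reproduces the human coding before trusting it at scale.
scores = cross_val_score(model, coded["text"], coded["frame"], cv=5)
print("Mean cross-validated agreement with human coders:", round(scores.mean(), 2))

# Train on the full hand-coded sample, then label the rest of the corpus automatically.
model.fit(coded["text"], coded["frame"])
uncoded["predicted_frame"] = model.predict(uncoded["text"])
uncoded.to_csv("machine_coded_posts.csv", index=False)
```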

Researchers increasingly use content posted on social media, such as Twitter and Facebook, to track public sentiment and perceptions. Twitter is a microblogging service with more than 313 million monthly active users from across the globe in 2016, who can broadcast real-time information in messages (“tweets”) that are 140 characters or less (Twitter, 2017). Because tweets are relatively short and easy to create, they tend to be expressions that are highly visible, interactive, and instantaneous. Content analysis of social media content like tweets can also complement or even replace public opinion survey data, which may be costly and time-consuming and is not collected in real time as events of interest unfold (An et al., 2014).

Content Modes

Twitter has several characteristics that make it appealing to researchers conducting large-scale, dynamic content analysis. Twitter users tend to post messages that are public and visible to every user (Pew Research Internet Project, 2013), unlike Facebook, whose many users tend to limit access to their profiles and updates to friends or followers. The Twitter stream also includes the spectrum of communications, from personal, private, and semipublic expressions to social groups, politicians, and media outlets. Mining Twitter data therefore enables researchers to examine emerging sentiments within a diverse community constituted by what were previously distinct public, academic, political, scientific, environmental, and journalistic groups. The ebb and flow of users' concerns can also be observed by studying how the tweets change over time, allowing researchers to understand the extent to which high-profile events or new information influences the perceptions of the Twitter community.

One important qualification here concerns how the Twitter community compares to a broader population, and therefore how its content compares to public attitudes at large. Twitter users are not a representative sample of the global population—they are individuals and groups who choose to express their opinions online, which means they may be distinct from the rest of the population (Baker et al., 2013; boyd & Crawford, 2012).

Still, content on social media is of interest to researchers. Social media provide masses of data, obtained with relative ease, that may reflect significant social phenomena. While there are certainly integral methodological, analytical, and conceptual differences between social media content and survey data, some propose that social media “reflect the broader population’s collective opinion and experience, through a range of (not yet fully understood) possible mechanisms” (Schober, Pasek, Guggenheim, Lampe, & Conrad, 2016, p. 185). In particular, the content can be of interest because it is designed to appeal to its users’ (and public) interests, propagates resonant ideas, reflects issues that are prominent on public and media agendas (Schober et al., 2016), and is disseminated by elite communicators or intermediaries.

Because of these prospects, researchers increasingly turn to the Twitter stream to monitor local and global dialogues on controversial issues like climate change, including opinions expressed by opinion leaders as well as issue publics. Studying the data on Twitter also allows scholars to examine its influence in social movements and the role of Twitter in response to ongoing events or news. As one example, Segerberg and Bennett (2011) examined the use of two hashtags, #Thewave and #Cop15, in climate-change protests during the 2009 United Nations Climate Summit in Copenhagen to capture the individual actors and organizational agents involved and related gatekeeping processes.

A common goal of archiving and analyzing Twitter discourses is to monitor real-time discussions and track the long-term influences of controversial issues, events, and emergencies. Most of the analyses address two types of research questions. One type examines the change of varied sentiments expressed on social media (e.g., Hopke & Simis, 2015; Runge et al., 2013). The second identifies distinct themes in social media discussion about a particular issue or emerging event across a large collection of texts (e.g., Driscoll & Thorson, 2015; Hodges & Stocking, 2016; Yeo, Xenos, Brossard, & Scheufele, 2014).

These types of analyses have been applied to energy-related issues that are relevant to climate-change communication. For instance, researchers observed changes in sentiment and thematic content in over 11 million nuclear-related tweets before and after the Fukushima Daiichi nuclear power plant incident in Japan (Li et al., 2016). Hopke and Simis (2015) examined opinion valences and certainty expressed in nearly 65,000 tweets using specific hashtags about hydraulic fracturing over a two-week period. These results have implications for how online communities understand, develop, and mobilize around distinct forms of energy development.

Communication scholars have applied the hybrid-based content-analysis methods to examine other climate-change-related conversations on Twitter. In one relevant example, scholars explored the prevalence of a morality and ethics frame in tweets related to Pope Francis’s encyclical calling for action on climate change and his U.S. visit (Eichmeier et al., 2016). Other studies have combined content analysis with daily temperature records to track change in climate-change sentiment (e.g., Griffin et al., 2014; Lineman, Do, Kim, & Joo, 2015) or associations with the terms “climate change” vs. “global warming” (Griffin et al., 2014) and views based on geographic locale (Howe, Mildenberger, Marlon, & Leiserowitz, 2015).

Concerns about Content Analysis of Social Media Content

While the rise of social and interactive technologies and the availability of large digital datasets present researchers with novel opportunities for content analysis, they also create several methodological challenges. One of the major issues confronting web content analysis is related to sampling. Although social media sites like Facebook and Twitter provide large quantities of user-generated data, the majority of these sites place restrictions on public access to their data (boyd & Crawford, 2012; Morstatter, Pfeffer, Liu, & Carley, 2013). Researchers often struggle to obtain data from social media platforms and face problems of replicability of results (Ekbia et al., 2015; Murphy et al., 2014). Most social media companies make available only part of the data they collect about their users, through their own Application Programming Interface (API) commands (Manovich, 2012).

For example, with Facebook’s Graph API, researchers are allowed to collect only status updates and comments that Facebook users have made public (Stieglitz & Dang-Xuan, 2013). Similarly, Twitter makes only a small fraction of its data publicly retrievable through its public API. In comparison to the Twitter Firehose feed that allows access to all public tweets ever posted (Morstatter et al., 2013), Twitter’s sampled API service permits researchers who rely on it to retrieve a maximum of 10% of all public tweets (boyd & Crawford, 2012). Analyses of climate-change-related tweets have been conducted using both Twitter’s API (e.g., An et al., 2014; Newman, 2016; Pearce, Holmberg, Hellsten, & Nerlich, 2014) and the Firehose feed (e.g., Griffin et al., 2014; Jang & Hart, 2015). Although the comprehensive Firehose feed of tweets is most desirable, it is costly. Only a few companies, such as Crimson Hexagon and DiscoverText, have access to the Firehose data.
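For researchers relying on the public, sampled stream rather than the Firehose, data collection can be scripted with a client library. The sketch below assumes the tweepy library (version 4 or later) and a developer bearer token; access tiers, rule syntax, and rate limits have changed over time and may differ from what is shown, so this should be read as an illustration rather than a recipe.

```python
# A minimal sketch of collecting climate-related tweets from the public filtered
# stream (not the Firehose), assuming tweepy >= 4.x and valid API credentials.
# The bearer token, rule, and output file are placeholders.
import tweepy

class ClimateStream(tweepy.StreamingClient):
    def on_tweet(self, tweet):
        # Append each matching tweet's text to a local file for later content analysis.
        with open("climate_tweets.txt", "a", encoding="utf-8") as f:
            f.write(tweet.text.replace("\n", " ") + "\n")

stream = ClimateStream(bearer_token="YOUR_BEARER_TOKEN")
stream.add_rules(tweepy.StreamRule('"climate change" OR #climatechange lang:en'))
stream.filter()  # yields only the subset of public tweets the API chooses to deliver
```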

Twitter does not clearly document how the data are sampled into its API (boyd & Crawford, 2012; Morstatter et al., 2013), and the variability in data streams is prone to sampling biases that may threaten the representativeness of the content (Ekbia et al., 2015). A comparison between samples of tweets from Twitter's API and Firehose data found that how well the API data represented the Firehose data depended on the size of the dataset and the type of analysis performed (Morstatter et al., 2013). Under most conditions, some degree of bias exists in the streamed data, so researchers relying on Twitter's API should be cautious about its representativeness.

The generalizability of the findings from content analysis of social media data to a broader population is another important issue. Users of social media sites do not necessarily represent the general population. American Twitter users, for example, tend to be younger, more educated, and wealthier, and to live in urban settings. Larger proportions of the Black and Hispanic populations in the United States are on Twitter than of the White population (Duggan, 2015). It can also be problematic to assume that "users" and "people" are equivalent or that "account" and "user" are synonymous—some users may create multiple accounts, while some accounts may be used by multiple users (boyd & Crawford, 2012). In addition, social media users are hardly representative of the U.S. general public demographically and differ greatly across platforms. Pew Research Center (2013) found that the attitudes of Twitter users toward political issues often diverge from public opinion measured by national polls.

One study of Twitter discourse on climate change from 2008 to 2014 found that users tended to be concerned climate activists, as opposed to “deniers” (Cody, Reagan, Mitchell, Dodds, & Danforth, 2015). Notably, while many social media users are active participants who post and comment frequently, others are “listeners” who follow along but do not make visible contributions to the online community (Crawford, 2009; Preece, Nonnecke, & Andrews, 2004).

Although some researchers believe that content analysis of available social media messages hardly taps the opinions of all users, others have stressed the potential of analyses of social media posts for replacing traditional mass survey designs (Schober et al., 2016). Several studies have demonstrated that Twitter content can accurately forecast election outcomes and shows high correlation with answers to traditional public surveys (e.g., Ceron, Curini, Iacus, & Porro, 2014; Tumasjan, Sprenger, Sandner, & Welpe, 2011). Still, scholars should cautiously interpret the implications of social media content analysis.

Another concern is that the emergence of big data may lead researchers to discard theory-driven content analyses and rely on data-driven approaches. In this way, content analysts of big data are at risk of “inferential circularity” (Simon & Xenos, 2004)—a term that refers to observing significant patterns and then strategically constructing coding categories that elicit desired findings instead of constructing categories and then testing for patterns (boyd & Crawford, 2012; Chow-White & Green, 2013; Ekbia et al., 2015).

The relative ease of achieving statistically significant findings with hundreds of millions of social media data points may make researchers favor data-driven approaches (Ekbia et al., 2015). Enormous corpuses of data increase the likelihood that researchers will observe patterns that happen to be statistically significant but may not actually exist in, or represent the reality in, the general population (boyd & Crawford, 2012; Lewis, Zamith, & Hermida, 2013). It is important for scholars to keep in mind the relative vulnerability to such biased or unfounded inferences when the research design is driven solely by the availability and quantity of social media data.
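This point can be illustrated with a short simulation: as the number of observations grows into the millions, even a substantively negligible association clears conventional significance thresholds. The values below are synthetic and purely illustrative.

```python
# An illustrative simulation: a near-zero association becomes "statistically
# significant" once the sample is large enough. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

for n in (1_000, 100_000, 10_000_000):
    x = rng.normal(size=n)
    y = 0.003 * x + rng.normal(size=n)   # true correlation is roughly 0.003, substantively negligible
    r, p = stats.pearsonr(x, y)
    print(f"n={n:>10,}  r={r:.4f}  p={p:.3g}")

# At n = 10 million the p-value falls far below .05 even though the relationship
# explains essentially none of the variance.
```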

Additionally, online media studies that rely on keywords to remove noise and sample content can sometimes suffer from inconsistent or incomplete search results (Hansen, 2003; Riffe et al., 2005). Using keywords locates relevant content from search engines, digital databases, or social network sites, but discrete sets of search terms can elicit very different results (Hansen, 2003). This may be particularly true for cross-field issues, like climate change, that are less clearly defined and have far-reaching implications.

On the other hand, overly broad word strings may capture irrelevant content and produce false-positive results (Riffe et al., 2005). Although developing highly detailed keyword strings may help solve this issue, doing so is likely to restrict the scope of the search and eliminate potentially relevant data. Using a comprehensive and carefully constructed set of search strings is therefore important to ensure the completeness of the sample of collected data and the validity of the coded content, particularly for a socially complicated issue like climate change.
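The trade-off can be seen even in a toy example: a narrow search string misses relevant posts, while a broad one sweeps in irrelevant ones. The posts and both keyword patterns below are invented for illustration; in practice, candidate strings would be validated against a hand-coded subset of the data.

```python
# A toy illustration of how the choice of search strings changes the sample.
# The example posts and both keyword patterns are invented for illustration.
import re

posts = [
    "New IPCC report warns of accelerating climate change impacts",
    "Global warming is making heat waves longer and more intense",
    "The climate in this office is toxic",                # irrelevant, matches "climate"
    "Carbon tax debate heats up ahead of the election",   # relevant, lacks the core phrases
]

narrow = re.compile(r"climate change|global warming", re.IGNORECASE)
broad = re.compile(r"climate|warming|carbon", re.IGNORECASE)

print("Narrow string keeps:", [p for p in posts if narrow.search(p)])  # misses the carbon-tax post
print("Broad string keeps: ", [p for p in posts if broad.search(p)])   # adds relevant and irrelevant posts
```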

The Path Forward: Linking Content Analysis Data with Other Data

The future of computational communication research is not limited to automated content analysis but is largely dependent on coupling it with other research approaches. One recent trend is to incorporate content analysis with social network analysis that considers the relational dimension among social groups (e.g., Himelboim, Smith, & Shneiderman, 2013). The goal of this type of research design is to examine the network structure of organizations or community groups while digging into the discursive influence of different network groups. One example is Farrell's (2015) work, which uses a network approach to uncover the organizational structure of the climate-change countermovement network, and computational content analysis to examine semantic similarity between written and oral climate-change- or global-warming-related texts retrieved from climate contrarian organizations and texts from news media and political sources (e.g., major news outlets, U.S. presidents, and statements on the floor of the U.S. Congress). The former part of this network analysis provides insights into the network formation of 4,556 individuals tied to 164 organizations that aim to promote climate contrarian views. The latter part of the analysis shows the organizational influence of the climate-change countermovement on media and politics within the contrarian network. Notably, there has been criticism of social network analyses like the one described—particularly that they lack theoretical grounding and tend to reveal statistically significant relationships that are not representative of real-world phenomena (Borgatti, Brass, & Halgin, 2014; boyd & Crawford, 2012).

Other types of emerging approaches pair content analysis with non-user-generated data, such as survey findings, population parameters, or statistics provided by government and nongovernment sectors, or with system-generated data like georeferenced information. On the one hand, research procedures like survey designs or census data that include measurements of antecedent causes or subsequent effects can support causal inferences, strengthen internal validity, and enhance social validity (Riffe et al., 2005). On the other hand, georeferenced social media data, such as geotagged tweets or Facebook posts with location information, can be used to strengthen the depth of content analysis (Crampton et al., 2013; Ghosh & Guha, 2013). Runge et al.'s study (2013), for example, examined the sentiments expressed in nanotechnology-related tweets. This analysis predicted the volume of nanotech-related tweets based on characteristics of each U.S. state's population and the number of nanotech-related research centers and networks, revealing geographic clustering of nanotechnology-related tweets.

Similarly, another study first performed sentiment analysis to categorize genetically modified organism (GMO)-related tweets expressing optimistic, pessimistic, or neutral valences at both national and state levels (Liang et al., 2015). A series of state-level factors like political characteristics and policy events were later compiled to predict the volume of GMO-related tweets that express pessimistic opinions by U.S. state using multivariate regression. These research methods can undoubtedly be applied to the area of climate-change research, given that the formation of climate-change opinion is highly influenced by geographic characteristics and other social factors.
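A stripped-down version of such a state-level regression might look like the sketch below, using the statsmodels library; the data file and predictor names are hypothetical placeholders rather than the variables used in the studies cited above.

```python
# A minimal sketch of a state-level multivariate regression predicting the volume
# of pessimistic posts from state characteristics. File and column names are
# hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

states = pd.read_csv("state_level_data.csv")
# expected (hypothetical) columns: pessimistic_tweets, population, pct_conservative, policy_events

model = smf.ols(
    "pessimistic_tweets ~ population + pct_conservative + policy_events",
    data=states,
).fit()
print(model.summary())
```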

Geotagged data may not be representative of all social media users, as only a small portion share their location or, in some cases, the location is unidentifiable or nonsensical (Ghosh & Guha, 2013). For this reason, researchers should exercise caution when interpreting the results of analyses using geocoded data. Despite these considerations of representativeness and generalizability, such tools do offer researchers the opportunity to develop systematic, reliable, and theory-driven content analysis of climate-change communication that minimizes limitations to data breadth and analysis depth.

Applying These Methods to Future Studies of Climate-Change Communication

While mailed surveys, telephone surveys, and in-person interviews used to be the gold standard for survey research, their costliness and people's growing reluctance to participate in surveys have changed the efficacy of these modes of data collection (Pew Research Center, 2016). The prevalence of the Internet has opened up the opportunity to use online surveys to gauge opinions about different topics within a given target population. The research methods that take advantage of Internet platforms and the ubiquity of new technologies to assess public opinion and behaviors about climate change point to two main directions for future climate-change research in the Web 2.0 environment: online surveys and content analysis of online messages.

There are numerous ways social science researchers can use online tools to survey people about their beliefs, attitudes, and behaviors, with mobile technologies increasingly serving as vehicles for data collection. Online surveys can be used to test causal mechanisms of communication processes or, by means of embedded experiments, to examine the effects of policy interventions or campaign exposure.
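
A survey-embedded experiment typically randomizes each respondent to a message condition before the questionnaire continues. The sketch below shows one way this assignment step might be implemented; the condition labels and respondent identifiers are hypothetical.

```python
# Minimal sketch of random assignment for a survey-embedded experiment.
import random

CONDITIONS = ["control", "uncivil_comments", "civil_comments"]  # hypothetical stimuli

def assign_condition(respondent_id: str, seed: int = 42) -> str:
    # Deterministic per-respondent assignment, so a respondent who reloads
    # the survey sees the same stimulus (a common practical concern).
    rng = random.Random(f"{seed}-{respondent_id}")
    return rng.choice(CONDITIONS)

print(assign_condition("r_0001"))  # e.g., 'civil_comments'
```

The recorded condition is then stored alongside the survey responses and used to estimate treatment effects on the outcome measures.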

Researchers also increasingly mine social media content to assess users’ sentiment, issue awareness, opinions, and online behaviors. The seemingly infinite amount of digital content and the unprecedented ease of extracting it have motivated communication scholars to analyze these discussions. Emerging hybrid content-analysis tools allow researchers to leverage the speed and efficiency of computer-based coding while retaining the sensitivity to latent meaning that human-based coding provides (Su et al., 2017), minimizing some of the pitfalls of each method on its own.
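
One common hybrid workflow, sketched below with fabricated texts and labels, trains a supervised classifier on a human-coded subsample and then applies it to the remaining corpus; this is an illustrative approach rather than the specific procedure of Su et al. (2017).

```python
# Minimal sketch: train on a small human-coded sample, classify the rest.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_coded_texts = [
    "the climate is clearly warming and we must act",
    "global warming is a hoax pushed by alarmists",
    "rising seas threaten coastal towns",
    "so-called experts keep exaggerating the danger",
]
human_labels = ["accepts", "dismisses", "accepts", "dismisses"]  # hypothetical scheme

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(human_coded_texts, human_labels)

uncoded_corpus = ["new report links heat waves to emissions"]
print(clf.predict(uncoded_corpus))
```

In a real study the human-coded sample would be far larger, intercoder reliability would be established first, and classifier performance would be validated against held-out human codes before scaling up.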

While data mining of social media content can be less expensive, less arduous, and more responsive to current events than survey research, some question whether content analyses of social media messages can replace more traditional (online) survey studies. There is as yet no empirical evidence to settle this question, but past literature suggests thinking about it from four perspectives: participants’ motivation and understanding of their role in the research; ethical concerns; the nature of the data (responses are elicited in surveys but “found” in social media content); and the practical aspects of data collection and analysis (Schober et al., 2016).

Survey respondents are usually selected by researchers and know they are participating in research, so they may adjust their answers as they see fit; their responses also tend to be spontaneous and less deliberated. Social media users, by contrast, may be less cognizant that their content could be analyzed, yet (arguably) more careful about what they share, because posts are composed for an audience. Social media content is also shaped by the perceived norms of the particular online community.

Researchers should keep in mind the relevant ethical implications whether they use online surveys and experiments or social media content analysis. While there are established protocols for survey research, which require informed consent and the opportunity to opt out, social media analyses are relatively new and are not consistently regulated by ethics boards. Most social media users post with little or no awareness that they might be studied, a point that sparked public discussion following a 2014 study in which hundreds of thousands of users’ Facebook feeds were manipulated to test for “emotional contagion” (Kramer, Guillory, & Hancock, 2014; Meyer, 2014).

On the technical side, it is important to consider how the nature of the data varies across methods and what kinds of inferences each can support. Probability-based online surveys rely on representative sampling of a population, whereas analyses of social media content are not generalizable to a larger population, because users’ characteristics do not mirror those of the general public. In addition, the sampling units of surveys are typically individuals or organizations, while the sampling units of social media analysis are posts, or even the entire population of textual data (Su et al., 2017).

Researchers must also weigh their financial and technical limitations in terms of data access, collection, and analysis. Probability-based data collection is usually costly, and survey design can be time intensive. Social media data retrieval and extraction can likewise be expensive if a study requires complete data (e.g., access to the Twitter Firehose), and analyses of big data demand a high level of technical skill.

There is no definitive answer to whether social media content analysis or online survey research is better at gauging public opinion and behaviors. Researchers should weigh the specific features of each method, as well as the subtypes of online surveys and social media sites, because the scope and quality of the data they provide can differ substantially (Schober et al., 2016). For example, although findings from probability-based survey samples are representative of the population, online surveys that use nonprobability panels or convenience samples may be problematic. Likewise, the user bases and communicative dynamics of different social media sites vary significantly.

Finally, while Twitter is used as an exemplar here, researchers interested in social media analysis should keep in mind the qualitative differences across platforms in the content shared (e.g., text compared to images), audience composition, and privacy settings. These factors influence data retrieval and study design, as well as the conclusions that can be drawn from the research.

Suggested Future Research for the Climate-Change Field

Undoubtedly, the ability to capture public opinion data through online surveys and content analysis of social media can be, and has been, applied to the issue of climate change. There are many opportunities, including longitudinal and cross-cultural research, testing the overall social and normative effects of communication processes on attitudes and beliefs, and incorporation of data at multiple levels and from multiple sources, such as regional climate consequences, survey responses, and social media content.

There is a significant amount of research documenting public opinion about climate change, but most of that work has been conducted in Western, industrialized nations (Leiserowitz, 2007). Research is still lacking in areas of the world that may be most vulnerable to the consequences of climate change (IPCC, 2014). The increasing worldwide use of Internet and mobile technologies could facilitate surveying segments of the public that are thought to be hard to reach but whose input is valuable and whose lives could be greatly affected by climate change. Along these lines, mobile or web technologies could be leveraged to recontact survey respondents, generating longitudinal data that track changes in climate-change beliefs in relation to particular events or policies.

Given the prevalence and popularity of social media sites, studies of the social and normative impacts of communication cues transmitted via new media technologies are another promising area for future research. Because such cues can modify people’s beliefs about complex scientific topics like climate change (e.g., Spartz et al., 2017), these factors will likely continue to be important in climate-change public opinion research. Through the utilization of experiment-embedded online surveys and mobile-based surveys, researchers are able to examine the extent to which different features of social media sites influence opinion formation about, and behavioral engagement in, climate change.

The ability to access and share survey data obtained from publics across the world also provides opportunities for analysis of understudied regions or cross-national comparisons. Access to such data allows researchers to conduct cross-cultural comparisons of people’s attitudes about climate change depending on regional environmental conditions, climate or energy policies, or vulnerability, as well as personal beliefs and values related to climate change (e.g., Akin, 2015). Other data points, such as geolocation, photo-elicitation, or media exposure, could also be used to complement survey data, providing more robust indicators of personal attitudes and beliefs.
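
A simple illustration of such data enrichment, using fabricated survey responses and country-level indicators, is sketched below; the column names and values are hypothetical.

```python
# Minimal sketch: join individual survey responses to country-level indicators
# (all values fabricated) to support cross-national comparisons.
import pandas as pd

survey = pd.DataFrame([
    {"respondent": 1, "country": "BD", "climate_concern": 4.5},
    {"respondent": 2, "country": "US", "climate_concern": 3.1},
])
country_indicators = pd.DataFrame([
    {"country": "BD", "vulnerability_index": 0.62, "internet_pct": 25},
    {"country": "US", "vulnerability_index": 0.31, "internet_pct": 87},
])
enriched = survey.merge(country_indicators, on="country", how="left")
print(enriched)
```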

It is somewhat fortuitous that research methods are rapidly advancing at the same time the world faces this unprecedented global challenge. There is also a growing awareness that science controversies, including climate change, cannot be resolved by merely disseminating the scientific research to the public (Sarewitz, 2015). Instead, controversies are subject to a complex interplay between publics, scientists, and policymakers and are reliant on communication systems, many of which are online (Brossard & Scheufele, 2013). As summarized here, Internet-based research methods—particularly online surveys and social media content analysis—can be rigorously leveraged to record and analyze public views about climate change. However, as with any developing research methodology, scholars must exercise caution and be conscientious about the representativeness of their samples as they employ these tools and generalize results.

References

AAPOR. (2016). Standard definitions: Final dispositions of case codes and outcome rates for surveys (9th ed.). American Association of Public Opinion Research. Retrieved from http://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf.

Akin, H. (2015). The role of values, norms, and media use in public perceptions of climate change: A cross-cultural and U.S. analysis (Dissertation No. 3722959). Retrieved from ProQuest Dissertations & Theses Global.

An, X., Ganguly, A. R., Fang, Y., Scyphers, S. B., Hunter, A. M., & Dy, J. G. (2014). Tracking climate change opinions from Twitter data. Paper presented at the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York.

Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A., & Ladwig, P. (2013). The “nasty effect”: Online incivility and risk perceptions of emerging technologies. Journal of Computer-Mediated Communication, 19(3), 373–387.

Anderson, M. (2015). Technology device ownership: 2015. Washington, DC: Pew Research Center. Retrieved from http://www.pewinternet.org/2015/10/29/technology-device-ownership-2015.

Annett, M., & Kondrak, G. (2008). A comparison of sentiment analysis techniques: Polarizing movie blogs. In S. Bergler (Ed.), Advances in artificial intelligence (Vol. 5032, pp. 25–35). Berlin: Springer.

Baker, R., Blumberg, S. J., Brick, J. M., Couper, M. P., Courtright, M., Dennis, J. M., … Zahs, D. (2010). Research synthesis: AAPOR report on online panels. Public Opinion Quarterly, 74(4), 711–781.

Baker, R., Brick, J. M., Bates, N. A., Battaglia, M., Couper, M. P., Dever, J. A., … Tourangeau, R. (2013). Summary report of the AAPOR task force on non-probability sampling. Journal of Survey Statistics and Methodology, 1(2), 90–143.

Baldwin, C., & Chandler, L. (2010). “At the water’s edge”: Community voices on climate change. Local Environment, 15(7), 637–649.

Barabas, J., & Jerit, J. (2010). Are survey experiments externally valid? American Political Science Review, 104(2), 226–242.

Bennett, N. J., & Dearden, P. (2013). A picture of change: Using photovoice to explore social and environmental change in coastal communities on the Andaman coast of Thailand. Local Environment, 18(9), 983–1001.

Berelson, B. (1952). Content analysis in communication research. New York: Free Press.

Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Political Analysis, 20(3), 351–368.

Billett, S. (2009). Dividing climate change: Global warming in the Indian mass media. Climatic Change, 99(1–2), 1.

Boiy, E., & Moens, M.-F. (2009). A machine learning approach to sentiment analysis in multilingual web texts. Information Retrieval, 12(5), 526–558.

Bord, R. J., Fisher, A., & O’Connor, R. E. (1998). Public perceptions of global warming: United States and international perspectives. Climate Research, 11, 75.

Borgatti, S. P., Brass, D. J., & Halgin, D. S. (2014). Social network research: Confusions, criticisms, and controversies. In D. J. Brass, G. Labianca, A. Mehra, D. S. Halgin, & S. P. Borgatti (Eds.), Contemporary perspectives on organizational social networks. Research in the Sociology of Organizations (Vol. 40, pp. 1–29). Bingley, U.K.: Emerald.

boyd, d. m., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.

boyd, d. m., & Ellison, N. B. (2008). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), 210–230.

Brick, J. M. (2011). The future of survey sampling. Public Opinion Quarterly, 75(5), 872–888.

Brossard, D. (2013). New media landscapes and the science information consumer. Proceedings of the National Academy of Sciences, 110(3), 14096.

Brossard, D., & Scheufele, D. A. (2013). Science, new media, and the public. Science, 339(6115), 40–41.

Brossard, D., Shanahan, J., & McComas, K. (2004). Are issue-cycles culturally constructed? A comparison of French and American coverage of global climate change. Mass Communication and Society, 7(3), 359.

Brugger, A., Dessai, S., Devine-Wright, P., Morton, T. A., & Pidgeon, N. F. (2015). Psychological responses to the proximity of climate change. Nature Climate Change, 5(12), 1031–1037.

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3–5.

Callegaro, M., Baker, R., Bethlehem, J., Göritz, A. S., Krosnick, J. A., & Lavrakas, P. J. (2014). Online panel research. In M. Callegaro, R. Baker, J. Bethlehem, A. S. Göritz, J. A. Krosnick, & P. J. Lavrakas (Eds.), Online panel research: A data quality perspective (pp. 1–22). Chichester, U.K.: John Wiley.

Capstick, S., Whitmarsh, L., Poortinga, W., Pidgeon, N., & Upham, P. (2015). International trends in public perceptions of climate change over the past quarter century. Wiley Interdisciplinary Reviews: Climate Change, 6(1), 35–61.

Carvalho, A., & Burgess, J. (2005). Cultural circuits of climate change in U.K. broadsheet newspapers, 1985–2003. Risk Analysis, 25(6), 1457–1469.

Ceron, A., Curini, L., Iacus, S. M., & Porro, G. (2014). Every tweet counts? How sentiment analysis of social media can improve our knowledge of citizens’ political preferences with an application to Italy and France. New Media & Society, 16(2), 340–358.

Chaix, B., Méline, J., Duncan, S., Merrien, C., Karusisi, N., Perchoux, C., … Kestens, Y. (2013). GPS tracking in neighborhood and health studies: A step forward for environmental exposure assessment, a step backward for causal inference? Health & Place, 21, 46–51.

Chow-White, P. A., & Green, S. E. G. (2013). Data mining difference in the age of big data: Communication and the social shaping of genome technologies from 1998 to 2007. International Journal of Communication, 7, 556–583.

Cody, E. M., Reagan, A. J., Mitchell, L., Dodds, P. S., & Danforth, C. M. (2015). Climate change sentiment on Twitter: An unsolicited public opinion poll. PLoS ONE, 10(8), e0136092.

Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64(4), 464–494.

Crampton, J. W., Graham, M., Poorthuis, A., Shelton, T., Stephens, M., Wilson, M. W., & Zook, M. (2013). Beyond the geotag: Situating “big data” and leveraging the potential of the geoweb. Cartography and Geographic Information Science, 40(2), 130–139.

Crawford, K. (2009). Following you: Disciplines of listening in social media. Continuum: Journal of Media & Cultural Studies, 23(4), 532–533.

Das, J., Bacon, W., & Zaman, A. (2009). Covering the environmental issues and global warming in Delta land: A study of three newspapers. Pacific Journalism Review, 15(2), 10–33.

de Bruijne, M., & Oudejans, M. (2015). Online surveys and the burden of mobile responding. In U. Engel (Ed.), Survey measurements: Techniques, data quality and sources of error (pp. 130–145). Frankfurt: Campus Verlag.

Dixon, G. N., McKeever, B. W., Holton, A. E., Clarke, C., & Eosco, G. (2015). The power of a picture: Overcoming scientific misinformation by communicating weight-of-evidence information with visual exemplars. Journal of Communication, 65(4), 639–659.

Driscoll, K., & Thorson, K. (2015). Searching and clustering methodologies: Connecting political communication content across platforms. The ANNALS of the American Academy of Political and Social Science, 659(1), 134–148.

Duggan, M. (2015). Mobile messaging and social media 2015. Pew Research Center. Retrieved from http://www.pewinternet.org/2015/08/19/mobile-messaging-and-social-media-2015/.

Dunlap, R. E., & McCright, A. M. (2008). A widening gap: Republican and Democratic views on climate change. Environment: Science and Policy for Sustainable Development, 50(5), 26–35.

Egan, P. J., & Mullin, M. (2012). Turning personal experience into political attitudes: The effect of local weather on Americans’ perceptions about global warming. Journal of Politics, 74(3), 796–809.

Eichmeier, A., Wirz, C., Brossard, D., Scheufele, D., Xenos, M., & Stenhouse, N. (2016). Has Pope Francis changed the framing of climate change discourse online? Paper presented at the 2016 American Association for the Advancement of Science Annual Meeting, Washington, DC.

Ekbia, H., Mattioli, M., Kouper, I., Arave, G., Ghazinejad, A., Bowman, T., … Sugimoto, C. R. (2015). Big data, bigger dilemmas: A critical review. Journal of the Association for Information Science and Technology, 66(8), 1523–1545.

Farrell, J. (2015). Network structure and influence of the climate change counter-movement. Nature Climate Change, advance online publication.

Fricker, S., Galesic, M., Tourangeau, R., & Yan, T. (2005). An experimental comparison of web and telephone surveys. Public Opinion Quarterly, 69(3), 370–392.

Ghosh, D., & Guha, R. (2013). What are we “tweeting” about obesity? Mapping tweets with topic modeling and geographic information system. Cartography and Geographic Information Science, 40(2), 90–102.

Gordon, J. C., Deines, T., & Havice, J. (2010). Global warming coverage in the media: Trends in a Mexico City newspaper. Science Communication, 32(2), 143–170.

Griffin, K. S., Yeo, S. K., Handlos, Z., Karambelas, A., Su, L. Y.-F., Doolen, J., & Brossard, D. (2014). The influence of weather on Twitter discourse of #climatechange and #globalwarming. Paper presented at the 27th Conference on Severe Local Storms hosted by the American Meteorological Society, Madison, WI.

Hansen, K. A. (2003). Using databases for content analysis. In G. H. Stempel III, D. H. Weaver, & G. C. Wilhoit (Eds.), Mass communication research and theory (pp. 220–230). Boston: Allyn & Bacon.

Himelboim, I., Smith, M., & Shneiderman, B. (2013). Tweeting apart: Applying network analysis to detect selective exposure clusters in Twitter. Communication Methods and Measures, 7(3–4), 195–223.

Hodges, H. E., & Stocking, G. (2016). A pipeline of tweets: Environmental movements’ use of Twitter in response to the Keystone XL pipeline. Environmental Politics, 25(2), 223–247.

Holsti, O. R. (1969). Content analysis for the social sciences and humanities. Reading, MA: Addison-Wesley.

Hopke, J. E., & Simis, M. (2015). Discourse over a contested technology on Twitter: A case study of hydraulic fracturing. Public Understanding of Science.

Howe, P. D., Mildenberger, M., Marlon, J. R., & Leiserowitz, A. (2015). Geographic variation in opinions on climate change at state and local scales in the USA. Nature Climate Change, 5(6), 596–603.

International Telecommunication Union. (2015). ITU ICT facts and figures. Retrieved from http://www.itu.int/en/ITU-D/Statistics/Pages/facts/default.aspx.

IPCC. (2014). Climate change 2014: Impacts, adaptation, and vulnerability. Cambridge, U.K., and New York: Cambridge University Press. Retrieved from http://www.ipcc.ch/report/ar5/wg2/.

Jamieson, K. H., & Cappella, J. N. (2008). Echo chamber: Rush Limbaugh and the conservative media establishment. New York: Oxford University Press.

Jamieson, K. H., & Hardy, B. W. (2014). Leveraging scientific credibility about Arctic sea ice trends in a polarized political environment. Proceedings of the National Academy of Sciences of the United States of America, 111, 13598–13605.

Jang, S. M., & Hart, P. S. (2015). Polarized frames on “climate change” and “global warming” across countries and states: Evidence from Twitter big data. Global Environmental Change, 32, 11–17.

Kahan, D., Peters, E., Wittlin, M., Slovic, P., Ouellette, L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10), 732–735.

Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A comparison of web and mail survey response rates. Public Opinion Quarterly, 68(1), 94–101.

Keeter, S. (2015). Methods can matter: Where web surveys produce different results than phone interviews. FactTank. Washington, DC: Pew Research Center.

Kerlinger, F. N., & Lee, H. B. (1999). Foundations of behavioral research (4th ed.). Belmont, CA: Wadsworth.

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790.

Krippendorff, K. (2012). Content analysis: An introduction to its methodology (3rd ed.). Thousand Oaks, CA: SAGE.

Krosnick, J. A., Presser, S., Fealing, K. H., & Ruggles, S. (2012). The future of survey research: Challenges and opportunities. Report presented by the National Science Foundation Advisory Committee for the Social, Behavioral and Economic Sciences Subcommittee on Advancing SBE Survey Research. Retrieved from http://www.nsf.gov/sbe/AC_Materials/The_Future_of_Survey_Research.pdf.

Ladwig, P., Anderson, A. A., Brossard, D., Scheufele, D. A., & Shaw, B. (2010). Narrowing the nano discourse? Materials Today, 13(5), 52.

Lee, T. M., Markowitz, E. M., Howe, P. D., Ko, C.-Y., & Leiserowitz, A. A. (2015). Predictors of public climate change awareness and risk perception around the world. Nature Climate Change, 5, 1014–1020.

Leiserowitz, A. (2007). International public opinion, perception, and understanding of global climate change. Human Development Report, 2007/2008. Human Development Report Office Occasional Paper.

Leiserowitz, A., Maibach, E., Roser-Renouf, C., Feinberg, G., & Howe, P. (2012). The climate change in the American mind series: Americans’ global warming beliefs and attitudes in September, 2012. New Haven, CT: Yale Project on Climate Change Communication, Yale University and George Mason University. Retrieved from http://environment.yale.edu/climate-communication/files/Global-Warming-Religion-March-2015.pdf.

Lewis, S. C., Zamith, R., & Hermida, A. (2013). Content analysis in an era of big data: A hybrid approach to computational and manual methods. Journal of Broadcasting & Electronic Media, 57(1), 34–52.

Li, N., Akin, H., Su, L. Y.-F., Xenos, M. A., Scheufele, D. A., & Brossard, D. (2016). Tweeting disaster: A content analysis of online discourse about nuclear power in the wake of the Fukushima Daiichi nuclear accident. Journal of Science Communication, 15(05), A02.

Li, Y., Johnson, E. J., & Zaval, L. (2011). Local warming: Daily temperature change influences belief in global warming. Psychological Science.

Liang, X., Runge, K. K., Wirz, C., Brossard, D., Scheufele, D. A., & Xenos, M. (2015, October). Tweeting GMOs: An analysis of public discourse surrounding genetically modified organisms in social media environments. Paper presented at the annual convention of the Association for Politics and the Life Sciences, Madison, WI.

Lineman, M., Do, Y., Kim, J. Y., & Joo, G.-J. (2015). Talking about climate change and global warming. PLoS ONE, 10(9), e0138996.

Link, M. W., Murphy, J., Schober, M. F., Buskirk, T. D., Hunter Childs, J., & Langer Tesfaye, C. (2014). Mobile technologies for conducting, augmenting, and potentially replacing surveys: Report of the AAPOR task force on emerging technologies in public opinion research. American Association for Public Opinion Research.

Liu, B., & Zhang, L. (2012). A survey of opinion mining and sentiment analysis. In C. C. Aggarwal & C. Zhai (Eds.), Mining text data (pp. 415–463). New York: Springer.

Lorenzoni, I., Leiserowitz, A., de Franca Doria, M., Poortinga, W., & Pidgeon, N. F. (2006). Cross-national comparisons of image associations with “global warming” and “climate change” among laypeople in the United States of America and Great Britain. Journal of Risk Research, 9(3), 265–281.

Lorenzoni, I., & Pidgeon, N. F. (2006). Public views on climate change: European and USA perspectives. Climatic Change, 77, 73.

Lynn, P., & Kaminska, O. (2013). The impact of mobile phones on survey measurement error. Public Opinion Quarterly, 77(2).Find this resource:

Manovich, L. (2012). Trending: The promises and the challenges of big social data. In M. K. Gold (Ed.), Debates in the digital humanities (pp. 460–475). Minneapolis: University of Minnesota Press.

Mason, W., & Suri, S. (2011). Conducting behavioral research on Amazon’s Mechanical Turk. Behavior Research Methods, 44(1), 1–23.

McCright, A. M., Dunlap, R. E., & Marquart-Pyatt, S. T. (2016). Political ideology and views about climate change in the European Union. Environmental Politics, 25(2), 338–358.

McMillan, S. J. (2000). The microscope and the moving target: The challenge of applying content analysis to the World Wide Web. Journalism & Mass Communication Quarterly, 77(1), 80–98.

Meyer, R. (2014). Everything we know about Facebook’s secret mood manipulation experiment. The Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/.

Morstatter, F., Pfeffer, J., Liu, H., & Carley, K. M. (2013). Is the sample good enough? Comparing data from Twitter’s streaming API with Twitter’s Firehose. In Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media, Cambridge, MA.

Murphy, J., Link, M. W., Hunter Childs, J., Langer Tesfaye, C., Dean, E., Stern, M., … Harwood, P. (2014). Social media in public opinion research: Report of the AAPOR task force on emerging technologies in public opinion research. American Association for Public Opinion Research.

National Science Board. (2016). Science and engineering indicators 2016. National Science Foundation. Retrieved from http://www.nsf.gov/statistics/2016/nsb20161/.

Newman, T. P. (2016). Tracking the release of IPCC AR5 on Twitter: Users, comments, and sources following the release of the working group 1 summary for policymakers. Public Understanding of Science.

Nisbet, M. C., & Myers, T. (2007). Trends: Twenty years of public opinion about global warming. Public Opinion Quarterly, 71(3), 444.

Olausson, U. (2010). Towards a European identity? The news media and the case of climate change. European Journal of Communication, 25(2), 138.

Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1–2), 1–135.

Pearce, W., Holmberg, K., Hellsten, I., & Nerlich, B. (2014). Climate change on Twitter: Topics, communities and conversations about the 2013 IPCC working group 1 report. PLoS ONE, 9(4), e94785.

Pew Research Center. (2013). Twitter reaction to events often at odds with overall public opinion. Retrieved from http://www.pewresearch.org/2013/03/04/twitter-reaction-to-events-often-at-odds-with-overall-public-opinion/.

Pew Research Center. (2015). Social media update 2014. Retrieved from http://www.pewinternet.org/2015/01/09/social-media-update-2014/.

Pew Research Center. (2016). U.S. survey research: Collecting survey data. Retrieved from http://pewrsr.ch/1A3Bbrf.

Pew Research Internet Project. (2013). Teens, social media, and privacy. Retrieved from http://www.pewinternet.org/2013/05/21/teens-social-media-and-privacy/.

Preece, J., Nonnecke, B., & Andrews, D. (2004). The top five reasons for lurking: Improving community experiences for everyone. Computers in Human Behavior, 20(2), 201–223.

Reips, U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49(4), 243–256.

Riffe, D., Lacy, S., & Fico, F. G. (2005). Analyzing media messages: Using quantitative analysis in research. Mahwah, NJ: Lawrence Erlbaum Associates.

Runge, K. K., Yeo, S. K., Cacciatore, M. A., Scheufele, D. A., Brossard, D., Xenos, M. A., … Su, L. Y.-F. (2013). Tweeting nano: How public discourses about nanotechnology develop in social media environments. Journal of Nanoparticle Research, 15(1381).

Saloranta, T. M. (2001). Post-normal science and the global climate change issue. Climatic Change, 50(4), 395–404.

Sarewitz, D. (2015). Science can’t solve it. Nature, 522(7557), 413–414.

Scheufele, D. A. (2010). Survey research. In S. H. Priest (Ed.), Encyclopedia of science and technology communication. Thousand Oaks, CA: SAGE.

Schmidt, A., Ivanova, A., & Schäfer, M. S. (2013). Media attention for climate change around the world: A comparative analysis of newspaper coverage in 27 countries. Global Environmental Change, 23(5), 1233–1248.

Schober, M. F., Pasek, J., Guggenheim, L., Lampe, C., & Conrad, F. G. (2016). Research synthesis: Social media analyses for social measurement. Public Opinion Quarterly.

Segerberg, A., & Bennett, W. L. (2011). Social media and the organization of collective action: Using Twitter to explore the ecologies of two climate change protests. The Communication Review, 14(3), 197–215.

Sherren, K., & Verstraten, C. (2013). What can photo-elicitation tell us about how maritime farmers perceive wetlands as climate changes? Wetlands, 33(1), 65–81.

Simon, A. F., & Xenos, M. (2004). Dimensional reduction of word-frequency data as a substitute for intersubjective content analysis. Political Analysis, 12(1), 63–75.

Sjøvaag, H., & Stavelin, E. (2012). Web media and the quantitative content analysis: Methodological challenges in measuring online news content. Convergence: The International Journal of Research into New Media Technologies.

Smith, A. (2012). The best (and worst) of mobile connectivity. Washington, DC: Pew Research Center. Retrieved from http://pewinternet.org/Reports/2012/Best-Worst-Mobile.aspx.

Spartz, J. T., Su, L. Y.-F., Griffin, R., Brossard, D., & Dunwoody, S. (2017). YouTube, social norms and perceived salience of climate change in the American mind. Environmental Communication, 11(1), 1–16.

Stieglitz, S., & Dang-Xuan, L. (2013). Social media and political communication: A social media analytics framework. Social Network Analysis and Mining, 3(4), 1277–1291.

Strohmaier, M., & Wagner, C. (2014). Computational social science for the World Wide Web. IEEE Intelligent Systems, 29(5), 84–88.

Su, L. Y.-F., Akin, H., Brossard, D., Scheufele, D. A., & Xenos, M. A. (2015). Science news consumption patterns and their implications for public understanding of science. Journalism & Mass Communication Quarterly, 92(3).

Su, L. Y.-F., Cacciatore, M. A., Liang, X., Brossard, D., Scheufele, D. A., & Xenos, M. A. (2017). Analyzing public sentiments online: Combining human- and computer-based content analysis. Information, Communication & Society, 20(3), 406–427.

The World Bank. (2016). World development indicators. Retrieved from http://data.worldbank.org/topic/infrastructure.

Thurman, N., & Schifferes, S. (2012). The future of personalization at news websites. Journalism Studies, 13(5–6), 775–790.

Tumasjan, A., Sprenger, T. O., Sandner, P. G., & Welpe, I. M. (2011). Election forecasts with Twitter: How 140 characters reflect the political landscape. Social Science Computer Review, 29, 402–418.

Twitter. (2017). Twitter usage/company facts. Retrieved from https://about.twitter.com/company.

Van House, N. A. (2006). Interview viz: Visualization-assisted photo elicitation. Paper presented at the CHI’06 Extended Abstracts on Human Factors in Computing Systems, Montreal, Quebec, Canada.

Wang, C. C., & Burris, M. A. (1997). Photovoice: Concept, methodology, and use for participatory needs assessment. Health Education & Behavior, 24(3), 369–387.

Wang, C. C., Yi, W. K., Tao, Z. W., & Carovano, K. (1998). Photovoice as a participatory health promotion strategy. Health Promotion International, 13(1), 75–86.

Wang, H., Can, D., Kazemzadeh, A., Bar, F., & Narayanan, S. (2012). A system for real-time Twitter sentiment analysis of 2012 US presidential election cycle. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (pp. 115–120), Jeju, Republic of Korea, July 8–14, 2012.

Whitmarsh, L. (2011). Scepticism and uncertainty about climate change: Dimensions, determinants and change over time. Global Environmental Change, 21(2), 690–700.

Wilson Rowe, E. (2009). Who is to blame? Agency, causality, responsibility and the role of experts in Russian framings of global climate change. Europe-Asia Studies, 61(4), 593.

Wong-Parodi, G., Fischhoff, B., & Strauss, B. (2014). A method to evaluate the usability of interactive climate change impact decision aids. Climatic Change, 126(3), 485–493.

Wright, K. B. (2006). Researching Internet-based populations: Advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services. Journal of Computer-Mediated Communication, 10(3).

Ye, C., Fulton, J., & Tourangeau, R. (2011). More positive or more extreme? A meta-analysis of mode differences in response choice. Public Opinion Quarterly, 75(2), 349–365.

Yeo, S. K., Xenos, M., Brossard, D., & Scheufele, D. A. (2014). Disconnected discourses: How popular discourse about nanotechnology is missing the point. Materials Today, 17(2), 48–49.

Zaval, L., Keenan, E. A., Johnson, E. J., & Weber, E. U. (2014). How warm days increase belief in global warming. Nature Climate Change, 4(2), 143–147.

Zehr, S. C. (2000). Public representations of scientific uncertainty about global climate change. Public Understanding of Science, 9(2), 85.