Benjamin Mark Sanderson
Long-term planning for many sectors of society—including infrastructure, human health, agriculture, food security, water supply, insurance, conflict, and migration—requires an assessment of the range of possible futures which the planet might experience. Unlike short-term forecasts, which can be validated against subsequent observations, long-term forecasts have almost no validation data. As a result, researchers must rely on supporting evidence to make their projections. A review of methods for quantifying the uncertainty of climate predictions is given. The primary tools for quantifying these uncertainties are climate models, which attempt to represent all the processes relevant to climate change. However, neither the construction nor the calibration of climate models is perfect, and therefore the uncertainties due to model errors must also be taken into account in the uncertainty quantification.
Typically, prediction uncertainty is quantified by generating ensembles of solutions from climate models to span possible futures. For instance, initial condition uncertainty is quantified by generating an ensemble of initial states that are consistent with available observations and then integrating the climate model starting from each initial condition. A climate model is itself subject to uncertain choices in modeling certain physical processes. Some of these choices can be sampled using so-called perturbed physics ensembles, whereby uncertain parameters or structural switches are perturbed within a single climate model framework. For a variety of reasons, there is a strong reliance on so-called ensembles of opportunity, which are multi-model ensembles (MMEs) formed by collecting predictions from different climate modeling centers, each using a potentially different framework to represent relevant processes for climate change. The most extensive collection of these MMEs is associated with the Coupled Model Intercomparison Project (CMIP). However, the component models have biases, simplifications, and interdependencies that must be taken into account when making formal risk assessments. Techniques and concepts for integrating model projections in MMEs are reviewed, including differing paradigms of ensembles and how they relate to observations and reality. Aspects of these conceptual issues then inform the more practical matters of how to combine and weight model projections to best represent the uncertainties associated with projected climate change.
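To make the notion of combining and weighting MME projections concrete, the following minimal sketch (in Python, with entirely invented data and an assumed inverse-RMSE weighting, one simple choice among the many discussed in the literature) computes a skill-weighted ensemble mean and spread. It illustrates the general idea, not any specific method endorsed by the article.

```python
import numpy as np

# Hypothetical data: historical simulations from 5 models, an observed
# reference, and future projections, at 100 grid points (all invented).
rng = np.random.default_rng(0)
n_models, n_points = 5, 100
historical = rng.normal(14.0, 1.0, size=(n_models, n_points))
observed = rng.normal(14.0, 0.5, size=n_points)
projection = historical + rng.normal(3.0, 0.5, size=(n_models, n_points))

# Skill weights: inverse RMSE of each model against observations,
# normalized to sum to one.
rmse = np.sqrt(np.mean((historical - observed) ** 2, axis=1))
weights = (1.0 / rmse) / np.sum(1.0 / rmse)

# Weighted ensemble mean, and a weighted variance as a crude measure
# of the remaining inter-model uncertainty.
weighted_mean = weights @ projection
weighted_var = weights @ (projection - weighted_mean) ** 2

print(f"projected value: {weighted_mean.mean():.2f} "
      f"+/- {np.sqrt(weighted_var).mean():.2f}")
```

Real weighting schemes additionally account for model interdependence and shared biases, which a naive inverse-error weight like this one ignores.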
Fedor Mesinger, Miodrag Rančić, and R. James Purser
The astonishing development of computer technology since the mid-20th century has been accompanied by a corresponding proliferation in the numerical methods that have been developed to improve the simulation of atmospheric flows. This article reviews some of the numerical developments concerning the ongoing improvement of weather forecasting and climate simulation models. Early computers were single-processor machines with severely limited memory capacity and computational speed, requiring simplified representations of the atmospheric equations and low resolution. As the hardware evolved and memory and speed increased, it became feasible to accommodate more complete representations of the dynamic and physical atmospheric processes. These more faithful representations of the so-called primitive equations included dynamic modes that are not necessarily of meteorological significance, which in turn led to additional computational challenges. Understanding which problems required attention and how they should be addressed was neither a straightforward nor a unique process, and it resulted in the variety of approaches that are summarized in this article. At about the turn of the century, the most dramatic developments in hardware were the inauguration of the era of massively parallel computers and the vast increase in the amount of rapidly accessible memory that the new architectures provided. These advances and opportunities have demanded a thorough reassessment of the numerical methods that are most successfully adapted to this new computational environment. This article combines a survey of the important historical landmarks with a somewhat speculative review of methods that, at the time of writing, seem to hold out the promise of further advancing the art and science of atmospheric numerical modeling.
Hail has been identified as the largest contributor to insured losses from thunderstorms globally, with losses costing the insurance industry billions of dollars each year. Yet, of all precipitation types, hail is probably subject to the largest uncertainties. Some might go so far as to argue that observing and forecasting hail is as difficult as, if not more difficult than, forecasting tornadoes. The reasons why hail is challenging are many and varied, and they are reflected by the fact that hailstones display a wide variety of shapes, sizes, and internal structures. There is also an important clue in this diversity—nature is telling us that hail can grow by following a wide variety of trajectories within thunderstorms, each having a unique set of conditions. It is because of this complexity that modeling hail growth and forecasting size is so challenging. Consequently, it is understandable that predicting the occurrence and size of hail can seem an impossible task.
Through persistence, ingenuity, and technology, scientists have made progress in understanding the key ingredients and processes at play. Technological advances mean that we can now, with some confidence, identify those storms that very likely contain hail and even estimate the maximum expected hail size on the ground hours in advance. Even so, there is still much to learn about the many intriguing aspects of hail growth.
Ricardo García-Herrera and David Barriopedro
The Mediterranean is a semi-enclosed sea surrounded by Europe to the north, Asia to the east, and Africa to the south. It covers an area of approximately 2.5 million km², between 30°N and 46°N latitude and between 6°W and 36°E longitude. The term Mediterranean climate is applied beyond the Mediterranean region itself and has been used since the early 20th century to classify other regions of the world, such as California or South Africa, usually located in the 30°–40° latitudinal band. The Mediterranean climate can be broadly characterized by warm to hot, dry summers and mild, wet winters. However, this broad picture hides important variations, which can be explained through the existence of two geographical gradients: North/South, with a warmer and drier south, and West/East, with the west more influenced by the Atlantic circulation and the east by the Asian circulation.
The region is located at a crossroads between the mid-latitudes and the subtropical regimes. Thus, small changes in the Atlantic storm track may lead to dramatic changes in the precipitation of the northwestern area of the basin. The variability of the descending northern branch of the Hadley cell influences the climate of the southern margin, while the climate of the eastern border is conditioned by the Siberian High in winter and the Indian Summer Monsoon in summer. All these large-scale factors are modulated by the complex orography of the region, the contrasting albedo, and the moisture and heat supplied by the Mediterranean Sea. The interactions among all these factors lead to a complex picture with some relevant phenomena characteristic of the Mediterranean region, such as heatwaves and droughts, Saharan dust intrusions, or specific types of cyclogenesis.
Climate model projections generally agree in characterizing the region as a climate change hotspot, one of the areas of the globe likely to suffer the most pronounced climate changes. Anthropogenic influences are not new, since the region is densely populated and is home to some of the oldest civilizations on Earth. This has produced multiple and continuous modifications of the land cover, with measurable impacts on climate that can be traced from the rich available documentary evidence and high-resolution natural proxies.
Climate and simulation have become interwoven concepts during the past decades because, on the one hand, climate scientists should not experiment with the real climate and, on the other hand, societies want to know how the climate will change in the coming decades. Both in-silico experiments for a better understanding of climatic processes and forecasts of possible futures can be achieved only by using climate models. The article investigates the possibilities and problems of model-mediated knowledge for science and society. It explores historically how climate became a subject of science and of simulation, what kind of infrastructure is required to apply models and simulations properly, and how model-mediated knowledge can be evaluated. In addition to an overview of the diversity and variety of models in climate science, the article focuses on quasiheuristic climate models, with an emphasis on atmospheric models.
Ole Bøssing Christensen and Erik Kjellström
The ecosystems and societies of the Baltic Sea region are quite sensitive to fluctuations in climate, and it is therefore expected that anthropogenic climate change will affect the region considerably. With numerical climate models, a large number of projections of meteorological variables affected by anthropogenic climate change have been performed for the Baltic Sea region, for periods reaching the end of this century.
Existing global and regional climate model studies suggest that:
• The future Baltic climate will be warmer, most markedly in winter. Changes increase with time or with increasing emissions of greenhouse gases. There is a large spread between different models, but they all project warming. In the northern part of the region, temperature change will be higher than the global average warming.
• Daily minimum temperatures will increase more than average temperature, particularly in winter.
• Future average precipitation amounts will be larger than today's. The relative increase is largest in winter. In summer, most simulations show increases in the far north and decreases in the south. In the intermediate region, the sign of the change is uncertain.
• Precipitation extremes are expected to increase, though with a higher degree of uncertainty in magnitude compared to projected changes in temperature extremes.
• Future changes in wind speed are highly dependent on changes in the large-scale circulation simulated by global climate models (GCMs). The results do not all agree, and it is not possible to assess whether there will be a general increase or decrease in wind speed in the future.
• Winter snow amounts will be strongly reduced; only very small high-altitude mountain areas in a few simulations are projected to experience reductions of less than 50%. The southern half of the Baltic Sea region is projected to experience significant reductions in snow amount, with median reductions of around 75%.
Florian Sévellec and Bablu Sinha
The Atlantic meridional overturning circulation (AMOC) is a large, basin-scale circulation located in the Atlantic Ocean that transports climatically important quantities of heat northward. It can be described schematically as a northward flow in the warm upper ocean and a southward return flow at depth in much colder water. The heat capacity of a layer of 2 m of seawater is equivalent to that of the entire atmosphere; therefore, ocean heat content dominates Earth’s energy storage. For this reason and because of the AMOC’s typically slow decadal variations, the AMOC regulates North Atlantic climate and contributes to the relatively mild climate of Europe. Hence, predicting AMOC variations is crucial for predicting climate variations in regions bordering the North Atlantic. Similar to weather predictions, climate predictions are based on numerical simulations of the climate system. However, providing accurate predictions on such long timescales is far from straightforward. Even in a perfect model approach, where biases between numerical models and reality are ignored, the chaotic nature of AMOC variability (i.e., high sensitivity to initial conditions) is a significant source of uncertainty, limiting its accurate prediction.
Predictability studies focus on the factors determining our ability to predict the AMOC rather than on actual predictions. To this end, processes affecting AMOC predictability can be separated into two categories: processes acting as a source of predictability (periodic harmonic oscillations, for instance) and processes acting as a source of uncertainty (small errors that grow and significantly modify the outcome of numerical simulations). To understand the former category, harmonic modes of variability or precursors of AMOC variations are identified. For the latter, in a perfect model approach, the sources of uncertainty are characterized by the spread of numerical simulations that differ only by small perturbations to their initial conditions. Two alternative and complementary frameworks have arisen to investigate this spread. The pragmatic framework performs an ensemble of simulations, imposing a randomly chosen small error on the initial conditions of each individual simulation; this allows a probabilistic approach, in which the importance of the initial conditions is characterized statistically by evaluating the spread of the ensemble. The theoretical framework uses stability analysis to identify the small perturbations to the initial conditions that are most conducive to significant disruption of the AMOC.
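As an illustration of the pragmatic framework just described, the sketch below runs a perturbed-initial-condition ensemble of the Lorenz-63 system, a standard toy model for chaotic variability (emphatically not an AMOC model; the step size, perturbation amplitude, and ensemble size are all arbitrary assumptions), and tracks how the ensemble spread grows from initially tiny differences.

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, used here only
    as a generic example of chaotic dynamics."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

# Pragmatic framework: an ensemble of simulations whose initial
# conditions differ by small random perturbations.
rng = np.random.default_rng(42)
n_members, n_steps = 20, 2000
members = np.array([1.0, 1.0, 1.0]) + 1e-6 * rng.standard_normal((n_members, 3))

spread = np.empty(n_steps)
for t in range(n_steps):
    members = np.array([lorenz63_step(m) for m in members])
    spread[t] = members.std(axis=0).mean()  # ensemble spread at this step

# The growth of the spread shows how fast initial-condition errors
# erode predictability.
print(f"spread after 100 steps: {spread[99]:.2e}; "
      f"after {n_steps} steps: {spread[-1]:.2e}")
```

The theoretical framework would instead search for the single initial perturbation that maximizes this growth, for example via singular vectors of the linearized dynamics.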
Beyond these difficulties in assessing predictability, decadal prediction systems have been developed and tested through a range of hindcasts. The inherent difficulties of operational forecasts range from developing efficient initialization methods to setting accurate radiative forcing to correcting for model drift and bias, with all of these improvements estimated and validated through a range of specifically designed skill metrics, such as the one sketched below.
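As one concrete, commonly used example of such a skill metric, the following sketch computes the anomaly correlation coefficient (ACC) between a hindcast and observations. The AMOC values are invented; real verification would use the output of an initialized prediction system.

```python
import numpy as np

def acc(hindcast, observed, climatology):
    """Anomaly correlation coefficient: correlation between forecast
    and observed anomalies relative to a climatological reference."""
    fa = hindcast - climatology
    oa = observed - climatology
    return np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2))

# Invented annual-mean AMOC strengths (in Sv) for a 10-year hindcast.
rng = np.random.default_rng(7)
observed = rng.normal(17.0, 1.0, size=10)
hindcast = observed + rng.normal(0.0, 0.8, size=10)  # imperfect hindcast

print(f"ACC = {acc(hindcast, observed, observed.mean()):.2f}")
```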
Sharon E. Nicholson
This article provides an in-depth look at all aspects of the climate of the Sahel, including the pervasive dust in the Sahelian atmosphere. Emphasis is on two aspects: the West African monsoon and the region's rainfall regime. This includes an overview of the prevailing atmospheric circulation at the surface and aloft and the relationship between this circulation and the rainfall regime. Aspects of the rainfall regime that are considered include its unique characteristics, its changes over time, the storm systems that produce rainfall, and the factors governing its variability on interannual and decadal time scales. Variability is examined on three time scales: millennial (as seen in the paleo records of the last 20,000 years), multi-decadal (over the last few centuries, as seen from proxy data and, more recently, in observations), and interannual to decadal (quantified by observations from the late 19th century onward). A unique feature of Sahel climate is that its rainfall regime is perhaps the most sensitive in the world, and this sensitivity is apparent on all of these time scales.
Christopher K. Wikle
The climate system consists of interactions between physical, biological, chemical, and human processes across a wide range of spatial and temporal scales. Characterizing the behavior of components of this system is crucial for scientists and decision makers. There is substantial uncertainty associated with observations of this system as well as our understanding of various system components and their interaction. Thus, inference and prediction in climate science should accommodate uncertainty in order to facilitate the decision-making process. Statistical science is designed to provide the tools to perform inference and prediction in the presence of uncertainty. In particular, the field of spatial statistics considers inference and prediction for uncertain processes that exhibit dependence in space and/or time. Traditionally, this is done descriptively through the characterization of the first two moments of the process, one expressing the mean structure and one accounting for dependence through covariability.
Historically, there are three primary areas of methodological development in spatial statistics: geostatistics, which considers processes that vary continuously over space; areal or lattice processes, which are defined on a countable discrete domain (e.g., political units); and spatial point patterns (or point processes), which treat the locations of events in space as a random process. All of these methods have been used in the climate sciences, but the most prominent has been the geostatistical methodology. This methodology was discovered simultaneously in geology and in meteorology; it provides a way to do optimal prediction (interpolation) in space and can facilitate parameter inference for spatial data. These methods rely strongly on Gaussian process theory, which is increasingly of interest in machine learning. They are common in the spatial statistics literature, but much development is still being done in the area to accommodate more complex processes and “big data” applications. Newer approaches are based on restricting models to neighbor-based representations or on reformulating the random spatial process in terms of a basis expansion. These approaches offer many computational and flexibility advantages, depending on the specific implementation. Complexity is also increasingly being accommodated through the use of the hierarchical modeling paradigm, which provides a probabilistically consistent way to decompose the data, process, and parameters corresponding to the spatial or spatio-temporal process.
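As a minimal sketch of the geostatistical optimal prediction (kriging) just described, the code below interpolates noisy one-dimensional observations under an assumed squared-exponential covariance. The locations, noise level, and covariance parameters are all illustrative choices; a real analysis would estimate them from the data.

```python
import numpy as np

def sq_exp_cov(d, variance=1.0, length_scale=0.5):
    """Squared-exponential covariance as a function of distance, one
    common (assumed) choice of spatial covariance model."""
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

# Hypothetical zero-mean observations at scattered 1-D locations.
rng = np.random.default_rng(1)
obs_loc = rng.uniform(0.0, 5.0, size=15)
obs_val = np.sin(obs_loc) + 0.1 * rng.standard_normal(15)
noise_var = 0.1 ** 2

# Simple kriging coincides with Gaussian-process prediction: the
# optimal predictor at new locations is K_*^T (K + tau^2 I)^{-1} y.
K = sq_exp_cov(np.abs(obs_loc[:, None] - obs_loc[None, :]))
K += noise_var * np.eye(len(obs_loc))
new_loc = np.linspace(0.0, 5.0, 50)
K_star = sq_exp_cov(np.abs(obs_loc[:, None] - new_loc[None, :]))

pred_mean = K_star.T @ np.linalg.solve(K, obs_val)
# The kriging variance quantifies the interpolation uncertainty.
pred_var = sq_exp_cov(0.0) - np.einsum("ij,ij->j", K_star,
                                       np.linalg.solve(K, K_star))

print(pred_mean[:3], pred_var[:3])
```

The neighbor-based and basis-expansion approaches mentioned above exist precisely because the dense linear solve in this sketch scales cubically with the number of observations.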
Perhaps the biggest challenge in modern applications of spatial and spatio-temporal statistics is to develop methods that are flexible, can account for the complex dependencies between and across processes, account for uncertainty in all aspects of the problem, and remain computationally tractable. These are daunting challenges, yet this is a very active area of research, and new solutions are constantly being developed. New methods are also being rapidly developed in the machine learning community, and these methods are increasingly applicable to dependent processes. The interaction and cross-fertilization between the machine learning and spatial statistics communities is growing, which will likely lead to a new generation of spatial statistical methods that are applicable to climate science.
Post-glacial aquatic ecosystems in Eurasia and North America, such as the Baltic Sea, evolved in the freshwater, brackish, and marine environments that fringed the melting glaciers. Warming of the climate initiated sea level rise and land uplift and subsequent changes in aquatic ecosystems. Seminal ideas on these ancient developing ecosystems were based on findings in large Swedish lakes of species that had arrived there from adjacent glacial freshwater or marine environments and established populations which have survived up to the present day. The ecosystem of the first freshwater stage, the Baltic Ice Lake, initially consisted of ice-associated biota. Subsequent aquatic environments, the Yoldia Sea, the Ancylus Lake, the Litorina Sea, and the Mya Sea, are all named after mollusc fossils. These often convey information on the geologic period in question and indicate some physical and chemical characteristics of their environment. The ecosystems of the various Baltic Sea stages are regulated primarily by temperature and freshwater runoff (which affects, directly and indirectly, both salinity and nutrient concentrations). Key ecological environmental factors, such as temperature, salinity, and nutrient levels, not only change seasonally but are also subject to long-term changes (due to astronomical factors) and shorter disturbances, for example, the warm period that essentially formed the Yoldia Sea and, more recently, the “Little Ice Age” (which terminated the Viking settlement in Iceland).
There is no direct way to study the past stages of the Baltic Sea, but findings in geological samples of ecological keystone species (which may form a physical environment for other species to dwell in and/or largely determine the function of an ecosystem) can indicate ancient large-scale ecosystem features and changes. Such changes have included, for example, the development from initially turbid glacial meltwater to clearer water with increasing primary production (enhanced also by warmer temperatures), eventually leading to self-shading and other consequences of anthropogenic eutrophication (nutrient-rich conditions). Furthermore, the development in the last century from oligotrophic (nutrient-poor) to eutrophic conditions also included shifts between the grazing chain (which includes large predators, e.g., piscivorous fish, mammals, and birds at the top of the food chain) and the microbial loop (with filtering top predators such as jellyfish). Another large-scale change has been a succession from low (freshwater glacial lake) biodiversity to increased (brackish and marine) biodiversity. The present-day Baltic Sea ecosystem is a direct descendant of the more marine Litorina Sea, which marks the beginning of the transition from a primeval ecosystem to one regulated by humans. The recent Baltic Sea is characterized by high concentrations of pollutants and nutrients, a shift from perennial to annual macrophytes (and more rapid nutrient cycling), and an increasing rate of invasion by non-native species. Thus, an increasing pace of anthropogenic ecological change has been a prominent trend in the Baltic Sea ecosystem since the Ancylus Lake.
Future development depends first and foremost on regional factors, such as salinity, which is regulated by sea- and land-level changes and by the climate, and runoff, which controls both salinity and the leaching of nutrients to the sea. However, uncertainties abound, for example regarding the future development of the Gulf Stream and its associated westerly winds, which support the sub-boreal ecosystems, both terrestrial and aquatic, in the Baltic Sea area. Thus, extensive, sophisticated, cross-disciplinary modeling is needed to foresee whether the Baltic Sea will develop toward a freshwater or a marine ecosystem, set in a sub-boreal, boreal, or arctic climate.