Climate Models and the National Assessment

Executive Summary

The U.S. National Assessment of the Potential Consequences of Climate Variability and Change for the Nation intends to “provide a detailed understanding of the consequences of climate change for the nation.” This report argues that the National Assessment will not be able to provide policymakers and the public with useful information on climate change because of its reliance on flawed computer climate models. These models, which are intended to describe climate only on a very large scale, are currently used by the National Assessment to describe possible scenarios of regional climate change in the U.S. Because current models cannot accurately represent the existing climate without manipulation, they are unlikely to render reliable global climate scenarios or provide useful forecasts of future climate changes in regions of the United States as small as the Midwest, West or South.

The Guide explains how General Circulation Models (GCMs) describe changes in the complex factors that make up our climate, such as atmospheric changes, interaction of the land, sea, and air, and the role of clouds in climate. The strengths and weaknesses of climate models are discussed and the report shows how researchers attempt to answer the important questions about global warming as they refine their use of GCMs.

The two climate models used in the U.S. National Assessment are then described with reference to their similarities and differences. The limitations of these models – the Canadian Global Coupled Model and the Hadley Climate Model from Great Britain – are outlined with special emphasis on their inability to provide useful regional scenarios of climate change. The report concludes with an analysis of how well these two models reproduce the present-day climate as a benchmark for their ability to reproduce future climate.

Key findings in this report include:

  • The utility of current GCMs is limited by our incomplete understanding of the climate system and by our ability to transform this incomplete understanding into mathematical representations. It is common practice to “tune” GCMs to make them represent current conditions more accurately, but the need for this manipulation casts serious doubt on their ability to predict future conditions. Because all factors are interconnected in climate modeling, an error in one field will adversely affect the simulation of every other variable.
  • To reduce complexity and computational time, GCMs treat surfaces as uniform and average the flows of moisture and energy between the land surface and the atmosphere over large areas. But the extensive variability of the land surface and the effects that even small-scale changes can have make modeling land-surface interactions quite difficult.
  • The National Assessment itself recognized that both of the models it selected provide a more extreme climate change scenario than other available models, including models developed in the U.S.
  • Both models offer incomplete modeling of the effects of individual greenhouse gases, including water vapor and atmospheric sulfates. The CGCM1 in particular fails to model sea ice dynamics and offers a simplistic treatment of land-surface hydrology. Predicted temperature increases over various regions of the United States differ considerably between the two models, and the models’ precipitation predictions fail to correspond with observed variability and contradict each other.
  • In general, the Hadley model simulation is closer to the observed climate in the United States than the Canadian simulation, although both models depart considerably from observations. This, again, casts serious doubt on the models’ ability to simulate future climate change.

Conclusion: Given these uncertainties, using the available GCMs to assess the potential for climate change in specific regions is not likely to yield valid and consistent results. GCMs can provide possible scenarios for climate change, but at the present level of sophistication, they are not reliable enough to be used as the basis for public policy. Using GCMs to make predictions about local climate change in the United States is not legitimate.

Introduction

In 1997, the United States Global Change Research Program began the “National Assessment of the Potential Consequences of Climate Variability and Change for the Nation” to investigate and evaluate the possible consequences of climate change for the country.

The Program’s findings, in the form of The National Assessment Synthesis Report, will be published in the near future.

In order to discuss the scientific uncertainties associated with such findings, the George C. Marshall Institute has commissioned a study, “A Brief Guide to the Global Climate Change Models Used in the National Assessment.” This paper, by the noted climatologist David Legates, examines the current state of climate modeling and draws attention to weaknesses in the models used by the National Assessment. Because the Synthesis Report will affect, if not become the basis for, government policy on climate change, it is important for the American public to understand the scientific uncertainties inherent in climate modeling.

What Is a General Circulation Model (GCM)?

The word “model” usually conjures up images of a miniature replica of a real object. Model trains, automobiles, and airplanes, for example, are intended to be scale-reduced versions of the original. In science, the word “model” has a similar, but broader, meaning. Models can be physical replicas; for example, a model may be a smaller version of a larger habitat for a given animal or plant species. A model also can be a working representation of a difficult concept, such as a model of an atom. Usually, such models can be described by a set of mathematical equations rather than being a true physical replica.

General circulation models (or GCMs) are a further example of the latter definition. They are not physical reproductions of the earth and its climate system but instead are mathematical representations of the physical laws and processes that govern the climate of the earth. They are computer models – computer programs that are able to solve the complex interactions among these mathematical equations to estimate fields of air temperature, humidity, winds, precipitation, and other variables that define the earth’s climate. General circulation models are limited by our understanding of what drives, shapes and affects the climate of the earth as well as how the earth’s climate responds to a variety of external forces – in addition to being limited by the speed and capabilities of our computers.

The Concept of Space in GCMs. If we were to build a GCM, our first and fundamental decision would be the selection of the model’s concept of space – how we choose to physically describe the three dimensions of the atmosphere. Here we have two fundamental choices: the model can either be a Cartesian grid model or it can be a spectral model.

Conceptually, the Cartesian grid model is easier to understand. Consider a set of building blocks that might be toys for a young child. We could arrange the blocks in the form of a regular lattice where the face of every block is flush against another block. We could make this wall of blocks several blocks high and several blocks wide. Thus, each block in the center of the wall is adjacent to six other blocks – one above, one below, and one against each of its four horizontal faces.

In a Cartesian grid model, we extend the concept of these building blocks to represent hypothetical “blocks” of atmosphere, stacked adjacent to and on top of each other in the same manner we stacked the child’s building blocks (Figure 1). Since the earth’s surface is a sphere, however, we extend these blocks around the globe until they reach the blocks on the other end. Thus, in our climate model, every block has an adjacent partner on each of its four horizontal faces – our “wall” of blocks extends around the globe and covers the entire earth’s surface. The blocks that make up the bottom layer are in contact with the earth’s surface and can be used to describe the interactions between the atmosphere and the land surface.

[Figure 1a]

Figure 1.

Since each block has six faces, we will simply describe (mathematically) the flows of energy, mass, and other physical quantities between each one of our atmospheric boxes and the six adjacent boxes. We assume that the conditions in each box are homogeneous; temperature, humidity, and other atmospheric variables can only vary between boxes but not within a box. Each of these variables is associated with the location (both horizontally and vertically) of the center of the box.

A typical Cartesian grid model will employ a lattice of approximately 72 boxes by 90 boxes (2.5° of latitude by 4° of longitude) stacked about 15 boxes high. The more boxes that are employed, the more spatial resolution is obtained, but at the expense of increased computer time.
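
To make the arithmetic concrete, the lattice just described can be sketched in a few lines of Python. This is purely illustrative (no GCM is written this way), and the mean Earth radius of 3,959 miles is an assumption used only to convert degrees to miles:

```python
import math

# Illustrative lattice arithmetic for a typical Cartesian grid GCM:
# 2.5 degrees of latitude by 4 degrees of longitude, 15 layers high.
LAT_STEP, LON_STEP = 2.5, 4.0      # grid spacing in degrees
LEVELS = 15                        # vertical layers
EARTH_RADIUS_MI = 3959.0           # assumed mean Earth radius, miles

n_lat = int(180 / LAT_STEP)        # rows of boxes, pole to pole
n_lon = int(360 / LON_STEP)        # columns of boxes around the globe
n_boxes = n_lat * n_lon * LEVELS   # total "building blocks" of atmosphere

# North-south extent of a box is constant; east-west extent shrinks
# toward the poles with the cosine of latitude.
ns_miles = math.pi * EARTH_RADIUS_MI * LAT_STEP / 180.0

def ew_miles(lat_deg):
    """Approximate east-west width of a box centered at lat_deg."""
    return math.pi * EARTH_RADIUS_MI * LON_STEP / 180.0 * math.cos(math.radians(lat_deg))

print(n_lat, n_lon, n_boxes)                  # 72 90 97200
print(round(ns_miles), round(ew_miles(40)))   # 173 212 (miles, at 40 deg N)
```

Even this modest 72 × 90 × 15 lattice contains nearly a hundred thousand boxes, each of which must exchange energy and mass with six neighbors at every time step – which is why finer resolution is so computationally expensive.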

By contrast, the spectral model does not use the concept of “boxes” at all but relies on a framework that is harder to grasp. Imagine a tabletop covered by several sheets of paper stacked on top of one another. Each sheet represents a different atmospheric layer. Vertically, the interaction between the layers is similar to the vertical interaction between the boxes that we saw with the Cartesian grid model. However, the horizontal representation of the field is not described by interactions among boxes, but rather is presented in the form of waves. Just as ocean waves carry energy through the ocean, we can represent horizontal flows of energy and mass along each atmospheric layer using a series of waves having different amplitudes and frequencies (called spherical harmonics). Although these waves are difficult to describe, one can think of them as a series of sine and cosine curves (strictly speaking, true only in the east-west direction) that, when taken together, can be used to represent the spatial variations of any field (Figure 2).

[Figure 1b]

Figure 2. Representation of three-dimensional space in general circulation models (GCMs). Cartesian GCMs (left) use a concept similar to a series of stacked boxes, while spectral GCMs (right) use a series of waves and smoothly varying functions. Both representations, however, use the Cartesian analog (i.e., stacked boxes or stacked waves) in their representation of the vertical dimension. (Figure taken from Henderson-Sellers and McGuffie, 1987.)

For the same spatial resolution, spectral models have the advantage over Cartesian grid models in that they can more compactly describe a field. Thus, computation times are reduced. Moreover, spatial resolutions can be changed more easily with a spectral model, which allows for more flexibility and adaptability.

Some have argued that Cartesian grid GCMs are more satisfactory than their spectral counterparts for a variety of reasons, including the fact that it is possible for spectral models to violate some of the fundamental laws of physics (for example, by producing negative mass, a physical impossibility). This can occur because the use of waves (as in a spectral model) implies the field must be smoothly varying – a constraint that is often inappropriate for many atmospheric fields. Precipitation, for example, exhibits steep spatial gradients, which makes the representation of a precipitation field using smoothly varying wave patterns very difficult.
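
The smoothness constraint, and the “negative mass” problem it can create, can be seen in a toy example. The sketch below is illustrative Python only, not actual GCM code: a sharp-edged one-dimensional field (think of a band of precipitation) is approximated by a truncated series of sine and cosine waves, and the truncated series dips below zero near the sharp edges:

```python
import math

# A sharp-edged 1-D "field" sampled at 64 points around a latitude
# circle: zero everywhere except a block of ones (like a rain band).
N = 64
field = [1.0 if N // 4 <= i < N // 2 else 0.0 for i in range(N)]

def fourier_coeffs(values, n_waves):
    """Cosine/sine coefficients for wave numbers 0..n_waves."""
    n = len(values)
    coeffs = []
    for k in range(n_waves + 1):
        a = 2.0 / n * sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(values))
        b = 2.0 / n * sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(values))
        coeffs.append((a, b))
    return coeffs

def reconstruct(coeffs, n, i):
    """Evaluate the truncated wave series at sample point i."""
    total = coeffs[0][0] / 2.0                     # the mean of the field
    for k in range(1, len(coeffs)):
        a, b = coeffs[k]
        angle = 2 * math.pi * k * i / n
        total += a * math.cos(angle) + b * math.sin(angle)
    return total

coeffs = fourier_coeffs(field, 8)                  # keep only 8 waves
approx = [reconstruct(coeffs, N, i) for i in range(N)]

# The truncation smears the sharp edges and overshoots below zero near
# the jumps -- if the field were precipitation or mass, that negative
# value would be physically impossible.
print(min(approx) < 0.0)                           # True
```

The undershoot is not a coding error; it is a mathematical consequence of representing a discontinuous field with a finite set of smooth waves.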

How a GCM Describes Atmospheric Processes. Having chosen our framework for spatial representation, the next step is to describe the atmospheric processes that govern the earth’s climate. First, we must define the equations that drive atmospheric dynamics – processes that lead to atmospheric motions. We must require that the model conserve energy, since we know from the first law of thermodynamics that energy cannot be created or destroyed. Our GCM also must conserve mass. Momentum also must be conserved since an object in motion tends to remain in motion. We also use the ideal gas law, which states that the pressure of the atmosphere is proportional to both its density and temperature. There are additional equations that describe more complicated atmospheric properties that also must be conserved.
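
As one concrete instance of these governing relations, the ideal gas law can be written directly in code. The snippet below is an illustrative Python check, not drawn from any actual GCM; the specific gas constant for dry air, 287.05 J/(kg·K), is the standard textbook value:

```python
# Ideal gas law as a GCM uses it: pressure p is proportional to
# density rho times temperature T, i.e. p = rho * R * T.
R_DRY_AIR = 287.05   # specific gas constant for dry air, J/(kg K)

def air_density(pressure_pa, temperature_k):
    """Density of dry air implied by the ideal gas law."""
    return pressure_pa / (R_DRY_AIR * temperature_k)

# Standard sea-level conditions: 101325 Pa and 15 C (288.15 K).
rho = air_density(101325.0, 288.15)
print(round(rho, 3))   # 1.225 kg per cubic meter, the textbook value
```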

Next, we define the equations governing the physics of the atmosphere – processes that describe energy exchanges within the atmosphere. In GCMs, three-dimensional, time-dependent equations govern the rate of change of atmospheric variables including air temperature, moisture, horizontal winds and the height for each atmospheric layer, and surface air pressure. These equations describe, for example, the effect of vertical air motions and absorbed energy on air temperature, the rate of atmospheric pressure changes with respect to height in the atmosphere, relationships between atmospheric moisture, cloud formation and condensation/precipitation, and the interaction between clouds and the energy balance. Clouds can play a key role in the energy balance of the earth since they reflect incoming energy from the sun, but trap outgoing “heat” energy from the earth. Thus, modeling of clouds and their effects on energy and moisture is important to GCM simulations of climate change.
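
To make “time-dependent equations” concrete: each prognostic variable is marched forward in small time steps from its computed rate of change. The following is a deliberately minimal, hypothetical sketch in Python; the half-hour time step is typical of GCMs, but the constant heating rate is a made-up number chosen only for illustration:

```python
# Forward-Euler update of air temperature in a single box:
# T(t + dt) = T(t) + dt * (dT/dt), repeated step after step.
DT_SECONDS = 1800.0   # a half-hour model time step

def step_temperature(temp_k, heating_rate_k_per_s):
    """Advance temperature one time step from its rate of change."""
    return temp_k + DT_SECONDS * heating_rate_k_per_s

temp = 288.0                                  # starting temperature, K
for _ in range(48):                           # 48 half-hour steps = one day
    temp = step_temperature(temp, 2.0e-5)     # assumed constant net heating
print(round(temp, 2))                         # 289.73
```

In a real GCM the heating rate is itself recomputed every step from radiation, vertical motion, condensation, and the other processes described above – which is where the computational expense lies.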

Except for the representation of clouds, all spectral GCMs at this point are essentially the same, and so too are all Cartesian GCMs. The reason is that there really are not many ways to describe the dynamics and physics of the atmosphere within our chosen spatial framework. Where models differ substantially is with regard to their modeling of atmospheric interactions with the earth’s surface.

Modeling Surface Processes in a GCM. The critical component of most GCMs is their treatment of interactions between the atmosphere and the earth’s surface. Oceans, lakes, and other bodies of water provide substantial amounts of moisture and energy to the atmosphere. Modeling them is important since nearly three-quarters of the earth is covered by water and the ocean is a fluid in constant motion. Thus, in addition to the atmosphere, the oceans provide an important mechanism for the redistribution of energy around the earth. Their circulation must be modeled and the energy and moisture transfers between the ocean and the atmosphere must be appropriately described. In addition, the world’s oceans are saline and quite deep. Interactions between temperature and salinity (called the thermohaline circulation) are extremely important to the earth’s climate but are not well understood. Moreover, deep ocean water can store atmospheric gases and then release them at a much later time when the concentration of these gases is lower. Modeling such processes within a GCM is extremely difficult.

Sea Ice. With respect to modeling the oceans, sea ice plays an important role in shaping the earth’s climate. When air temperatures drop below freezing, the surface of the ocean may freeze, creating a barrier to energy and moisture flows between the ocean waters and the atmosphere above. In the presence of sea ice, the atmosphere is deprived of moisture and energy from the relatively warmer waters below, thus causing the atmosphere to become colder and drier and creating a positive feedback to sea ice formation. Sea ice, however, moves with the combined forces (often in different directions) of oceanic circulation and surface winds. This causes sea ice to break apart in some places, forming openings (called leads), and to pile up into hills and ridges in others. Thus, sea ice is not uniform; these interactions are not well understood, and modeling them is extremely difficult.

Surface-Atmospheric Interactions. But the biggest challenge to GCM modeling is the representation of the interactions between the atmosphere and the land surface. If you take a quick glance around, you will see that the land surface is quite varied; trees, shrubs, grasses, roads, houses, streams, etc. often coexist within a single square mile. In a Cartesian grid GCM, however, the “boxes” are often several hundred miles wide and we must assume that everything within the box is homogeneous. Spectral GCMs have similar spatial resolutions and also assume that everything, including the land surface, is smoothly varying. Thus, the varied nature of the earth’s surface makes modeling the land very difficult.

Couple that now with the fact that interactions between the land surface and the atmosphere are extremely complex. For example, plants try to conserve water and so shut down many vital functions when water supplies run low. However, each plant species behaves differently; for example, trees have deeper roots than short grasses and therefore their access to water is different. Plant use of water, even in times of ample moisture supply, differs widely among species. Snow and ice cover are dictated by air temperature and precipitation, but old snow has different characteristics than newly fallen snow. To reduce this complexity, GCMs simply try to simulate the flows of moisture and energy between the land surface and the atmosphere averaged over a large area. But given the extensive variability of the land surface and the effects that even small, sub-resolution scale changes can have – well, to say that modeling land-surface interactions is difficult would be an extreme understatement!

The GCMs of the National Assessment

Rather than discuss all possible ways in which climate models can represent various climate-shaping processes, let us focus on the two models used in the United States National Assessment – GCMs from the Canadian Centre for Climate Modelling and Analysis and the Hadley Centre for Climate Prediction and Research. Both models are well documented, and results from and specifications of both models are widely available to the scientific community. The National Assessment Synthesis Team (US National Assessment, 2000) selected climate models based on the criteria that the model must:

1. be a coupled atmosphere-ocean general circulation model that includes a comprehensive representation of the atmosphere, oceans, and land surface,
2. include the diurnal cycle of solar radiation to provide estimates of fluctuations in maximum and minimum air temperature and to represent the development of summertime convective rainfall,
3. be capable, to the best extent possible, of representing significant aspects of large-scale climate variations (e.g. El Niño/Southern Oscillation),
4. provide the highest practicable spatial and temporal resolution — about 200 miles in longitude and 175 to 300 miles in latitude — over the central United States,
5. allow for an interface with higher-resolution regional modeling studies,
6. be able to simulate the evolution of the climate from 1900 (beginning of the detailed historical record) to at least 2100 using a well-documented scenario for changes in atmospheric composition that accounts for time-dependent changes in greenhouse gas and aerosol concentrations,
7. have results that are available in time for use in the National Assessment,
8. have been developed by groups participating in the development of the Third Assessment Report of the IPCC for compatibility and the model must be well documented, and
9. allow for a wide array of results to be openly provided on the WWW.

Items (1-3) are important in that they ensure significant influences on the climate (diurnal cycle, oceans, land surface, and other processes) are included, although most models now include these features, and some of the assessments of model performance (e.g., simulation of El Niño/Southern Oscillation) are tenuous, given our limited understanding of the process. As expected, the chosen models must afford the highest spatial and temporal resolution (Item 4) and their results must be useful for regional-scale modeling applications (Item 5). For simulation purposes, the model data must come from a transient climate simulation (i.e., one that allows for changes in atmospheric constituents over time) that extends both back and forward in time about 100 years from the present (Item 6). Finally, Items (7-9) are purely administrative criteria, although virtually all modeling groups participate in the IPCC, so compatibility with the IPCC should not be an issue (Item 8). It was deemed important to include at least two models in the National Assessment to provide a more balanced presentation and to allow for a spectrum of model uncertainties and differences. Both the Canadian Centre and Hadley Centre models fit these criteria.

The Canadian Climate Centre Model

The Canadian Global Coupled Model (CGCM1), developed by the Canadian Climate Centre, is a spectral model with a spatial resolution of approximately 3.75° of latitude by 3.75° of longitude (about 260 miles by 185 miles over the United States) and ten vertical atmospheric layers. The ocean model coupled to this atmosphere has a spatial resolution of 1.8° of latitude by 1.8° of longitude (about 125 miles by 90 miles) and twenty-nine vertical layers. Given the complexity and the importance of modeling the oceans, most ocean components of GCMs require a higher spatial resolution than the atmosphere. In the oceans, we are interested in simulating the exchanges of energy and moisture between the ocean and the atmosphere, as well as simulating the redistribution of energy within the oceans. This redistribution of energy occurs both horizontally (ocean circulation) and vertically. Vertical motions also allow for heating and cooling of the deeper ocean waters and their absorption of greenhouse gases. This, of course, is immensely important in a proper simulation of the earth’s climate.

Because the ocean responds on spatial and temporal scales different from those that drive atmospheric processes, coupling an ocean model to an atmospheric GCM is a complicated task, often producing results that are completely unreasonable. To rectify such conditions, GCMs often resort to a “flux adjustment” of ocean-atmosphere interactions; that is, they force the exchanges of heat and moisture between the simulated oceans and the simulated atmosphere to meet prescribed distributions. This flux-adjustment process is used to ensure that the coupled model correctly simulates the oceanic circulation of salinity and temperature (i.e., the thermohaline circulation). The CGCM1 is flux-adjusted.
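
A deliberately oversimplified sketch of the idea, with made-up numbers (real flux adjustments vary with location and season and are diagnosed from long control runs):

```python
# Hypothetical flux adjustment: the surface heat flux the coupled
# model computes is shifted by a fixed correction so that its mean
# matches a prescribed (observed) climatology.
def flux_adjusted(simulated_flux, climatology_flux, simulated_mean):
    """Apply an additive correction diagnosed once from a control run."""
    correction = climatology_flux - simulated_mean   # frozen thereafter
    return simulated_flux + correction

# Suppose the control run's mean surface heat flux is 12 W/m^2 where
# the observed climatology says 20 W/m^2 (all values made up):
print(flux_adjusted(12.0, 20.0, 12.0))   # 20.0 -- the mean is forced to match
print(flux_adjusted(15.0, 20.0, 12.0))   # 23.0 -- anomalies ride on top
```

Because the correction is imposed rather than derived from the model’s physics, flux adjustment is an example of the kind of manipulation discussed throughout this report.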

Sea ice modeling is even more tenuous than ocean modeling, but certainly just as important. Many models incorporate both the formation and movement of sea ice (dynamics) and its inhibition of the exchange of heat and moisture between the ocean and the atmosphere (thermodynamics). In the case of the CGCM1, the thermodynamics are modeled, but sea ice dynamics are not; seasonal distributions of sea ice are prescribed to be consistent with observations.

Equally difficult is the modeling of land surface interactions – exchanges of energy and moisture between the atmosphere and the vegetation/soil surface. Land surface models can be highly simplistic, with surface color, temperature, and moisture characteristics corresponding to average conditions and variations. In such formulations, the land surface hydrology is modeled by what is termed the “bucket method”. Soil water is held in a theoretical “bucket” – water can be put into the bucket (through precipitation) and removed from the bucket (through evaporation and plant transpiration). A simple function models the rate of water removal from the bucket by plant water usage and soil evaporation. In the CGCM1, the land surface hydrology is modeled by a modified bucket method. Seasonal and diurnal fluctuations in solar energy are also included in most models used today; this is true as well for the CGCM1.
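
The bucket method lends itself to a very short sketch. The capacity and rates below are made-up illustrative numbers, and the evaporation function is the simplest possible choice (proportional to how full the bucket is), not the CGCM1’s actual formulation:

```python
# A minimal "bucket" land-surface hydrology model.
BUCKET_CAPACITY_MM = 150.0   # assumed maximum soil water storage

def bucket_step(soil_water_mm, precip_mm, potential_evap_mm):
    """One time step; returns (new soil water, runoff), all in mm."""
    # Actual evaporation scales with how full the bucket is.
    evap = potential_evap_mm * (soil_water_mm / BUCKET_CAPACITY_MM)
    water = soil_water_mm - evap + precip_mm
    runoff = max(0.0, water - BUCKET_CAPACITY_MM)   # overflow runs off
    return min(water, BUCKET_CAPACITY_MM), runoff

# A wet time step: 60 mm of rain on fairly moist soil.
water, runoff = bucket_step(100.0, 60.0, 5.0)
print(round(water, 1), round(runoff, 1))   # 150.0 6.7
```

Everything about plants, roots, and soil layers is collapsed into a single number per grid box – exactly the simplification that more elaborate multi-layer soil schemes try to relax.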

Atmospheric chemistry. In some GCMs, and in the CGCM1 in particular, atmospheric chemistry is treated crudely. Time-varying effects of individual greenhouse gases (e.g., carbon dioxide, methane, chlorofluorocarbons, nitrous oxide, and ozone) are not modeled; rather, temporal increases in a single greenhouse gas – carbon dioxide – are used as a surrogate for changes in all the greenhouse gases together. Here, the assumption is that atmospheric greenhouse gas concentrations will increase 1% (compounded) per year until 2100. In other models, the individual effect of each greenhouse gas is considered separately. In addition to greenhouse gases, changing concentrations of sulfate aerosols also are important to modeling climate change. Atmospheric sulfates, small sulfur-based particles suspended in the atmosphere originating from both human and natural sources, reflect incoming solar energy, thereby diminishing the potential global warming signal. Although the chemistry can be complex, some models attempt to simulate changes in aerosol concentrations over time and the climate impact of these changes. The CGCM1, however, simply models aerosols as a change (increase) in the reflectance of solar energy reaching the surface of the earth, without modeling the actual dynamics and properties of sulfate aerosols.
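
The 1% compounded-growth assumption has a convenient consequence worth checking: concentrations double in roughly 70 years. A quick illustrative computation:

```python
import math

# Years for greenhouse gas concentrations to double at 1% compounded
# growth per year: solve 1.01**t = 2 for t.
years_to_double = math.log(2.0) / math.log(1.01)
print(round(years_to_double, 1))   # 69.7

# The same result by compounding forward from a base value of 1.0:
conc = 1.0
for _ in range(70):
    conc *= 1.01
print(round(conc, 2))              # 2.01
```

This is why results for “doubled CO2” and for “1% per year” transient scenarios are so often discussed together.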

[Figure 2]

Figure 2: Simulations of climate change using the CGCM1 model with changes in greenhouse gas concentrations (GHG Only), greenhouse gases and atmospheric aerosols (GHG+Aerosols), and with no changes (Control). (Figure from Boer et al., 1992).

At equilibrium (when no further change in air temperature occurs), the response of the CGCM1 model to a doubling of concentrations of greenhouse gases (specifically, carbon dioxide) is an increase of 3.5°C (6.3°F) in the globally averaged air temperature (Boer et al., 1992), which occurs by about 2050 (Figure 2). By 2030, the model simulates summer increases of between 1° and 3°C (1.8° to 5.4°F) over the entire United States. Winter increases of 2° to 4°C (3.6° to 7.2°F) are modeled over western and central areas of the United States, while 0° to 2°C (0.0° to 3.6°F) changes are modeled over eastern portions. Winter precipitation increases in the west and decreases elsewhere, while summer changes are largely unpredictable (both increases and decreases are observed).

The Hadley Centre Model

The Hadley Climate Model (HadCM2), developed by the Hadley Centre for Climate Prediction and Research of the United Kingdom Meteorological Office, is a Cartesian grid model with a spatial resolution of approximately 2.5° of latitude by 3.75° of longitude (about 175 miles by 185 miles over the United States) and nineteen vertical atmospheric layers. Its coupled ocean model has the same horizontal resolution with twenty vertical layers and also is flux-adjusted. In the HadCM2, sea ice dynamics are modeled, as well as their influence on the exchange of heat and moisture between the ocean and the atmosphere.

The HadCM2 uses a more sophisticated approach to modeling land surface hydrology than that used in the CGCM1. Several soil layers are used, and the flow of moisture between these soil layers (downward percolation through the soil) is modeled. The HadCM2 model provides a more detailed and specific treatment of the plant canopy, including the area of ground covered by leaves and the response of the leaves to water stress. Both seasonal and diurnal cycles of solar energy variations are incorporated into the model.

The HadCM2 GCM applies the same modeling strategy as the CGCM1 for the treatment of atmospheric chemistry. Only temporal changes in carbon dioxide are specified. The individual effects of other greenhouse gases, such as methane, nitrous oxide, and ozone, are represented as the effect of an equivalent change in carbon dioxide. Atmospheric sulfates are modeled only as a change in the surface reflectance of solar energy (albedo); their actual dynamics and individual properties are not included. This is similar to the formulation used by the CGCM1.

[Table 1]

For a doubling of atmospheric carbon dioxide concentration, the response of the HadCM2 is an increase in the globally averaged air temperature of 2.6°C (4.7°F). Over the United States, the model simulates increases of 1° to 3°C (1.8° to 5.4°F) over the eastern third of the nation (vs. 0° to 2°C for the CGCM1 in the same region) and increases of 1° to 4°C (1.8° to 7.2°F) over the western two-thirds. Precipitation is modeled to increase in the western and eastern thirds of the nation during winter, while changes in winter precipitation in the central Great Plains and in summer precipitation everywhere are mixed (both increases and decreases are observed).

Consequences of GCM Scenarios

Limitations in climate modeling. GCMs are designed to be descriptions of the full three-dimensional structure of the earth’s climate and often are used in a variety of applications, including the investigation of the possible role of various climate forcing mechanisms and the simulation of past and future climates. However, we must remember several important issues.

First, GCMs are limited by our incomplete understanding of the climate system and how the various atmospheric, land surface, oceanic, and ice components interact with one another. In addition, GCMs are limited by our ability to transform this incomplete understanding into mathematical representations. We may have a general feel for the complex interrelationships between the atmosphere and the oceans, for example, but expressing this understanding in a set of mathematical equations for use in a computing model is much more difficult.

Second, GCMs are limited by their spatial and temporal resolutions. Computational complexity and restrictions on computing power reduce GCM simulations to coarse generalities. As a result, many small-scale features, which may have significant impact on the local, regional, or even global climate, are not represented. Thus, we must recognize that GCMs, at best, can only present a thumbnail sketch. Regional assessments over areas encompassing many GCM grid cells are the finest scale resolution that can be expected. It is inappropriate, and grossly misleading, to select results from a single grid cell and apply them locally. It cannot be over-emphasized that GCM representations of the climate can be evaluated at a spatial resolution no finer than large regional areas, seldom smaller than a region defined by a square 1,000 miles (at least several GCM grid cells) on a side. Even the use of “nested grid models” (models which take GCM output and resolve it to finer scale resolutions) does not overcome this limitation, since results from the GCM simulation drive such models and no mechanism is available to feed back the results of such finer-scale models to the GCM.

A third limitation of GCMs is that, given the restrictions in our understanding of the climate system and in available computational power, some known phenomena are simply not reproduced in climate models. Hurricanes and most other forms of severe weather (e.g., nor’easters, thunderstorms, and tornadoes) simply cannot be represented in a GCM owing to the coarse spatial resolution. Other more complex phenomena resulting from interactions among the elements that drive the climate system may be limited or even not simulated at all. Phenomena such as El Niño and La Niña, the Pacific Decadal Oscillation, and other complex interrelationships between the ocean and the atmosphere, for example, are inadequately reproduced or often completely absent in climate model simulations. Their absence indicates a fundamental flaw in our understanding of the climate system, in our mathematical representation of the processes, in the spatial and temporal limitations imposed by finite computational power, or in all three.

An assessment of the efficacy of any climate model, therefore, must focus on the ability of the model to reproduce present climate conditions. If a model cannot simulate what we know to be true, then it is unlikely that model prognostications of climate change are believable. However, a word of caution is warranted. It is common practice to “tune” climate models so that they better resemble present conditions. This practice is widely accepted, because many parameters in GCMs cannot be specified directly and their values must be determined through empirical trial-and-error. However, it raises the concern that a GCM may adequately simulate the present climate, not because the model correctly represents the processes that drive the earth’s climate, but rather because it has been tuned to do so. Thus, the model may appear to provide a good simulation of the earth’s climate, when in fact it may poorly simulate climate change mechanisms. In other words, a GCM may provide an adequate simulation of present-day climate conditions, but it does so for the wrong reasons. Model efficacy in simulating present-day conditions, therefore, is no guarantee that model-derived climate change scenarios will be reasonable.

To address this concern, modelers often employ simulations of past climates, such as the Holocene or the Pleistocene, to see if the model produces the kind of climate that we can infer existed during such epochs. Of course, our knowledge of prehistoric climate conditions is tenuous and extremely crude, which limits the utility of such evaluations.

A final limitation in climate modeling is that in the climate system, everything is interconnected. In short, anything you do wrong in a climate model will adversely affect the simulation of every other variable. Take precipitation, for example. Precipitation requires moisture in the atmosphere and a mechanism to cause that moisture to condense (air rising over mountains, surface heating, weather fronts, or cyclonic rotation). Any errors in representing the atmospheric moisture content or precipitation-causing mechanisms will result in errors in the simulation of precipitation. Thus, GCM simulations of precipitation will be affected by limitations in the representation and simulation of topography, since mountains force air to rise and condense to produce orographic (mountain-induced) precipitation (e.g., the coastal mountain ranges of Washington and Oregon). Incorrect simulations of air temperature also will adversely affect the simulation of precipitation, since the ability of the atmosphere to store moisture is directly related to its temperature. If winds, air pressure, and atmospheric circulation are inadequately represented, then precipitation will be adversely affected, since the atmospheric flow of moisture that may condense into precipitation will be incorrect. Plant transpiration and soil evaporation also provide moisture for precipitation; therefore, errors in the simulation of soil moisture conditions will adversely affect the simulation of precipitation. Simulation of clouds influences the amount of solar energy reaching the ground, which, in turn, affects estimates of surface heating and consequently the simulation of precipitation. Even problems in specifying oceanic circulation or sea ice concentrations will affect weather patterns, which affect precipitation simulations. In sum, the simulation of precipitation is adversely affected by inaccuracies in the simulation of virtually every other climate variable.

Inaccuracies in simulating precipitation, in turn, will adversely affect the simulation of virtually every other climate variable. Condensation releases heat to the atmosphere and forms clouds, which reflect energy from the sun and trap heat from the earth’s surface – both of which affect the simulation of air temperature. As a result, this can affect the simulation of winds, air pressure, and atmospheric circulation. Since winds drive the circulation of the upper layers of the ocean, the simulation of ocean circulation also is affected. Air temperature conditions also contribute to the model simulation of sea ice formation, which would be adversely affected. For the purposes of climate modeling, precipitation is the only source of soil moisture; hence, inadequate simulations of precipitation will adversely affect soil moisture conditions and land surface hydrology. Vegetation also responds to precipitation availability so that the entire representation of the biosphere can be adversely affected.

Clearly, the interrelationships among the various components that comprise the climate system make climate modeling difficult. Keep in mind, however, that it is not just the long-term average and seasonal variations that are of interest. Demonstrating that precipitation is highest over the tropical rainforests and lowest in the subtropical deserts is not enough. Climate change is likely to manifest itself in small regional fluctuations. Moreover, we also are interested in inter-annual (year-to-year) variability. A GCM that simulates essentially the same conditions year after year clearly is missing an important component of the earth’s climate. Thus, the evaluation of climate change prognostications using GCMs must be made in light of the model’s ability to represent the climate and its variability. Interestingly, the National Assessment admits, “results suggest that the GCMs likely do not adequately include all of the feedback processes that may be important in determining the long-term climate” (United States National Assessment, 2000:23).

It should be noted that GCMs are not weather prediction models. Their utility is not in predicting, for example, whether it will rain in southern England on the morning of July 14, 2087. Rather, we are interested in determining whether the probability of precipitation will be substantially different from what it is today – in both the frequency and intensity of precipitation events. In general, we want to know whether the summer of 2055 is likely to be warmer or colder than present conditions, and by how much. As such, GCMs are only used appropriately to address the likelihood of changes over large spatial and temporal scales – assessing changes for specific dates or locations is beyond the scope of GCM utility.

How the National Assessment employs models. In the United States National Assessment, three approaches are used to determine the anthropogenic effects of climate change. The first approach is to examine the historical record, back to the late 1800s, to look for trends or changes that might possibly be linked to human sources. Unfortunately, the climate record reflects not just changes linked to anthropogenic activities, but a whole host of fluctuations caused by natural sources, as well as uncertainties induced by changes in instrumentation, the station network and its environment, etc. The second approach is to use “sensitivity/vulnerability analysis” to address the degree of change required to cause significant impacts in areas of critical human concern and the probability that such changes will occur. Such speculations are based, in large part, on the results of analysis from both the historical record and model prognostications.

The third approach used in the National Assessment is the one that we focus on here – the use of climate models (GCMs in particular) to assess the potential for anthropogenic climate change on a regional scale. While GCMs provide quantitative assessments of such changes (i.e., they assign numerical values to changes and their probabilities), the limitations discussed above lead to skepticism regarding such assessments. In particular, we need to pay close attention to the uncertainties or “error bars” associated with the numbers generated by the models. Indeed, the Draft of Chapter 1 of the National Assessment indicates that GCMs are not perfect predictors of future climates, but argues that they “can be used to provide important and useful information about potential long-term climate changes over periods of up to a few centuries on hemispheric scales and across the [United States], but care must be taken in interpreting regionally specific and short-term aspects of the model simulations” (US National Assessment, 2000:23). The National Assessment goes on to highlight all of the caveats associated with the use of model projections, yet model results are nevertheless presented at high resolution and without any assessment of uncertainties. This presentation implies that the results are error-free, allowing the conclusions and further applications drawn from these model results elsewhere in the National Assessment to ignore those caveats and concerns.

In the National Assessment, as well as in most modeling applications, GCM estimates of climate change scenarios are developed by taking the difference between the model-simulated change and the model representation of the present climate conditions. For example, if the model simulated a present climate of 10°C (50°F) that was to change to 15°C (59°F) under a given climate change scenario, then the climate change prognostication would be for an increase of 5°C (9°F). For precipitation, the rate is computed as a percentage, not as a difference; thus, if for the present climate, we have a precipitation rate of 4 mm per day that changes to 6 mm per day under climate change, the climate change prognostication would be for an increase in precipitation of 50%. Note that the observed values are not used – thus, it is important that the model be compared to the observations to determine how reasonable these changes might be.
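The arithmetic described above can be sketched as follows – a minimal illustration using the hypothetical values from the text (the function names are ours, not from any modeling package):

```python
def temperature_change(model_present_c, model_future_c):
    """Absolute temperature change (deg C): model-future minus model-present."""
    return model_future_c - model_present_c

def precipitation_change_pct(model_present_mm_day, model_future_mm_day):
    """Relative precipitation change, as a percentage of the model-present rate."""
    return 100.0 * (model_future_mm_day - model_present_mm_day) / model_present_mm_day

# The examples from the text: 10 C -> 15 C gives +5 C; 4 mm/day -> 6 mm/day gives +50%.
print(temperature_change(10.0, 15.0))       # 5.0
print(precipitation_change_pct(4.0, 6.0))   # 50.0
```

Note that both functions compare the model to itself, never to observations – which is precisely why the model must first be validated against the observed climate.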

Limitations in interpreting results from the models used in the National Assessment. It is laudable that the National Assessment considered more than a single model. It also was significant that the two models are of different types – one a spectral GCM and the other a Cartesian grid GCM. As previously discussed, and as pointed out in Chapter 1 of the National Assessment, interpretation of the results from these two models must be done with care, owing to the inherent limitations noted above in applying the results from GCM simulations. Even if we accept this caveat, however, the choice of the two models recommended for use in the National Assessment, namely, the Canadian Climate Centre (CGCM1) and Hadley Centre (HadCM2) models, is rather odd. It is widely recognized, and even mentioned by the National Assessment, that the CGCM1 provides a more extreme climate change scenario than other models that were considered but not used. To a large extent, this same criticism holds for the HadCM2 as well. It is also puzzling that neither of the two selected models was developed by a group within the United States, especially when viable alternatives exist (e.g., models from the National Center for Atmospheric Research or the Geophysical Fluid Dynamics Laboratory).

In part, the extreme scenarios developed by these two models result from the use of overly simplistic formulations of key model components. For example, the CGCM1 has the simplest treatment of land surface hydrology of all models considered (five other models were considered but not recommended); namely, a bucket model for soil moisture. Other models use a soil layer model with an explicit treatment of vegetation interactions. Bucket models overly simplify and grossly bias the representation of the hydrological cycle (the flow of water from the oceans to the atmosphere to the land and back again). Since precipitation, soil evaporation, and plant transpiration are components of not only the water balance, but the energy balance as well, such simplistic treatments greatly undermine the ability of the model to represent the climate. It is surprising that the National Assessment used a model employing such a simplistic treatment of land surface hydrology, particularly in light of the fact that better alternatives exist.

With respect to sea ice models, the CGCM1 has the most simplistic treatment of all the models considered – it lacks a dynamic component that other models possess. Although sea ice modeling is very difficult, a proper sea ice model is important to simulate the fluxes of energy and moisture between the atmosphere and the ocean at high latitudes. Since virtually all models indicate that the greatest air temperature response to greenhouse gas forcing will occur in the high latitudes, selection of a model that incorporates an inferior sea ice component can lead to major errors and is extremely puzzling. In particular, it is likely to overemphasize the effect of high latitude warming, which, in part, may be a major reason why prognostications of the CGCM1 are on the extreme side.

Furthermore, neither the CGCM1 nor the HadCM2 treats all greenhouse gases independently (their effect is lumped into an “effective” CO2 surrogate), and both include the effect of atmospheric aerosols only as a change in the surface reflectance of solar energy. Given the potential importance of sulfur masking/mitigation of the anthropogenic greenhouse gas change signal and the fact that concentrations of methane, an important greenhouse gas, are decreasing, this overly simplistic treatment may overstate the effect of an important component of the anthropogenic global warming issue.

In considering the effect of greenhouse gases, it must be remembered that the most important greenhouse gas is not carbon dioxide, but water vapor. As we saw earlier, treatment of the oceans and, in particular, the land surface hydrology plays an important role in determining correct levels of atmospheric humidity. Inaccuracies in simulating precipitation also adversely affect atmospheric concentrations of water vapor. Couple this with the fact that the two models tend to provide estimates of surface air temperatures that are several degrees too cold. Since the amount of water vapor the air can hold at a relative humidity of 100% (saturated conditions) increases exponentially with increasing air temperature, the atmospheric moisture content is likely to be underestimated by a cold model. Water vapor also has a relatively high specific heat – it takes more energy to raise the temperature of moist air than of dry air. Dry air is easier to warm; hence it is easier to achieve warming in a model that starts out with less water vapor in its atmosphere. Furthermore, it takes energy to evaporate water – energy that in a drier atmosphere would instead contribute to additional warming.
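The exponential relationship between air temperature and saturation moisture content can be illustrated with the standard Magnus (Bolton) approximation – a sketch for intuition only, not code from either GCM:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Approximate saturation vapor pressure (hPa) via the Magnus/Bolton formula."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Saturation vapor pressure roughly doubles for every ~10 C of warming, so a
# model with a cold bias of several degrees holds substantially less moisture.
for t in (0.0, 10.0, 20.0, 30.0):
    print(t, round(saturation_vapor_pressure_hpa(t), 1))
```

The near-doubling per 10°C is why even a modest cold bias translates into a large underestimate of atmospheric water vapor.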

In an evaluation of the inter-annual variability in climate models, Soden (2000) compared observations of precipitation variability with several GCMs, including those used in the National Assessment (Figure 3). He concluded, “Not only do the GCMs differ with respect to the observations, but the models also lack coherence among themselves – even the extreme models exhibit markedly less precipitation variability than observed.” Virtually no climate model adequately resolves the inter-annual climate variability.


Figure 3: Precipitation rate in mm day-1 as observed (thick solid line) and as simulated by an ensemble of GCMs (thin solid line). Vertical lines on the GCM ensemble show the inter-annual variability about the ensemble mean. (from Soden, 2000)

Comparison of Models with Observations in the U.S. Earlier it was mentioned that it is important to evaluate the efficacy of the GCMs with respect to their ability to reproduce the present-day climate. Doherty and Mearns (1999) have provided a comparison of historical simulations of the two models used in the National Assessment against observations. In general, they conclude that both models have significant problems in their representation of topography – the western United States is represented simply as one large hill beginning at sea level along the West coast and descending into the Great Plains. This problem manifests itself in cold and wet biases over the Rocky Mountains. When these problems with topography are coupled with the high spatial variability and the coarse spatial resolution of the models, results of climate change scenarios for detailed regions in the western United States are, in their words, “highly questionable”. In general, the HadCM2 simulation is closer to the observed climate than that of the CGCM1, although both models exhibit considerable differences from the observations. They conclude, “researchers should exercise extreme caution in the conclusions they draw from impacts analysis using the output from these climate models, given the uncertainty of the model results, especially on a regional scale.”

With regard to air temperature, Doherty and Mearns (1999) mapped the differences between the model mean climatology and an air temperature climatology developed by Legates and Willmott (1990b). In addition to the overall cold bias of both models, Doherty and Mearns found that air temperatures over the northern United States and Canada differ from the observations by as much as 12°C (21.6°F)! Topographically induced underestimates in air temperature are obvious in both models over the Rocky Mountains. In the central Plains, both models overestimate air temperature by up to 6°C (10.8°F) in summer, which is likely to overestimate summer drying, leading to an overestimate of drought frequency. Overall, both models exhibit similar patterns of biases in air temperature with warmer-than-observed conditions in winter and autumn in the northern United States and colder-than-observed conditions in the western United States in all seasons. Both models make the central United States too warm in summer and autumn.

Precipitation is difficult to simulate in a GCM, owing to the interrelationships among other climate variables noted earlier. In addition, precipitation mechanisms occur at scales well below the spatial and temporal resolution of most GCMs, the precipitation-forming process is not fully understood, and numerical instabilities may arise with small amounts of moisture. Doherty and Mearns (1999) also mapped differences between the model mean climatology and a precipitation climatology developed by Legates and Willmott (1990a). As with air temperature, considerable overestimates exist over the Rocky Mountains in both models as a direct result of their inadequate representation of topography – differences of as much as 6 mm day-1 (7.1 inches per month) are observed in parts of the Rocky Mountains. Note that this is twice the mean monthly precipitation in some areas! Overestimates also are observed in the northeastern United States in spring and summer by as much as 3 mm day-1 (3.5 inches per month), while precipitation in the southeastern United States and lower Mississippi River Basin during winter and summer is underestimated by as much as 3 mm day-1 (3.5 inches per month). Both models exhibit similar patterns of biases, although the regions of bias tend to be somewhat smaller in the HadCM2.
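The unit conversions quoted above follow from simple arithmetic, sketched here assuming a nominal 30-day month (matching the report's figures):

```python
MM_PER_INCH = 25.4
DAYS_PER_MONTH = 30.0  # nominal month length used in the conversions above

def mm_per_day_to_in_per_month(rate_mm_day):
    """Convert a precipitation rate from mm/day to inches per nominal 30-day month."""
    return rate_mm_day * DAYS_PER_MONTH / MM_PER_INCH

print(round(mm_per_day_to_in_per_month(6.0), 1))  # 7.1
print(round(mm_per_day_to_in_per_month(3.0), 1))  # 3.5
```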

One conclusion of the National Assessment is that precipitation and storminess will increase – the so-called “enhanced hydrological cycle”. The ramifications are obvious; more floods and droughts will increase the potential for loss of life and property and increase our uncertainty about the future. However, is this a correct conclusion? Karl and Knight (1998) noted, “Variability in much of the Northern Hemisphere’s mid-latitudes has decreased as the climate has become warmer. Some computer models also project decreases in variability.” This seems to be in direct opposition to the claims of both the Intergovernmental Panel on Climate Change (IPCC) and the National Assessment. Hayden (1999), in a paper written for and presented at a national conference to discuss the content of the National Assessment (and later published in a refereed journal), indicated that the observations show “there has been no trend in North America-wide storminess or in storm frequency variability found in the record of storm tracks for the period 1885-1996. It is not possible, at this time, to attribute the large regional changes in storm climate to elevated atmospheric carbon dioxide.” With regard to the model projections, he states, “[Model] projections of North American storminess shows no sensitivity to elevated carbon dioxide. It would appear that statements about storminess based on [model] output statistics are unwarranted at this time. (…) It should also be clear that little can or should be said about change in variability of storminess in future, carbon dioxide-enriched years.” Sinclair and Watterson (1999) go on to conclude that for areas such as the United States, “doubled CO2 leads to a marked decrease in the occurrence of intense storms”. Both in general and in particular, GCMs do not exhibit an enhancement of the hydrologic cycle; nevertheless, the National Assessment ignores this fact.

Conclusion

In light of our discussion, climate models should be thought of as useful tools to assess our understanding of the climate system and to examine interrelationships among various components of the climate system. At present, and at least into the foreseeable future, the uncertainties associated with model simulations make their projections only a single possible scenario, at best. Historically, assessments of climate change have steadily become less extreme as more climate feedback mechanisms are included in the models. Overall, it appears that anthropogenic climate change estimates are still uncertain (given the discrepancies between most models) and scenarios derived from still incomplete GCMs should not be used to assess future climate change or make national assessments.

Bibliography

Boer, G.J., McFarlane, N.A., and Lazare, M. (1992): Greenhouse gas-induced climate change simulated with the CCC second-generation general circulation model. Journal of Climate, 5:1045-1077.

Doherty, R., and Mearns, L.O. (1999): A comparison of simulations of current climate from two coupled atmosphere-ocean global climate models against observations and evaluation of their future climates. Report in Support of the National Assessment.

Hayden, B.P. (1999): Climate change and extratropical storminess in the United States: An Assessment. Journal of the American Water Resources Association, 35(6):1387-1398.

Henderson-Sellers, A., and McGuffie, K. (1987): A Climate Modeling Primer. John Wiley & Sons, New York, 217pp.

Karl, T.R. and Knight, R.W. (1998): Secular trends of precipitation amount, frequency and intensity in the United States. Bulletin of the American Meteorological Society, 79(2):231-241.

Legates, D.R., and Willmott, C.J. (1990a): Mean seasonal and spatial variability in gauge-corrected, global precipitation. International Journal of Climatology, 10(2):111-127.

Legates, D.R., and Willmott, C.J. (1990b): Mean seasonal and spatial variability in global surface air temperature. Theoretical and Applied Climatology, 41(1):11-21.

Sinclair, M.R., and Watterson, I.G. (1999): Objective assessment of extratropical weather systems in simulated climates. Journal of Climate, 12(12):3467-3485.

Soden, B.J. (2000): The sensitivity of the tropical hydrological cycle to ENSO. Journal of Climate, 13(3):538-549.

United States National Assessment (2000): Chapter 1 — Scenarios for Climate Variability and Change. National Assessment Synthesis Team Document, Washington, DC, Draft Report Version.