[Editor’s note: This item is a little longer and more technical than most we carry, but reading it carefully can provide a good education in the science of the climate debate. –ECB]
For more than 100 years, climate scientists have fully understood that if all else were held constant, an increase in the atmospheric concentration of carbon dioxide (CO2) would lead to an increase in near-surface air temperature. The problem becomes a lot more complicated in the real world when we consider that “all else” cannot be held constant and there are a lot more changes occurring at any one time than just the concentration of CO2. Once the temperature of the Earth starts inching upward, changes immediately occur to atmospheric moisture levels, cloud patterns, surface properties, and on and on. Some of these changes, like the additional moisture, amplify the warming and represent positive feedback mechanisms. Other consequences, like the development of more low clouds, would act to retard or even reverse the warming and represent negative feedbacks. Getting all the feedbacks correct is critical to predicting future conditions, and these feedbacks are simulated numerically in global climate general circulation models (GCMs). Herein lies a central component of the great debate: some GCMs predict relatively little warming for a doubling of CO2, and others predict substantial warming for the same change in atmospheric composition.
If that is not enough, changes in CO2 in the real world would almost certainly be associated with other changes in the atmosphere – sulfur dioxide, mineral aerosols (dust), ozone, black carbon, and who knows what else would vary through time and complicate the “all else held constant” picture. By the way, the Sun varies its output as well. And when discussing climate change over the next century, even more uncertainties come from estimations of economic growth, adoption of various energy alternatives, human population growth, land use changes, and … you get the message.
However, the fundamental question in the greenhouse debate still comes down largely to a question of climate sensitivity, defined as the change in global temperature per unit change in radiative forcing associated with varying levels of atmospheric CO2. The United Nations Intergovernmental Panel on Climate Change (IPCC) suggests that the sensitivity is between 0.48 and 1.40 degrees Kelvin (K) per one Watt per square meter (Wm-2), which translates into a global warming of 2.0 K to 4.5 K for a doubling of CO2 concentration (a change of 1 K equals one degree Celsius, which equals 1.8 degrees Fahrenheit). Rather than turn this into a review of a physics course, what we have is the IPCC predicting global warming of 3.6°F to 8.1°F for a doubling of CO2 concentration. Others have shown in very credible professional journals that there is only about a 66% chance of the IPCC being right in this estimate – this provides the fodder for alarmists to suggest that the IPCC acknowledges the possibility of a global warm-up of 10°F for a doubling of CO2.
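The arithmetic behind these numbers is simple enough to check yourself. Warming for a CO2 doubling is just the sensitivity (K per Wm-2) multiplied by the radiative forcing of the doubling, for which the IPCC uses 3.7 Wm-2, and a temperature change in K converts to Fahrenheit by a factor of 1.8. The sketch below is our own back-of-the-envelope check, not a calculation from the IPCC report:

```python
# Back-of-the-envelope check of the climate-sensitivity arithmetic.
# Assumes the standard IPCC value of 3.7 W/m^2 of radiative forcing
# for a doubling of atmospheric CO2.

FORCING_2XCO2 = 3.7  # W/m^2, forcing for doubled CO2 (IPCC 2007)

def warming_k(sensitivity):
    """Warming (K) for a CO2 doubling, given sensitivity in K per (W/m^2)."""
    return sensitivity * FORCING_2XCO2

def k_to_f(delta_k):
    """Convert a temperature *change* in K (= degrees C) to degrees F."""
    return 1.8 * delta_k

# The IPCC's 2.0-4.5 K warming range corresponds to sensitivities of
# roughly 2.0/3.7 = 0.54 and 4.5/3.7 = 1.22 K per (W/m^2).
for delta_t in (2.0, 4.5):
    sens = delta_t / FORCING_2XCO2
    print(f"sensitivity {sens:.2f} K/(W/m^2) -> "
          f"{warming_k(sens):.1f} K -> {k_to_f(warming_k(sens)):.1f} F")
```

Running it reproduces the 2.0 K and 4.5 K bounds and their Fahrenheit equivalents of 3.6°F and 8.1°F.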
To say the least, these numbers are hotly debated in the climate community. A recent article in Geophysical Research Letters presents an interesting approach to pinning down the critical sensitivity value (K/Wm-2) for elevated levels of CO2. The article is by Petr Chylek and Ulrike Lohmann of New Mexico’s Los Alamos National Laboratory and Switzerland’s Institute for Atmospheric and Climate Science; funding was provided by the Los Alamos Laboratory. The team decided to re-examine the temperature, CO2, methane, and dust record from the Vostok ice core extracted from a site in Antarctica. Although the core record goes back nearly a half million years, Chylek and Lohmann elected to restrict their primary analysis to the past 42,000 years.
As seen in Figure 1, the core reveals that we clearly escaped from an ice age around 15,000 years ago as we moved into the modern, relatively warm Holocene period, but the core also shows that the Earth experienced a cooling from 42,000 years ago to the Last Glacial Maximum (LGM). They recognize that the Vostok data represent Antarctic conditions, not true global conditions, and they used a variety of scenarios to estimate global conditions from what was observed in Antarctica. To make a long story short, the authors used the cooling from 42,000 years ago to the LGM and the warming from 15,000 years ago to the near present to estimate the climate sensitivity parameter.
By combining temperatures, carbon dioxide concentrations, methane concentrations, and, importantly, dust amounts determined from the ice core during the past 42,000 years, the authors were able to derive the climate sensitivity from the combined variations of these factors. One of their largest uncertainties surrounded the dust amounts, and so Chylek and Lohmann turned to a climate model to see if changes in atmospheric dustiness could have an effect on global temperatures (and thus climate sensitivity) of the magnitude that they had determined empirically. The modeled results were consistent with their empirical estimates, giving them added confidence in the result.
The reason they were looking for independent confirmation was that their findings for climate sensitivity were near the low end of the bounding range given by the IPCC—and that means they are going to be subject to an endless amount of scrutiny from those folks who want potential global warming to seem as bad as possible.
Here are the concluding paragraphs of the Chylek and Lohmann paper:
We have shown that the ice core data from the warm period (around 42 KYBP) to the LGM and from the LGM to Holocene transition can be used to constrain the dust aerosol radiative forcing during these transitions. We find the dust radiative forcing to be 3.3 ± 0.8 W/m2. Assuming that the climate sensitivity is the same for both transitions, we obtain [the climate sensitivity] = 0.49 ± 0.07 K/Wm-2. This suggests 95% likelihood of warming between 1.3 and 2.3 K due to doubling of atmospheric concentration of CO2 (assuming that the CO2 doubling produces the radiative forcing of 3.7 W/m2 according to the IPCC 2007 report). The ECHAM5 model simulation suggests that during the LGM the global average aerosol optical depth might have been almost twice the current value.
The results compatible with climate sensitivity around or below 2 K for doubling of CO2 were recently deduced using cloud resolving models incorporated within GCMs [Miura et al., 2005; Wyant et al., 2006], from observational data [Chylek et al., 2007; Schwartz, 2007], and from a set of GCM simulations constrained by the ERBE (Earth Radiation Budget Experiment) observations [Forster and Gregory, 2006]. All these results together with our work presented in this paper support the lower end of the climate sensitivity range of 2 to 4.5 K suggested by the IPCC 2007 report [Solomon et al., 2007].
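The quoted 95% interval follows directly from the authors' sensitivity estimate: take 0.49 ± 1.96 standard deviations (the usual two-sided 95% bounds for a normal distribution), then multiply by the 3.7 Wm-2 forcing for doubled CO2. A quick sketch of our own, not code from the paper:

```python
# Reproducing the 95% warming interval quoted by Chylek and Lohmann.
# Assumes their uncertainty is treated as one standard deviation of a
# normal distribution, so the 95% bounds sit at +/- 1.96 sigma.

mean, sigma = 0.49, 0.07   # climate sensitivity, K per (W/m^2)
forcing = 3.7              # W/m^2, forcing for a CO2 doubling (IPCC 2007)

low = (mean - 1.96 * sigma) * forcing
high = (mean + 1.96 * sigma) * forcing
print(f"95% warming interval: {low:.1f} to {high:.1f} K")
```

The result rounds to 1.3 to 2.3 K, matching the paper's stated range.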
To long-time readers of World Climate Report (and its predecessors), these results should hardly come as much of a surprise. For at least a good 7 or 8 years, we have repeatedly been telling you to expect about 1.5 to 2.0°C of warming from greenhouse gas increases this century. Chylek and Lohmann’s findings are simply further confirmation of this.
The biggest thing to take home in all of this is that the less the temperature rise, the less the chance for major disruption, such as a large sea level rise, at least anytime soon. That means we have more time to figure out a solution.
Assuredly, had Chylek and Lohmann discovered that the IPCC was underestimating the climate sensitivity, their result would have been front-page news the world over. Instead, they found that the IPCC is likely overestimating the climate sensitivity to CO2, so they were reduced to coverage only at World Climate Report.
Reference:
Chylek, P., and U. Lohmann, 2008. Aerosol radiative forcing and climate sensitivity deduced from the Last Glacial Maximum to Holocene transition. Geophysical Research Letters, 35, L04804, doi:10.1029/2007GL032759.
Originally Published on WorldClimateReport.com.
Featured Image Courtesy of David Castillo-Dominici/Freedigitalphotos.net