Guest column by Robert L. Bradley Jr.
“Climate modeling is central to climate science….” (Steven Koonin, below)
When the history of climate modeling comes to be written in some distant future, the major story may well be how the easy, computable answer turned out to be the wrong one, resulting in overestimated warming and false scares from the enhanced (man-made) greenhouse effect.
Meanwhile, empirical and theoretical evidence is mounting toward this game-changing verdict despite the best efforts of the establishment to look the other way.
Consider a June 3 press release from the University of Colorado Boulder, “Warmer Clouds, Cooler Planet,” subtitled “precipitation-related ‘feedback’ cycle means models may overestimate warming.”
“Today’s climate models are showing more warmth than their predecessors,” the announcement begins.
But a paper published this week highlights how models may err on the side of too much warming: Earth’s warming clouds cool the surface more than anticipated, the German-led team reported in Nature Climate Change.
“Our work shows that the increase in climate sensitivity from the last generation of climate models should be taken with a huge grain of salt,” said CIRES Fellow Jennifer Kay, an associate professor of atmospheric and oceanic sciences at CU Boulder and co-author on the paper.
The press release goes on to state how incorporating this negative feedback will improve next-generation climate models, something that is of the utmost importance given the upcoming Sixth Assessment of the Intergovernmental Panel on Climate Change (IPCC). But will conflicted modelers and the politicized IPCC be upfront about the elephant in the room?
Background
Strong positive feedbacks are what turn the modest, even beneficial, warming from carbon dioxide (CO2) and other man-made greenhouse gases (GHG) into something alarming. The assumption has been that increased evaporation in a warmer world (primarily from the oceans) causes a strongly positive feedback, doubling or even tripling the primary warming.
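In the standard linear-feedback bookkeeping (illustrative notation of my own, not taken from the press release or the paper), a no-feedback warming $\Delta T_0$ becomes

\[
\Delta T \;=\; \frac{\Delta T_0}{1 - f},
\]

so a feedback fraction of $f = 1/2$ doubles the primary warming and $f = 2/3$ triples it, while a negative $f$ (such as the cloud effect reported above) shrinks it instead.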
In technical terms, water molecules trap heat, and clouds or vapor in the upper tropical troposphere (where the air is extremely dry) trap substantially more heat, thickening the greenhouse. How water behaves in this upper layer (roughly 30,000–50,000 feet), either blocking (magnifying) or releasing (diminishing) the heat, remains in dispute, leaving even the sign of the externality unknown for climate economics. And it is in the upper troposphere that climate models most conflict with the data.
Assuming fixed relative atmospheric humidity allows modelers to invoke ceteris paribus and set aside the altered physical processes that might well negate the secondary warming. This controversial assumption opens the door to hyper-modeling at odds with reality. (For economists, the analogy would be assuming “perfect competition” to unleash hyper-theorizing.)
For decades, model critics have questioned the simplified treatment of complexity. Meanwhile, climate models have predicted much more warming than has transpired.
Theoreticians have long been at odds with model technicians. MIT’s Richard Lindzen, author of Dynamics in Atmospheric Physics, has advanced different hypotheses about why the water-vapor feedback is much smaller than modeled. Judith Curry, whose blog Climate Etc. is a leading source for following physical-science and related developments, is another critic of high-sensitivity models.
“There’s a range of credible perspectives that I try to consider,” she states. “It’s a very complex problem, and we don’t have the answers yet.”
And now we have way too much confidence in some very dubious climate models and inadequate data sets. And we’re not really framing the problem broadly enough to … make credible projections about the range of things that we could possibly see in the 21st century.
Mainstream Recognition
Climate scientists know that climate models are extremely complicated and fragile. In What We Know About Climate Change (2018, p. 30), Kerry Emanuel of MIT explains:
Computer modeling of global climate is perhaps the most complex endeavor ever undertaken by humankind. A typical climate model consists of millions of lines of computer instructions designed to simulate an enormous range of physical phenomena….
Although the equations representing the physical and chemical processes in the climate system are well known, they cannot be solved exactly…. The problem here is that many important processes happen at much smaller scales.
The parameterization problem is akin to the fallacies of macroeconomics, where the crucial causality of individual action is ignored. Microphysics is the driver of climate change, yet the equations are unsettled and sub-grid scale. Like macroeconomics, macro-climatology should have been highly qualified and demoted long ago.
My mentor Gerald North, former head of the Department of Atmospheric Sciences at Texas A&M, had a number of observations about the crude, overrated nature of climate models back in 1998–99 that are still relevant today.
We do not know much about modeling climate. It is as though we are modeling a human being. Models are in position at last to tell us the creature has two arms and two legs, but we are being asked to cure cancer.
There is a good reason for a lack of consensus on the science. It is simply too early. The problem is difficult, and there are pitifully few ways to test climate models.
One has to fill in what goes on between 5 km and the surface. The standard way is through atmospheric models. I cannot make a better excuse.
The different models couple to the oceans differently. There is quite a bit of slack here (undetermined fudge factors). If a model is too sensitive, one can just couple in a little more ocean to make it agree with the record. This is why models with different sensitivities all seem to mock the record about equally well. (Modelers would be insulted by my explanation, but I think it is correct.)
[Model results] could also be sociological: getting the socially acceptable answer.
The IPCC Fifth Assessment (2013), the “official” or mainstream report, recognizes fundamental uncertainty while accepting model methodology and results at face value. “The complexity of models,” the report states (p. 824), “has increased substantially since the IPCC First Assessment Report in 1990….”
However, every bit of added complexity, while intended to improve some aspect of simulated climate, also introduces new sources of possible error (e.g., via uncertain parameters) and new interactions between model components that may, if only temporarily, degrade a model’s simulation of other aspects of the climate system. Furthermore, despite the progress that has been made, scientific uncertainty regarding the details of many processes remains.
The humbling nature of climate modeling was publicized by The Economist in 2019. “Predicting the Climate Future is Riddled with Uncertainty” explained:
[Climate modeling] is a complicated process. A model’s code has to represent everything from the laws of thermodynamics to the intricacies of how air molecules interact with one another. Running it means performing quadrillions of mathematical operations a second—hence the need for supercomputers.
[S]uch models are crude. Millions of grid cells might sound a lot, but it means that an individual cell’s area, seen from above, is about 10,000 square kilometres, while an air or ocean cell may have a volume of as much as 100,000 km³. Treating these enormous areas and volumes as points misses much detail.
Clouds, for instance, present a particular challenge to modellers. Depending on how they form and where, they can either warm or cool the climate. But a cloud is far smaller than even the smallest grid-cells, so its individual effect cannot be captured. The same is true of regional effects caused by things like topographic features or islands.
Building models is also made hard by lack of knowledge about the ways that carbon—the central atom in molecules of carbon dioxide and methane, the main heat-capturing greenhouse gases other than water vapour—moves through the environment.
“But researchers are doing the best they can,” The Economist concluded.
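A quick back-of-the-envelope check of those resolution figures (my own arithmetic, with an assumed vertical-level count, not numbers taken from the article) shows how “millions of grid cells” still leaves each column with an enormous footprint:

```python
import math

# Earth's surface area (radius ~6,371 km) in km^2: roughly 5.1e8.
surface_area_km2 = 4 * math.pi * 6371.0 ** 2

cell_area_km2 = 10_000                               # ~100 km x 100 km cell, per The Economist
surface_columns = surface_area_km2 / cell_area_km2   # ~51,000 columns covering the globe
vertical_levels = 60                                 # assumed, typical-order level count
total_cells = surface_columns * vertical_levels      # a few million 3-D cells

print(f"surface columns: {surface_columns:,.0f}")    # ~51,000
print(f"total 3-D cells: {total_cells:,.0f}")        # ~3,000,000
```

Even a cell count in the millions, in other words, is consistent with every cloud, thunderstorm, and small island falling below the model’s resolution.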
Climate models, in fact, are significantly overestimating warming, by as much as one-half. And the gap is widening as a coolish 2021 is well underway. As for the future, anthropogenic warming is constrained by the logarithmic rather than linear effect of GHG forcing. This saturation effect means that as the atmosphere contains more CO2, the warming from each additional unit of CO2 becomes smaller and smaller. The warming from a doubling of CO2 is not repeated by adding the same amount again (tripling the original concentration) but only by doubling the new concentration, that is, quadrupling the original.
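A common back-of-the-envelope expression for CO2 forcing (the widely cited logarithmic approximation, offered here only as an illustration and not drawn from the sources above) makes that arithmetic explicit:

\[
\Delta F \;\approx\; 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W/m^2},
\]

so raising the concentration from $C_0$ to $2C_0$ adds roughly $5.35 \ln 2 \approx 3.7\ \mathrm{W/m^2}$ of forcing, adding the same amount again (from $2C_0$ to $3C_0$) adds only about $2.2\ \mathrm{W/m^2}$, and matching the first increment requires going all the way from $2C_0$ to $4C_0$.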
The “mitigation window,” we are told, is rapidly closing, which explains the shrill language from prominent politicians. But it is the underlying climate models, not the climate itself, that are running out of time.
“Unsettled” Goes Mainstream
The crude methodology and false conclusions of climate modeling are emerging from the shadows. Physicist and computer expert Steven Koonin, in his influential Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters (chapter 4), explains:
Climate modeling is central to climate science…. Yet many important phenomena occur on scales smaller than the 100 km (60 mile) grid size (such as mountains, clouds, and thunderstorms), and so researchers must make “subgrid” assumptions to build a complete model….
Since the results generally don’t much look like the climate system we observe, modelers then adjust (“tune”) these parameters to get a better match with some features of the real climate system.
Undertuning leaves the model unrealistic, but overtuning “risks cooking the books—that is, predetermining the answer,” adds Koonin. He then quotes from a paper co-authored by 15 world-class modelers:
… tuning is often seen as an unavoidable but dirty part of climate modeling, more engineering than science, an act of tinkering that does not merit recording in the scientific literature…. Tuning may be seen indeed as an unspeakable way to compensate for model errors.
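To make the idea concrete, here is a purely illustrative sketch of what such tuning can look like in principle: a toy zero-dimensional energy-balance model with a made-up feedback parameter and a made-up “observed” record, not the code or procedure of any actual modeling group.

```python
import numpy as np

def toy_model(lambda_fb, forcing, c_heat=8.0, dt=1.0):
    """Toy zero-dimensional energy balance: dT/dt = (F - lambda_fb * T) / c_heat."""
    temps = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        temps[t] = temps[t - 1] + (forcing[t] - lambda_fb * temps[t - 1]) / c_heat * dt
    return temps

# Made-up forcing series and "observed" warming record (illustration only).
years = np.arange(100)
forcing = 0.04 * years            # steadily rising forcing, W/m^2
observed = 0.015 * years          # hypothetical observed anomaly, deg C

# "Tuning" in this toy sense: sweep the feedback parameter and keep
# whichever value best reproduces the record.
candidates = np.linspace(0.5, 3.0, 26)
errors = [np.mean((toy_model(lam, forcing) - observed) ** 2) for lam in candidates]
best = candidates[int(np.argmin(errors))]
print(f"tuned feedback parameter: {best:.2f} W/m^2 per deg C")
```

The point is only North’s earlier one: models with quite different parameter values can be made to mock the record about equally well; nothing here describes how any real model is actually tuned.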
Conclusion
Climate modeling has arguably been worse than nothing because false information has been presented as true and as “consensus.” Alarmism and disruptive policy activism (forced substitution of inferior energies; challenges to lifestyle norms) have taken on a life of their own. “Fire, ready, aim” has replaced prudence, from science to public policy.
Data continue to confound naïve climate models. Very difficult theory is slowly but surely explaining why. The climate debate is back to the physical science, which it never should have left.
This article is adapted with minor revisions from one first published by the American Institute for Economic Research and is used here by permission of the author.
Robert L. Bradley Jr. is the founder and CEO of the Institute for Energy Research, the editor of MasterResource.org, and a Senior Fellow of the American Institute for Economic Research. He is the author of eight books on energy history and public policy and blogs at MasterResource. Bradley received a B.A. in economics from Rollins College, an M.A. in economics from the University of Houston, and a Ph.D. in political economy from International College. He has been a Schultz Fellow for Economic Research and a Liberty Fund Fellow for Economic Research, and in 2002 he received the Julian L. Simon Memorial Award for his work on energy and sustainable development.
Graph by Roy W. Spencer, University of Alabama Huntsville.