Critics of CAGW alarmism have for several years had fun pointing out the large and growing discrepancy between computer climate model simulations of global average temperature and real-world observations, with this graph being one of our favorite exhibits:
Now Alex Sen Gupta, Senior Lecturer in the School of Biological, Earth and Environmental Sciences at UNSW Australia, has come to the models’ defense, and John Cook, Climate Communication Research Fellow at The University of Queensland (note that this title doesn’t make him a climatologist or any kind of natural scientist), has given Sen Gupta’s article his imprimatur.
The heart of Sen Gupta’s argument is this:
To understand what’s happening, it is critical to realise that the climate changes for a number of reasons in addition to CO₂. These include solar variations, volcanic eruptions and human aerosol emissions.
The influence of all these “climate drivers” is included in modern climate models. On top of this, our climate also changes as a result of natural and largely random fluctuations – like the El Niño Southern Oscillation (ENSO) and the Interdecadal Pacific Oscillation (IPO) – that can redistribute heat to the deep ocean (thereby masking surface warming).
Such fluctuations are unpredictable beyond a few months (or possibly years), being triggered by atmospheric and oceanic weather systems. So while models do generate fluctuations like ENSO and IPO, in centennial scale simulations they don’t (and wouldn’t be expected to) occur at the same time as they do in observations.
If the models’ errors were random, as one would expect if they accurately represented how Earth’s chaotic, coupled, non-linear fluid-dynamic climate system operates, the runs would be distributed roughly evenly between warmer and cooler than the real-world observations. They aren’t. Roughly 95% to 98% of them run warmer than the observations. The errors, then, aren’t random; they are the product of a warm bias built into the models.
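To see why that lopsidedness matters, here is a minimal sign-test sketch in Python. The counts are hypothetical, chosen only to illustrate the reasoning rather than taken from any actual model ensemble: if each run were equally likely to land warmer or cooler than the observations, the odds of 95 out of 100 landing on the warm side are vanishingly small.

```python
from math import comb

def sign_test_p_value(n_runs: int, n_warm: int) -> float:
    """One-sided sign test: probability of at least n_warm of n_runs
    model runs coming out warmer than observations, if each run were
    equally likely to fall on either side (p = 0.5, i.e. no bias)."""
    return sum(comb(n_runs, k) for k in range(n_warm, n_runs + 1)) / 2 ** n_runs

# Hypothetical counts for illustration only, not actual ensemble statistics:
# 95 of 100 runs warmer than the observed record.
p = sign_test_p_value(n_runs=100, n_warm=95)
print(f"P(at least 95 warm runs out of 100 | unbiased models) = {p:.1e}")
```

Under that admittedly simplified assumption of independent runs, a split this lopsided is effectively impossible by chance, which is the point being made above.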
Sen Gupta also presents a graph comparing model simulations with real-world observations from 1900 to the present. The correlation between simulations and observations appears much better there than in the graph above. Sen Gupta claims this shows the models really are pretty good at century-long simulations, even if they sometimes miss badly on decadal (or bidecadal) time scales.
But don’t be fooled. The modelers already knew the observed temperatures for the first 90 to 100 years of that period and so could tune their models to reproduce them. That’s a definite no-no: matching data you already have is not a test of predictive skill.
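The objection here is the familiar distinction between in-sample fit and out-of-sample skill. The sketch below uses entirely synthetic numbers (no actual model output or temperature record) to show how a simple trend tuned to years whose values were already known can match those years closely and still miss on years it never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "temperature anomaly" series, purely illustrative:
# a steady rise through 1990, then a flat stretch afterwards.
years = np.arange(1900, 2016)
truth = np.where(years <= 1990, 0.008 * (years - 1900), 0.008 * 90)
obs = truth + rng.normal(0.0, 0.05, size=years.size)

# "Tune" a simple statistical stand-in for a model on the years whose
# observations were already known (1900-1990)...
known = years <= 1990
coefs = np.polyfit(years[known], obs[known], deg=1)
fitted = np.polyval(coefs, years)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# ...then compare its fit on those years with its error on the
# held-out years (1991-2015) it was never fitted to.
print("in-sample RMSE, 1900-1990:     ", round(rmse(fitted[known], obs[known]), 3))
print("out-of-sample RMSE, 1991-2015: ", round(rmse(fitted[~known], obs[~known]), 3))
```

Agreement over the tuned period is nearly guaranteed by construction; it is the held-out years that test whether a model has captured anything real.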
For a good critique of Sen Gupta’s piece, see Eric Worrall’s “Claim: Data does not prove that climate models are wrong,” at WattsUpWithThat.com. It’s all worth reading, but here’s the heart of it:
The problem with this claim is that, as Gupta says, the climate models are supposed to take these random fluctuations into account. Climate models are supposed to accommodate randomness by providing a range of predicted values – the range is produced by plugging in different values for the random elements which cannot be predicted. However, observations are right on the lower border of that range. The divergence between climate models and observations is now so great that climate models are on the brink of being incontrovertibly falsified.
As Judith Curry recently said, “If the pause continues for 20 years (a period for which none of the climate models showed a pause in the presence of greenhouse warming), the climate models will have failed a fundamental test of empirical adequacy.”
This is important, because it strikes at the heart of the claim that climate models can detect human influence on climate change. If climate models cannot model climate, if the models cannot be reconciled with observations, how can the models possibly be useful for attributing the causes of climate change? If scientists defending the models claim the discrepancy is because of random fluctuations in the climate, which have pushed the models to the brink of falsification, doesn’t this demonstrate that, at the very least, the models very likely underestimate the amount of randomness in the climate? Is it possible that the entire 20th century warming might be one large random fluctuation? …
If current mainstream climate models cannot predict the climate, then scientists have to consider the possibility that other models, with different assumptions, can do a better job. It is no accident that Monckton, Soon, Legates and Briggs’s paper on an irreducibly simple climate model, which does a better job of hindcasting climate than mainstream models, has received over 10,000 downloads. As every scientific revolution in history has demonstrated, being right is ultimately more important than being mainstream, even if it sometimes takes a few years to win acceptance.
For now, mainstream climate scientists are mostly hiding in the fringes of their estimates. When they acknowledge it at all, they claim that the anomaly, the pause, is a low-probability event which is still consistent with climate models. Hans von Storch, one of the giants of German climate research, claimed a few years ago that 98% of climate models cannot be reconciled with reality – which still, for now, leaves a 2% possibility that climate scientists are right.
Is the world really preparing to spend billions, even trillions, of dollars on a 2% bet?
One hopes not.