For years I have been pointing out that the super-sophisticated computer climate models on which the IPCC, national environment agencies, national academies of science, and of course the many climate-alarmist advocacy groups and journalists depend for their predictions of catastrophic anthropogenic global warming (CAGW)
- predict, on average, 2 to 3 times the warming actually observed over the relevant periods;
- that they failed to predict the complete lack of statistically significant global warming from about early 1997 to whatever the latest end date happened to be, most recently late 2015 (after which a super-El Niño shortened the “pause” for a few months, though the rapid cooling in May/June and the prospect of a strong La Niña taking over make it likely that the “pause” will be restored to full length and then drawn out longer);
- and (my focal point for this blog post) that 95% of them predict more warming than observed, which implies that their errors are not random (if they were, they’d have fallen below observations about as often as above, and by about the same amounts) but are driven by some kind of bias, whether honest mistake or dishonest fudging, written right into the models (see the quick sanity check sketched just after this list).
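To see just how lopsided that is, here’s a minimal sanity-check sketch in Python. The numbers are illustrative round figures of my own (100 model runs, 95 of them running hot), not a tally from any particular model ensemble: it computes the probability of such a split occurring by chance if the models’ errors really were random, i.e., equally likely to fall above or below observations.

```python
from math import comb

# Hypothetical round numbers for illustration only (real model runs are
# not fully independent, so treat this as a back-of-the-envelope check):
n_models = 100    # model runs
n_too_warm = 95   # runs predicting more warming than observed

# If errors were random -- equally likely above or below observations --
# the "too warm" count would follow a Binomial(n_models, 0.5) distribution.
# Probability of 95 or more runs landing on the warm side purely by chance:
p = sum(comb(n_models, k) for k in range(n_too_warm, n_models + 1)) / 2**n_models

print(f"P(>= {n_too_warm} of {n_models} too warm | random errors) = {p:.1e}")
# ~ 6.3e-23: effectively zero, which is why a one-sided miss of that size
# points to a systematic bias in the models rather than random scatter.
```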
From these observations I’ve inferred that the models provide no rational basis for any prediction about future global average temperature, and hence that they also provide no rational basis for any policy.
Plenty of folks, held in thrall by the mystery of computers and white-coat-clad scientists, have wondered how the models could be so systematically mistaken.
Dr. Patrick Frank, a chemist at the Stanford Synchrotron Radiation Lightsource (SSRL) at the SLAC National Accelerator Laboratory and author of 68 peer-reviewed publications, explains that in gratifying detail in a lecture presented at this summer’s Doctors for Disaster Preparedness annual meeting. The full video, with all his PowerPoint slides embedded, is well worth watching.
About halfway through his presentation, Frank concentrates on the models’ inaccuracies in simulating cloud response to changing atmospheric CO2 concentration. From the IPCC’s own reports he extracts the information that the models’ annual average cloud-forcing error is about ±4 Watts per square meter, roughly 114 times larger than the ~0.035 Watts per square meter annual increase in CO2 forcing the models are trying to resolve. This uncertainty propagates through the step-wise model projections out into the future, yielding an uncertainty envelope in the year 2100 of about ±14 degrees C, as illustrated in the accompanying screen shot from his lecture. This doesn’t mean that global average temperature in 2100 could be 14 degrees higher, or 14 degrees lower, than predicted. It means we simply don’t know, at all, what it will be. It’s impossible to predict.
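To make the propagation step concrete, here’s a minimal sketch of the kind of compounding Frank describes. The per-step figure and the linear setup are my own illustrative assumptions, not Frank’s exact equations: each annual projection step is treated as carrying an independent temperature uncertainty (a hypothetical ±1.5 °C, standing in for whatever the ±4 W/m² cloud-forcing error translates to through an assumed forcing-to-temperature sensitivity), and independent uncertainties accumulate in quadrature, so the envelope grows like the square root of the number of steps.

```python
import math

# Illustrative assumptions, not Frank's exact equations:
# each annual projection step carries an independent temperature uncertainty
# u_step_C, a hypothetical stand-in for the temperature equivalent of the
# +/-4 W/m^2 cloud-forcing error.
u_step_C = 1.5                 # hypothetical per-step uncertainty, deg C
years = range(2017, 2101)      # annual steps out to 2100

# Independent per-step uncertainties combine in quadrature (root-sum-square):
#   u_total(N) = sqrt(u_1^2 + ... + u_N^2) = u_step_C * sqrt(N)
for n, year in enumerate(years, start=1):
    u_total = u_step_C * math.sqrt(n)
    if year in (2020, 2050, 2100):
        print(f"{year}: uncertainty envelope = +/-{u_total:.1f} deg C")

# Prints roughly +/-3.0 (2020), +/-8.7 (2050), +/-13.7 (2100): an envelope
# (~+/-14 deg C by construction here) that dwarfs the few degrees of warming
# projected, so the projection says nothing about where within it the real
# temperature will land.
```

The point isn’t the particular numbers but the square-root growth: step-wise projection compounds the per-step error rather than averaging it away.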
And that agrees nicely with what the IPCC itself stated, boldly, in its 2001 Third Assessment Report (Working Group I, Section 14.2.2.2): “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
If you want to understand what’s wrong with the climate models and why they can’t be relied on to inform any policy making, you could do much worse than to watch Frank’s highly informative presentation. And here’s the essence of his conclusion:
Seems strikingly like what I’ve said.