Media all over America and around the world have trumpeted the news: “2014 Was Hottest Year on Earth in Recorded History,” to quote the New York Times headline.
The Times’s lead paragraph touted this as “underscoring scientific warnings about the risks of runaway emissions and undermining claims by climate-change contrarians that global warming had somehow stopped.”
But it’s time to look below the surface—or, as you’ll see in a moment, above it.
Let’s dispense with the simplest error first.
Does 2014’s performance undermine claims that global warming has stopped? Not at all. When one climbs to a plateau, one remains above the elevations leading up to it until one descends—whether the plateau stretches for a hundred yards or a hundred miles. Global warming can have stopped (and indeed there has been no increase in global average temperature for the last 18 years and 3 months) and 2014 can still have been the “hottest year” (by, as we shall see in a moment, a statistically meaningless hundredth of a degree).
Now we move to more complicated things.
“Recorded history” normally means something like “as long as people have been writing,” i.e., around 5,000 years. Over that time, paleoclimate data show that the Minoan Warm Period (roughly 3,000 years ago), the Roman Warm Period (roughly 2,000 years ago), and the Medieval Warm Period (roughly 1,000 years ago) were warmer than the present.
But in the case of global average temperature (GAT), “recorded history” is actually the period for which we have properly comparable temperature measurements over enough of the globe to justify extrapolating an average.
And what is that? Well, it depends.
The most credible global temperature data—properly comparable throughout the period investigated, and uncontaminated by local anomalies—are obtained by satellite measurement. That stretches back to 1979—which rather deflates “in recorded history.”
The satellite data also deflate “hottest year.” 2014 was only the third warmest in the satellite record, behind 1998 (by 0.15˚C) and 2010 (by 0.13˚C). It only beat 2005 by 0.01˚C and the next six hottest years (2013, 2002, 2009, 2007, 2003, 2006) by at most 0.08˚C. Since the margin of error is about 0.1˚C, that makes those differences statistically meaningless.
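For readers who want the arithmetic spelled out, here is a minimal sketch in Python using the gaps quoted above and the roughly 0.1˚C margin of error (all figures are the ones cited in this article; the margin is an approximation, not an official value):

```python
# A difference between two years is only meaningful if it exceeds the
# measurement's margin of error. Gaps below are 2014's lead (or deficit)
# against other years in the satellite record, as quoted above.

MARGIN_OF_ERROR = 0.10  # degrees C, approximate

gaps = {
    "1998": -0.15,  # 2014 trailed 1998
    "2010": -0.13,  # 2014 trailed 2010
    "2005": +0.01,  # 2014 led 2005 by a hundredth of a degree
    "2013": +0.08,  # the next six years trailed by at most 0.08 C
}

for year, gap in gaps.items():
    significant = abs(gap) > MARGIN_OF_ERROR
    verdict = "distinguishable" if significant else "statistically indistinguishable"
    print(f"2014 vs {year}: {gap:+.2f} C -> {verdict}")
```

Run it and the point is plain: 2014’s deficits to 1998 and 2010 exceed the margin of error, while its tiny leads over 2005 and the rest do not.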
But the actual “recorded history” behind this headline is surface temperature measurements kept by NASA’s Goddard Institute for Space Studies (GISS) and the National Oceanic and Atmospheric Administration (NOAA). It goes back to 1880—conveniently excluding the three warm periods mentioned above.
As David Whitehouse pointed out, even in these data, 2014 topped 2010 by only 0.01˚C, only a tenth of the margin of error and therefore meaningless.
These data are also far less credible than the satellite data.
Why? Because they’re far from comprehensive and subject to many kinds of contamination via changes in measurement technology, numbers and locations of stations, and compliance—or not—with quality standards.
Consider the last. A major survey of U.S. Historical Climatology Network (USHCN) stations found that only 7.9 percent achieved the Climate Reference Network’s (CRN) two most reliable classifications, with expected errors under 1˚C, while 21.5 percent could be expected to have errors from 1˚C to 2˚C, 64.4 percent to have errors between 2˚C and 5˚C, and 6.2 percent to have errors greater than 5˚C.
Total global warming throughout the entire twentieth century is generally thought to have been only about 0.7˚C. The expected errors of over 92 percent of U.S. stations exceed that by more than two-fifths, and the expected errors of over 70 percent are roughly three times that large or more.
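A quick back-of-the-envelope check, using the survey percentages above and the roughly 0.7˚C century figure (the 1˚C and 2˚C comparison values are simply the lower bounds of the survey’s error classes):

```python
# Checking the arithmetic: how do the stations' expected errors compare
# with the whole twentieth century's warming of about 0.7 C?

century_warming = 0.7  # degrees C, approximate

pct_error_over_1C = 21.5 + 64.4 + 6.2   # stations with expected error above 1 C
pct_error_over_2C = 64.4 + 6.2          # stations with expected error above 2 C

print(f"Stations with expected error > 1 C: {pct_error_over_1C:.1f}%")
print(f"  1 C exceeds the century's warming by {(1.0 - century_warming) / century_warming:.0%}")
print(f"Stations with expected error > 2 C: {pct_error_over_2C:.1f}%")
print(f"  2 C is {2.0 / century_warming:.1f} times the century's warming")
```

The output: 92.1 percent of stations carry expected errors larger than the century’s entire warming, and 70.6 percent carry errors nearly three times as large.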
And U.S. stations (and those of other high-income nations) are generally thought to be the best in the world! How many stations in Africa, Asia, and Latin America from 1880 to yesterday do you think fell into CRN’s top two classifications?
But there are other problems with the surface data. Measuring stations aren’t placed randomly (and therefore representatively) around the world. Instead, they’re mostly in or near urban areas, which are warmer than non-urban areas. A 2007 study found that about half the apparent increase in global average surface temperature in 1980–2002 was attributable to this “urban heat island” effect.
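To see how placement bias works, consider an illustrative sketch with made-up numbers (the trends, urban share, and heat-island increment below are hypothetical, chosen only to mirror the rough magnitude that study reported):

```python
# Synthetic illustration: if most stations sit in or near cities whose
# readings carry extra local warming, the simple station mean overstates
# the true background trend. All numbers here are hypothetical.

rural_trend = 0.10   # true background warming, degrees C per decade
uhi_extra   = 0.125  # extra urban-heat-island warming at city stations
urban_share = 0.80   # suppose 80% of stations are in or near urban areas

measured_trend = (urban_share * (rural_trend + uhi_extra)
                  + (1 - urban_share) * rural_trend)
uhi_artifact = urban_share * uhi_extra

print(f"True background trend: {rural_trend:.2f} C/decade")
print(f"Measured station mean: {measured_trend:.2f} C/decade")
print(f"Heat-island artifact:  {uhi_artifact:.2f} C/decade")
```

With these made-up numbers, half of the measured trend is an artifact of where the thermometers sit, the same order of effect the 2007 study estimated.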
These and other contaminations make “data homogenization” necessary. Scientists apply various statistical techniques to try to make non-comparable raw data comparable.
How credible are the techniques?
Contrary to what would be expected if errors in the raw data were random (equally likely to overstate as to understate actual temperature), the homogenization techniques consistently raise recent raw data and lower earlier raw data, increasing the apparent rate and magnitude of warming. As Georgia Tech climatologist Judith Curry remarked, “The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that ‘government bureaucrats are cooking the books’.”
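One simple way to test that point, sketched below with hypothetical numbers: subtract the raw series from the homogenized one and examine the adjustments themselves. If errors were random, the adjustments should scatter around zero with no trend of their own.

```python
# Hypothetical annual anomalies (degrees C), for illustration only.
raw      = [0.10, 0.12, 0.09, 0.15, 0.14, 0.18]
adjusted = [0.06, 0.09, 0.08, 0.16, 0.17, 0.22]

# The adjustment is whatever homogenization added to (or subtracted
# from) each raw value.
adjustments = [a - r for a, r in zip(adjusted, raw)]
print("Adjustments:", [f"{x:+.2f}" for x in adjustments])

# Random errors should average near zero in every era. Here the early
# years are adjusted down and the late years up, so the adjustments
# themselves add to the warming trend.
mean_early = sum(adjustments[:3]) / 3
mean_late  = sum(adjustments[3:]) / 3
print(f"Mean early adjustment: {mean_early:+.3f} C")
print(f"Mean late adjustment:  {mean_late:+.3f} C")
```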
See how the game’s played? And the normal citizen, even the normal legislator, even the normal legislative aide (who studies these things more carefully than the legislator) is completely unaware.
This is why the Cornwall Alliance for the Stewardship of Creation will continue to warn people against climate policies that would trap billions in poverty and reduce the economic well-being of billions more in the quixotic quest to fight global warming, as we did recently in Protect the Poor: Ten Reasons to Oppose Harmful Climate Change Policies.