Interesting. How exactly do we know? After all, the Thermometer was only invented in the early 17th Century. There's a chance - albeit a slender one - that the measurements show that 1998 was the warmest year in 350 years. How do they know what the temperature was before that?
Easy, say the Climate Warming Crowd. There are lots of "proxies" - other measurements that map pretty well to temperature. Tree rings will vary - growth will typically be faster in warm years, slower in cold ones. Ice cores, pollen counts from cores drilled into prehistoric bogs, even harvest records from medieval monasteries or Imperial Chinese court documents. These are reasonable proxies - everyone agrees on this.
So we have direct temperature readings for 100 or 200 years (the data is surprisingly weak when you go back more than 60 or 70 years). There are tree rings that go back maybe a thousand years. Ice cores will take you back tens or hundreds of thousands of years.
Ah, but these are different types of data. How do you put them together? Splicing:
Splicing data sets is a virtual necessity in climate research. Let’s think about how I might get a 500,000 year temperature record. For the first 499,000 years I probably would use a proxy such as ice core data to infer a temperature record. From 150-1000 years ago I might switch to tree ring data as a proxy. From 30-150 years ago I probably would use the surface temperature record. And over the last 30 years I might switch to the satellite temperature measurement record. That’s four data sets, with three splices.

What's tricky is how you join them. You don't want big discontinuities in the record occurring where the data sets are spliced. Data sets are calibrated, or zeroed, to try to make sure that the record stays smooth. This can be tricky, and can lead to False Positive results - reporting that something is happening, when in reality it's just an artifact of the data instrumentation:
But there is, obviously, a danger in splices. It is sometimes hard to ensure that the zero values are calibrated between two records (typically we look at some overlap time period to do this). One record may have a bias the other does not have. One record may suppress or cap extreme measurements in some way (example - there is some biological limit to tree ring growth, no matter how warm or cold or wet or dry it is). We may think one proxy record is linear when in fact it may not be linear, or may be linear over only a narrow range.

So back to the "Warmest year in 1000 years" headline. Remember Mann's "Hockey Stick" graph, the one from Al Gore's movie An Inconvenient Truth? The temperature was pretty stable until 150 years ago, and then it spiked, remember? What happened 150 years ago?
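To make the calibration step concrete, here is a minimal sketch of the simplest zeroing approach: shift the older record so its average over the overlap period matches the newer record's average, then splice at the start of the overlap. All numbers and the `calibrate_and_splice` helper are hypothetical illustrations, not any group's actual method:

```python
def calibrate_and_splice(old, new, overlap):
    """Shift `old` so its overlap-period mean matches `new`'s,
    then splice: keep `old` before the overlap, `new` from it on.
    `old` and `new` map year -> temperature anomaly; `overlap`
    is the set of years both records cover."""
    old_mean = sum(old[y] for y in overlap) / len(overlap)
    new_mean = sum(new[y] for y in overlap) / len(overlap)
    offset = new_mean - old_mean  # the "zeroing" correction
    spliced = {y: t + offset for y, t in old.items() if y < min(overlap)}
    spliced.update(new)
    return spliced

# Hypothetical proxy record (e.g. tree rings) and gauge record
proxy = {1850: 0.0, 1860: 0.1, 1870: 0.0, 1880: 0.2, 1890: 0.1}
gauge = {1880: 0.5, 1890: 0.4, 1900: 0.6, 1910: 0.7}
record = calibrate_and_splice(proxy, gauge, [1880, 1890])

# The proxy values get shifted up by the overlap-mean difference
# (here +0.3), so no artificial jump appears at the 1880 splice.
```

Note the quiet assumption doing all the work: that the two records really are measuring the same thing, offset by a constant. If one record is biased or compresses extremes, this correction cannot fix it - it can only hide the seam.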
Well, say the Climate Change Crowd, the Industrial Revolution started cranking out, well, industrial quantities of Carbon Dioxide into the atmosphere. All that CO2 is what's to blame, and they have computer models to show it. Fair enough. Ignore the many problems with the models. The Industrial Revolution was built on steam power, which was driven by Coal. And it did hit its stride around 150 years ago.
But is there anything else that happened around 150 years ago? Why, yes. The temperature data sets changed from proxies to actual thermometer readings. The sudden upswing in average global temperature is entirely from a different set of measurements than the earlier data sets. Entirely.
So, could this be a False Positive, an artifact of splicing two different data sets together? Yes, it could be. In fact, it's likely that this is the case, and that's why you don't hear the Climate Change Crowd talk about Hockey Sticks and "Global Warming" anymore. Want proof? What if we ignore the thermometer readings, and just look at the proxy temperatures? We have tree ring data that goes right up to the present - why stop 150 years ago? What does it tell us about recent climate?
So, the proxy data is not accurate enough to get reported by the Climate Change Crowd, but it's plenty accurate enough to show that 800 years ago was cooler? Selection Bias, anyone?
You can see that almost all of the proxy data we have in the 20th century is actually undershooting gauge temperature measurements. Scientists call this problem divergence, but even that label is self-serving. It implies that the proxies have accurately tracked temperatures but are suddenly diverging for some reason this century. In fact, two effects are at work:
- Gauge temperature measurements are probably reading a bit high, due to a number of effects including urban biases
- Temperature proxies, even considering point 1, are very likely under-reporting historic variation. This means that the picture they are painting of past temperature stability is probably a false one.
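The second effect is easy to see with made-up numbers. Suppose a proxy responds to only a fraction of the true temperature swing (the 0.5 sensitivity below is a hypothetical assumption). It tracks the gauge record loosely while temperatures are flat, then visibly undershoots once they move - and, by the same logic, it would have flattened any comparable swings in the past:

```python
def proxy_reading(true_anomaly, sensitivity=0.5):
    # Hypothetical proxy that captures only half of real variation
    return sensitivity * true_anomaly

# Hypothetical gauge anomalies: flat early, then trending up
gauge = [0.0, 0.1, 0.0, 0.2, 0.4, 0.6, 0.8]
proxy = [proxy_reading(t) for t in gauge]
divergence = [g - p for g, p in zip(gauge, proxy)]

# Divergence stays near zero while temperatures are flat and
# grows with the trend - from 0.0 at the start to 0.4 at the end.
print(divergence)
```

A proxy like this paints exactly the picture the bullet points describe: an artificially stable past, plus a 20th-century record that falls short of the gauges.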
All of this just confirms that we cannot trust any conclusions we draw from grafting these two data sets together.
And if you think I'm harsh accusing the scientific community of selection bias, how about this little tidbit about one of the proxy-based data sets:
For some reason, the study’s author cut the data off around 1950. Is that where his proxy ended? No, in fact he had decades of proxy data left. However, his proxy data turned sharply downwards in 1950. Since this did not tell the story he wanted to tell, he hid the offending data by cutting off the line, choosing to conceal the problem rather than have an open scientific discussion about it.

The study’s author? Keith Briffa, who the IPCC named to lead this section of their Fourth Assessment.

When you combine this with repeated errors in the reported data - and with total refusals to release the data for scrutiny - you should be very skeptical of any claims about climate change. Any.
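How much does a truncation like that matter? A toy example with entirely hypothetical numbers shows the effect: a series that rises and then turns down after 1950 tells a very different story depending on where you cut it.

```python
def slope(years, values):
    """Ordinary least-squares trend (units per year)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical proxy series that turns sharply down after 1950
years  = [1900, 1910, 1920, 1930, 1940, 1950, 1960, 1970, 1980]
values = [0.0,  0.1,  0.2,  0.3,  0.4,  0.5,  0.3,  0.1, -0.1]

full      = slope(years, values)          # includes the post-1950 decline
truncated = slope(years[:6], values[:6])  # data cut off at 1950

# With these numbers, the full record shows no net trend (slope 0.0),
# while the series cut at 1950 shows steady warming of 0.01 per year.
```

The cut does not merely trim noise; with numbers like these it manufactures a trend where the full series has none.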
There is a massive, ugly problem with data integrity concerning climate change. Rather than being a done deal, things are getting curiouser and curiouser. Settled? You must be kidding. The science is getting very interesting indeed.
UPDATE 26 November 2009 19:01: More about Dr. Briffa here.