There's been much ballyhooing about the "hiatus" in the thermodynamically meaningless "global average (dry bulb) temperature" (numerical averages of dry-bulb temperatures are a very poor measure of the physically meaningful and important global heat content). There are even people claiming that it falls outside the range of unphysical model predictions (model predictions are unphysical because important processes are given unphysical parameterizations to deal with computational limits: this does not make the models useless, but their detailed predictive power is necessarily low).

Given that we are comparing to unphysical quantities, the difference shouldn't come as any great surprise, but putting that aside for the moment, is the "hiatus" even real?

In one corner we have Denialists who can't imagine humanity having a big effect on the climate, claiming that climate models are now in error by more than their own error bars.

In the other corner we have Warmists claiming it's all cherry picking and 1998 was an ultra-warm year. They can become quite vociferous about it.

In my professional life I do a lot of robust estimation, so I'm sensitive to issues like cherry picking.

As such, I thought it might be fun to look at the problem like a physicist, rather than a climatologist or an economist. So this is what I did...

The obvious way to avoid cherry picking the start date for any particular fit to the data is to work backward, from today, and see what happens as you either move a window along or extend a window wider, taking in more and more of the past.

The obvious way to fit the data is with a median fitter. Median fitting finds a line such that half the data are higher and half the data are lower. As such, outlier years like 1998 don't count for much. It's hard to make a median fit go wrong.
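The post doesn't spell out the fitter, so here is a minimal sketch of one way to do a median (least-absolute-deviation) line fit. It relies on the fact that an optimal L1 line always passes through at least two of the data points, so for a small dataset a brute-force search over point pairs is exact. The data below are made up purely to show the insensitivity to a 1998-style outlier; they are not the HadCRUT series.

```python
from itertools import combinations

def median_fit(xs, ys):
    """Return (slope, intercept) minimising sum(|y - (a + b*x)|)."""
    best = None
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        if x1 == x2:
            continue
        b = (y2 - y1) / (x2 - x1)   # line through this pair of points
        a = y1 - b * x1
        sad = sum(abs(y - (a + b * x)) for x, y in zip(xs, ys))
        if best is None or sad < best[0]:
            best = (sad, b, a)
    return best[1], best[2]

# A wild outlier (like 1998) barely moves the fitted slope.
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0.0, 0.1, 0.2, 5.0, 0.4, 0.5, 0.6]   # the point at x=3 is an outlier
slope, intercept = median_fit(xs, ys)
print(round(slope, 3))   # -> 0.1
```

A least-squares fit to the same data would be dragged well above the 0.1 trend by the outlier; the median fit ignores it entirely.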

So I ran a median fit on the most recent Hadcrut data for the thermodynamically meaningless "global average temperature", starting from the present day and working backward.

The first thing I did was simply take in a larger and larger window, going back from the present. I started with a one year window, fitting the monthly data and running the window wider and wider (if I were doing this for publication I'd probably do the yearly data to show that the averaging method is irrelevant, but this is a rainy evening spent with a bottle of wine and I can't be arsed to dig that deeply.)
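The widening-window pass is easy to sketch. Everything below is an assumption for illustration: a synthetic series stands in for the monthly HadCRUT data, and a plain least-squares slope stands in for the median fitter (the window machinery is identical either way).

```python
def slope(points):
    """Ordinary least-squares slope of (x, y) pairs (stand-in fitter)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Synthetic record: steady 0.01/yr warming that goes flat after 2004.
series = [(year, 0.01 * min(year - 1900, 104)) for year in range(1900, 2015)]

# Grow the window backward from the most recent point, one year at a time,
# recording (start year of window, fitted slope) at each width.
slopes = []
for width in range(10, len(series) + 1):
    window = series[-width:]          # most recent `width` years
    slopes.append((window[0][0], slope(window)))

print(slopes[0][1], slopes[-1][1])
```

The shortest windows see only the flat tail (slope near zero); the widest windows recover the long-term trend, which is the shape of the effect described above.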

The result is pretty clear: there is some noise in the smaller windows, but when things settle down there is a real minimum on a decadal time-scale. This is clear long before 1998 gets folded into the mix, although you can see the sharp dip that 1998 produces in the graph (the "Year" is the lower end of the fitting window... the upper end is fixed at May 2014.)

So please let's shut up about "cherry picking". That low slope value in the early 2000s? Not cherry picked. It's just there, in the data. It's real.

Now obviously, it would be anti-scientific and wrong to just leave it at that. We see that there is a minimum in the slope of the thermodynamically meaningless "global average temperature" on a one-decade time-scale. There's also a peak on about a thirty-year timescale, and a long-term positive value that looks pretty significant.

If we plot the fitted curves from different window-sizes against the data we can see this effect pretty clearly, especially when compared to the data:

The green and blue curves show fits to the full dataset (back to 1860) and to the past 90 years. As expected, they are not too different.

The purple line for the fit over the past 40 years shows the strength of the recent warming, and the turquoise line for the last decade shows the "hiatus": the fit is almost flat, independently of any particular cherry-picked starting point. I just fit a decade; eight or twelve years would give much the same result. It's not hugely robust, but it doesn't require selecting one particular instant in time, either.

Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference. The analysis so far suggests there is something interesting going on in the last decade. The obvious way to test this is to look at every decade in the dataset. That's what I did next, fitting one-decade windows across the full range of the data. The median slope has some interesting features.
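The sliding-decade pass can be sketched the same way. Again the data and the fitter are stand-ins (a synthetic record with gently accelerating warming, and a least-squares slope instead of the median fitter): the point is the mechanics of fitting every 10-year window in the record.

```python
def ols_slope(points):
    """Least-squares slope of (x, y) pairs (stand-in for the median fitter)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    return (sum((x - mx) * (y - my) for x, y in points)
            / sum((x - mx) ** 2 for x, _ in points))

# Synthetic record with gently accelerating warming (quadratic in time).
series = [(t, 0.00005 * (t - 1860) ** 2) for t in range(1860, 2015)]

# One fit per 10-year window, keyed by the window's start year.
decadal = [(series[i][0], ols_slope(series[i:i + 10]))
           for i in range(len(series) - 9)]

print(decadal[0][1], decadal[-1][1])   # decadal slope rises across the record
```

For a real, noisy series the resulting slope-versus-year curve is exactly the kind of object described next: bumpy decade-scale structure superimposed on any long-term behaviour.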

The minimum in the decadal median slope can be seen in recent years. This is a real feature of the data, and anyone who denies it is in a state of sin. So is anyone who pays much attention to the anomaly in 1961, which is due to a particularly low period in the early '50s. Even on a one-decade timescale climate is pretty noisy.

But... there are rather a lot of places where the decadal slope is considerably lower than today, eh what?

Furthermore, there seems to be... something of a trend. I committed a sin myself by fitting the first derivative to a straight line (this was not a median fit, but rather least-squares) and getting the green line shown in the graph. The best fit to the overall decadal median slope of the thermodynamically meaningless "global average temperature" is positive--which indicates the slope is increasing--and the size of the coefficient is about ten times the standard error.
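The coefficient-to-standard-error comparison is a standard least-squares calculation. Here is a sketch under stated assumptions: a synthetic slope series (a small upward trend plus alternating "noise") stands in for the real decadal median slopes, just to show how the slope and its standard error come out of the fit.

```python
import math

def ols_with_stderr(points):
    """Least-squares slope of (x, y) pairs and its standard error."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    b = sum((x - mx) * (y - my) for x, y in points) / sxx
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in points)
    se = math.sqrt(rss / ((n - 2) * sxx))   # standard error of the slope
    return b, se

# Hypothetical decadal-slope series: gentle upward trend plus alternating noise.
trend = [(x, 0.0001 * x + 0.001 * (-1) ** x) for x in range(100)]
b, se = ols_with_stderr(trend)
print(b / se)   # the fitted coefficient sits many standard errors above zero
```

When the ratio of coefficient to standard error is large, as the post reports for the real slope series, the upward trend in the decadal slope is hard to dismiss as noise.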

So how about that hiatus, eh?