Mostly off-topic: Yesterday, Czech President Václav Klaus gave a talk at Johns Hopkins University in D.C. about the EU, the evolution of the recent crisis, and a new case for capitalism: full transcript. In a discussion with the students, our leader also identified plans for a "world government" as utter left-wing cosmopolitan nonsense that he will attempt to annihilate.

One month ago, we discussed the
paper by McShane and Wyner (MW) in Annals of Applied Statistics that demonstrated something we have known for years - namely that the methodology behind the hockey sticks is not a reliable tool to reconstruct past temperatures. The very method is flawed and can be seen to produce hockey stick graphs out of red noise, as McShane and Wyner have explicitly concluded, too.
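To see what this means in the simplest possible setting, here is a minimal sketch of the screening effect - my own toy illustration, not MW's actual code, and all of the numbers (200 proxies, an AR(1) coefficient of 0.9, a correlation threshold of 0.3) are arbitrary:

```python
# Toy illustration of how screening red-noise "proxies" against a warming
# calibration period can manufacture a hockey stick (not MW's actual code).
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies, calib = 1000, 200, 150   # made-up sizes
phi = 0.9                                     # AR(1) "redness" of the noise

# Red-noise "proxies" that contain no climate signal at all.
proxies = np.zeros((n_proxies, n_years))
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + rng.normal(size=n_proxies)

# A rising "instrumental" record covering only the last `calib` years.
instrumental = np.linspace(0.0, 1.0, calib) + 0.1 * rng.normal(size=calib)

# Screening: keep only the proxies that correlate with the instrumental trend.
scores = np.array([np.corrcoef(p[-calib:], instrumental)[0, 1] for p in proxies])
selected = proxies[scores > 0.3]

# Composite of the "passing" proxies: flat noisy shaft, uptick at the end.
reconstruction = selected.mean(axis=0)
print(f"{len(selected)} of {n_proxies} pure-noise proxies pass the screening")
print("mean over the shaft (first 850 years):",
      round(float(reconstruction[:-calib].mean()), 3))
print("mean over the blade (last 150 years) :",
      round(float(reconstruction[-calib:].mean()), 3))
```

The shaft averages out to roughly zero while the blade is bent upward, because the selection step itself injects the recent trend into the composite.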
The journal has just published a Mannian reply to MW by Gavin Schmidt, Michael Mann, and Scott Rutherford. It's quite incredible: they're clearly completely clueless, or at least they pretend to be.
After thanking MW for introducing new methods, they say that the methods are bad because MW lack "proper data quality control" and "pseudoproxy tests". Apparently misunderstanding absolutely everything that MW have written, the hockey team repeats the wrong claim that the proxies should be chosen by "objective standards of reliability", by which they obviously mean the flawed Mannian methods of cherry-picking for hockey sticks.
But MW have proved - and were not the first ones to prove - that these methods do not produce reliable results. MW use the very data used by the papers of Mann et al., so any criticism of the "[raw] data quality control" is clearly a complaint about Mann's group itself. And MW's analysis doesn't confirm that the Mannian methods produce OK results. Quite on the contrary, they lead to reconstructions that are about as reliable as random guesswork.
Schmidt, Mann, and Rutherford dedicate a lot of their time to secondary technical issues such as annual vs decadal time scales, the selection of the number of principal components, and so on. But they are not ready to look at the fundamental assumptions of their method, so they pretend that they haven't heard a single word from MW: instead, they clearly want to mindlessly run their rats through their mazes without ever checking whether the methodology is right and whether they have been careful about critical effects that could (and do) invalidate their work.
They're just not willing to understand or they're incapable of understanding that their methods are fundamentally wrong and they can't fix them by changing the value of one number or another.
Also, for Mann to say that someone else - who wrote a paper that is all about quality control and checks of reliability - fails to impose quality control is just breathtakingly arrogant, coming from a person who has made every blunder one can possibly make when it comes to quality control; see Anthony Watts's comments, which are mostly about these matters. After all, once again, MW use the same data. They just analyze them more carefully.
The hockey team criticizes MW for not having used pseudoproxy tests. To be sure, pseudoproxies may be important for accurate and reliable climate reconstructions. They are generated from some kind of climate model and they are meant to reproduce realistic patterns of autocorrelations in space and time - and the right "color of noise" - which is clearly desirable when you want to evaluate the accuracy of one method or another.
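For concreteness, the usual pseudoproxy recipe looks roughly like the following sketch; the helper name make_pseudoproxy, the signal-to-noise ratio of 0.5, and the AR(1) coefficient of 0.7 are my illustrative choices, not anything taken from the papers under discussion:

```python
# Sketch of the standard pseudoproxy recipe: take a model-simulated temperature
# series and degrade it with colored noise at a chosen signal-to-noise ratio.
import numpy as np

def make_pseudoproxy(model_temp, snr=0.5, phi=0.7, rng=None):
    """Degrade a simulated temperature series into a pseudoproxy.

    snr : assumed ratio of signal standard deviation to noise standard deviation
    phi : AR(1) coefficient controlling the "color" of the added noise
    """
    rng = rng or np.random.default_rng()
    noise = np.zeros(len(model_temp))
    for t in range(1, len(model_temp)):
        noise[t] = phi * noise[t - 1] + rng.normal()
    noise *= model_temp.std() / (snr * noise.std())   # rescale to the desired SNR
    return model_temp + noise

# A toy stand-in for climate-model output (a slow random drift).
rng = np.random.default_rng(1)
model_temp = np.cumsum(rng.normal(scale=0.05, size=1000))
pseudo = make_pseudoproxy(model_temp, snr=0.5, phi=0.7, rng=rng)
print("correlation with the underlying model series:",
      round(float(np.corrcoef(model_temp, pseudo)[0, 1]), 3))
```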
However, the hockey team authors are clearly unable to distinguish particular proposed climate models from reality, so they are constant victims - as well as perpetrators - of circular reasoning. If you test whether a climate model XY will produce self-consistent patterns of variations whose statistical properties may be compared in various ways, you can be sure that the answer is Yes. But this tautological conclusion doesn't mean that these patterns agree with reality. It doesn't mean that the model or any of its features is right.
A fully off-topic piece of entertainment to make the atmosphere a bit relaxed: the Slovak and Czech music scenes have finally identified women as being redundant in love duets. ;-)
The misunderstanding of the difference between a model or assumption that has already been verified, and one that has not, is self-evident throughout the text by the hockey team. For example, they include the following cute comment in parentheses:
We note that the term 'pseudoproxy' was misused in MW to instead denote various noise models.

Oh, I see, so the term was "misused". However, unless one inserts some unverified or biased assumptions about the past climate, all climate models ultimately have to boil down to "noise models" when it comes to the production of pseudoproxies, simply because an unbiased expectation about the past temperatures is a form of noise. It's just the "detailed properties" of the noise that can be chosen to be more or less realistic.
However, if you produce your pseudoproxies by a particular model, XY, and you use these pseudoproxies to choose your proxies, your tests, and to make other choices, all conclusions that are "highly consistent" with XY will obviously get a boost. It's not hard to see that this boost is empirically unjustified: as a piece of evidence, it is completely spurious. After all, if you were assuming a different model, different conclusions would be "favored". You shouldn't be (pleasantly) surprised when a methodology looking for patterns consistent with your assumptions ends up confirming these assumptions. It's inevitable.
Such an agreement between your assumptions and their consequences is not a "test" in any sense. The only way to test whether the discrepancies between your model and the measured reality (which are clearly nonzero) can be interpreted as tolerably small noise is to compare your model with other models - with other kinds of noise.
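Here is a toy demonstration of the circularity; everything in it is a hypothetical placeholder - a linear trend stands in for the assumed model XY, a symmetric bump for the unknown true history, and the proxies are pure noise:

```python
# Toy demonstration: proxies selected for agreement with an assumed history XY
# "verify" XY almost by construction, even though they carry no signal at all.
import numpy as np

rng = np.random.default_rng(5)
n_years, n_proxies = 500, 2000
true_history = np.exp(-((np.arange(n_years) - n_years / 2) ** 2) / (2 * 50.0 ** 2))
assumed_xy = np.linspace(-1.0, 1.0, n_years)        # the assumed model history XY

proxies = rng.normal(size=(n_proxies, n_years))     # pure noise, no signal at all

# Choose the proxies that look "consistent" with the assumed model XY.
r = np.array([np.corrcoef(p, assumed_xy)[0, 1] for p in proxies])
composite = proxies[r > 0.05].mean(axis=0)

corr = lambda a, b: float(np.corrcoef(a, b)[0, 1])
print("composite vs assumed XY   :", round(corr(composite, assumed_xy), 2))
print("composite vs true history :", round(corr(composite, true_history), 2))
```

The composite "verifies" the assumed XY rather well and knows nothing about the actual history, which is exactly why this kind of agreement proves nothing.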
While the existing "generic" climate models are bad at predicting the dynamics of the global mean temperature (and they surely want to assume/show that the climate wasn't changing in the past as much as it has been changing recently), they're even worse when it comes to predicting the "color of the noise" in temperature as a function of time and space (and their combinations). Just think about their ability to predict the regional climate, which is even worse than their skill for global averages. So there's no justifiable reason to favor pseudoproxies produced by your favorite popular models over general noise that is parameterized in a simple way, by its color or autocorrelation exponents. Quite on the contrary: you have to make such comparisons. MW have done so. The result is that the combination of models and methods used by Mann et al. is no better than guesswork with a few tunings.
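One way to see why the assumed color of the noise matters so much is to redo the screening from the sketch above for several autocorrelation coefficients (again with made-up sample sizes and the same arbitrary threshold of 0.3):

```python
# How many pure-noise series survive a correlation screening, as a function of
# the AR(1) coefficient (the "color" of the noise)? Toy numbers throughout.
import numpy as np

rng = np.random.default_rng(4)
n_years, n_series, calib = 1000, 500, 150
target = np.linspace(0.0, 1.0, calib)              # rising calibration target

def pass_rate(phi):
    """Fraction of pure AR(1) noise series with corr > 0.3 to the rising target."""
    series = np.zeros((n_series, n_years))
    for t in range(1, n_years):
        series[:, t] = phi * series[:, t - 1] + rng.normal(size=n_series)
    r = np.array([np.corrcoef(s[-calib:], target)[0, 1] for s in series])
    return float(np.mean(r > 0.3))

for phi in (0.0, 0.5, 0.9, 0.99):                  # from white to very red noise
    print(f"phi = {phi:4.2f}   screening pass rate = {pass_rate(phi):.2f}")
```

The redder the noise, the more pure-noise series pass the screening, so a method's claimed skill has to be benchmarked against several noise classes, not against one favorite model.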
The criticism directed against MW is thus not substantiated by the evidence. MW have done exactly what an unbiased scientist has to do.
The hockey team's paper doesn't display too much understanding of the important statistical issues raised by MW and others, but it does boast a lot of ambitious statements. Near the end, we learn:
Assessing the skill of methods that do not work well (such as Lasso) and concluding that no method can therefore work well, is logically flawed.

Well, indeed, if you show that one method doesn't work, it doesn't mean that all methods fail to work. However, it's still plausible that all methods in a universality class are invalid. Nevertheless, this is not what MW have argued. They have evaluated the pretty explicit methods used by the Mannian papers - and these methods were shown to be no better than guesswork. That's the point that Mann et al. are unwilling to even consider - let alone listen to.
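For readers who want to see what such an evaluation looks like in practice, here is a minimal Lasso-on-proxies sketch - a simplified stand-in for the kind of test MW perform, not their actual setup; the sample sizes and the use of scikit-learn's LassoCV are my own choices:

```python
# Minimal Lasso-on-proxies evaluation: fit on a calibration block, predict a
# withheld block, and compare against a trivial climatological-mean forecast.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n_years, n_proxies, holdout = 150, 40, 30
temp = np.cumsum(rng.normal(scale=0.1, size=n_years))   # stand-in instrumental record
proxies = rng.normal(size=(n_years, n_proxies))          # stand-in proxy matrix

X_cal, y_cal = proxies[:-holdout], temp[:-holdout]        # calibration block
X_val, y_val = proxies[-holdout:], temp[-holdout:]        # withheld block

lasso = LassoCV(cv=5).fit(X_cal, y_cal)                   # penalty chosen by CV
pred = lasso.predict(X_val)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("Lasso holdout RMSE        :", round(rmse(y_val, pred), 3))
print("climatological-mean RMSE  :", round(rmse(y_val, y_cal.mean()), 3))
print("nonzero proxy weights     :", int(np.sum(lasso.coef_ != 0)))
```

With pure-noise proxies, the Lasso typically cannot beat the trivial climatological-mean forecast; running the same comparison with a real proxy matrix is the kind of null test that MW insist on.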
Indeed, Gavin, if you assume that your climate model XY based on your favorite assumptions is right, you will be able to choose proxies (and also statistical tests!) that agree with it, and claim that there is a "consistent picture". But that doesn't mean that XY must be a right description of reality. Chances are that it is completely wrong. Every step in such an analysis strongly depends on the assumptions - so it can't be viewed as a test of whether the assumptions are right, because it doesn't really compare different possible assumptions (unless some model agrees 100% accurately and requires no fine-tuning, and we're surely not there yet).
It's clear that Schmidt, Mann, and Rutherford haven't really addressed the main point raised by MW - and others - that the methodology is just guaranteed to produce the same result regardless of the actual reality (in this case, of the past climate). They seem completely deaf. It must be annoying for a sensible climate scientist to be part of a community where a clique of arrogant, combative, and completely deluded zealots who shouldn't be there is clearly and systematically unable or unwilling to listen to rational arguments, even if they're presented in the most transparent way.
Also, I am really uncertain whether Schmidt, Mann, and Rutherford have read the MW paper or at least its abstract. They may simply be so afraid of seeing the truth - that they have defended a wrong science for more than a decade - that they don't have the courage to read papers such as MW, or at least the key portions.
However, Schmidt, Mann, and Rutherford are clearly obsessed with emitting irrelevant references to the summary for policymakers of an IPCC report or with trying to find arguments that Al Gore et al. were right about the "hottest decade in the millennium" (even though their "certainty" is just 80-96 percent). Such thermometer pissing contests and political interpretations may be OK for the popular press, but does this stuff really belong in expert publications, especially in replies to a paper that carefully analyzes a technical glitch of a method and where the exchange of ideas shouldn't be interrupted by superficial screams from the "popular debate"?
I don't think so.
Schmidt, Mann, and Rutherford simply fail as scientists.
Bonus: Pachauri recommended to resign
Both The Telegraph and the BBC run stories about the growing pressure for Pachauri to resign as the boss of the IPCC. The BBC also mentions that it was apparently the IAC (InterAcademy Council) panel's intent to achieve a speedy resignation of Pachauri. I agree with this hunch.
But I also think that if the IAC report were addressed to someone like Dijkgraaf, he would sensitively interpret the homeopathic concentration of the criticism and he would resign. ;-) However, the IAC panel has incorrectly calibrated the required concentration of their criticism as a function of the recipient of their message.
You know, Rajendra Pachauri is more like a Stalin. You have to shoot him three times and give him a push with your hand, and only then is there a chance that he will fall. ;-) Many people fail to realize what kind of power-thirsty monsters have filled some of these chairs.