Thursday, Jan 29, 2015

Marotzke's mischief

Readers may recall Jochem Marotzke as the IPCC bigwig who promised that the IPCC would not duck the question of the hiatus in surface temperature rises and then promptly ensured that it did no such thing.

Yesterday, Marotzke and Piers Forster came up with a new paper that seeks to explain away the pause entirely, putting it all down to natural variability. There is a nice layman's explanation at Carbon Brief.

For each 15-year period, the authors compared the temperature change we've seen in the real world with what the climate models suggest should have happened.

Over the 112-year record, the authors find no obvious pattern in whether real-world temperature trends are closer to the upper end of what the models project, or the lower end.

In other words, while the models aren't capable of capturing all the "wiggles" along the path of rising temperatures, they are slightly too cool just as often as they're slightly too warm.

And because the observed trend over the full instrumental record is roughly the same as the model one, we are cordially invited to conclude that there's actually no problem.
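For concreteness, here is a minimal sketch of the kind of 15-year trend comparison being described, with invented observed and model series standing in for the real data (an illustration of the idea, not the paper's actual method or numbers):

```python
# Toy version of the comparison: rolling 15-year OLS trends from "observations"
# versus a model ensemble. All series here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2012)                                            # 112-year record
obs = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)            # fake observations
models = 0.007 * (years - 1900) + rng.normal(0, 0.1, (20, years.size))   # fake 20-model ensemble

def trend(y, x):
    """Least-squares slope of y against x, in degrees per year."""
    return np.polyfit(x, y, 1)[0]

window = 15
for start in range(0, years.size - window + 1, window):
    sl = slice(start, start + window)
    obs_trend = trend(obs[sl], years[sl])
    model_trends = np.array([trend(m[sl], years[sl]) for m in models])
    rank = (model_trends < obs_trend).mean()     # where obs sits in the model spread
    print(f"{years[sl][0]}-{years[sl][-1]}: obs {obs_trend:+.4f} C/yr, "
          f"ensemble percentile {100 * rank:.0f}%")
```

Because these toy series share the same underlying trend by construction, the observed 15-year trend lands in the upper and lower halves of the ensemble spread about equally often, which is the pattern the paper reports for the real record.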

If Carbon Brief is reporting the paper correctly, then this is hilariously bad stuff. Everybody knows that the twentieth century is hindcast roughly correctly because the models are "tuned", usually via the aerosol forcing. So fast-warming/big-aerosol-cooling models hindcast correctly, and so do slow-warming/small-aerosol-cooling models. The problem is that the trick of fudging the aerosol data so as to give a correct hindcast can't be applied to forecasts: reality will be what reality will be. The fact is we have models with a wide range of TCRs, and they have all been fudged. Some might turn out to be roughly correct, but it is just as possible that they are all wrong. Given the experience with out-of-sample verification and the output of energy budget studies, it may well be more likely than not that they are all wrong.
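The compensation argument is easy to make concrete. A toy calculation (all numbers invented for illustration; real models are tuned in far more indirect ways):

```python
# Two toy "models" with different climate sensitivities, each given a
# compensating aerosol cooling so that both hindcast the observed warming
# exactly. Their forecasts, where the aerosol fudge no longer helps, diverge.
ghg_hist, ghg_future = 1.8, 3.7   # W/m^2 GHG forcing, historical vs future (invented)
observed_warming = 0.8            # K over the historical period (invented)

sensitivities = {"high-sensitivity": 0.8, "low-sensitivity": 0.5}  # K per W/m^2

for name, sens in sensitivities.items():
    aerosol = observed_warming / sens - ghg_hist   # pick aerosol forcing to fit the past
    hindcast = sens * (ghg_hist + aerosol)         # matches observations by construction
    forecast = sens * (ghg_future + aerosol)       # no observation left to fit against
    print(f"{name}: aerosol {aerosol:+.2f} W/m^2, "
          f"hindcast {hindcast:.2f} K, forecast {forecast:.2f} K")
```

Both toy models reproduce the 0.8 K hindcast exactly, yet their forecasts differ by more than half a degree: agreement with the past, achieved this way, says nothing about which sensitivity is right.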


Reader Comments (23)

Does Marotzke believe his own paper? Ignorant, or disingenuous? It's always the same question, the same question.
======================

Jan 29, 2015 at 12:22 PM | Unregistered Commenterkim

15 years! That's so three years ago. The pause is now 18 years, which, according to them, must be much less probable than a 15-year pause.

Jan 29, 2015 at 12:23 PM | Unregistered CommenterCeed

[Self snipped]

Jan 29, 2015 at 12:31 PM | Unregistered CommenterNCC 1701E

These guys must spend every day wishing they could use such 'techniques' on the lottery, for that way they could become very rich indeed. Meanwhile, when you play 'heads you lose, tails I win', it is of course very easy to claim you're always right, but it remains f-all to do with science.

Jan 29, 2015 at 12:41 PM | Unregistered CommenterKnR

Absolutely brilliant!

Using the hindcasted underestimation of the 1930s warming by the models to balance off the serious overestimation of the recent plateau to come up with the conclusion that the models are right on target is a masterpiece of climate statistics. Who else could spin two flaws into such a positive result?

Like a golfer who leaves half of his putts well short of the hole and sends the rest much too long, then claims that his putting was very good!

Jan 29, 2015 at 12:58 PM | Unregistered CommenterRomanM
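RomanM's golfer translates directly into the distinction between bias and accuracy. A toy calculation with invented model-error numbers:

```python
# Hypothetical model-minus-observed trend errors: half well "short"
# (1930s-style underestimates), half well "long" (recent overestimates).
errors = [-0.12, -0.10, -0.11, 0.11, 0.10, 0.12]            # K/decade, invented

mean_error = sum(errors) / len(errors)                      # bias: cancels to zero
mean_abs_error = sum(abs(e) for e in errors) / len(errors)  # typical miss: stays large

print(f"mean error:          {mean_error:+.3f} K/decade")     # +0.000
print(f"mean absolute error: {mean_abs_error:.3f} K/decade")  # 0.110
```

The mean error is zero even though every individual putt missed by a wide margin.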

What is so magical about 1985? Why does the trend start in 1985 (or thereabouts)?

Jan 29, 2015 at 1:24 PM | Unregistered CommenterJeff Norman

Natural variability means - never having to admit you're wrong.

Jan 29, 2015 at 1:35 PM | Unregistered CommenterTinyCO2

Have the salaries of the IPCC's top experts failed to rise with the hiatus, or continued to rise in accordance with the failed IPCC predictions for global warming?

Jan 29, 2015 at 1:48 PM | Unregistered CommenterGolf Charlie

TinyCO2, Admitting natural variability hits one of the main tenets of their CAGW theory: that man-made CO2 will cause catastrophic global warming. Natural variability large enough to overcome AGW clearly means we no longer have to panic and spend £Trillions.

Jan 29, 2015 at 1:57 PM | Unregistered CommenterBudgie

Are the computer models reliable?

Computer models are an essential tool in understanding how the climate will respond to changes in greenhouse gas concentrations, and other external effects, such as solar output and volcanoes.

Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future.

UK Met Office publication: Warming. Climate change - the facts

Jan 29, 2015 at 2:09 PM | Registered CommenterMartin A

There's what the climate models predict will happen and then there's ...

https://www.youtube.com/watch?v=azxoVRTwlNg

Pointman

Jan 29, 2015 at 2:26 PM | Unregistered CommenterPointman

Thanks RomanM, exactly what I was about to say.
Since 1950, the only times when the models' trends are below the actuals occur when the later end of the interval is affected by a large volcanic eruption, to which the models are clearly overly sensitive. The effect exaggerates the reduction in the models' OLS trend, causing them to dip temporarily below the actuals.

To get a good idea about models running hot, one need only look at the reference period 1961-1990, where by construction the mean of each model matches the mean of the observations. At the start of the interval, most models are below the observed; by the end, most are above.

Jan 29, 2015 at 2:30 PM | Registered CommenterHaroldW
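HaroldW's baseline point can be sketched in a few lines: anchoring model and observed series to a common 1961-1990 mean forces agreement in the middle of the record, so a faster-warming model must sit below the observations early and above them late (invented trends, for illustration only):

```python
# Anchor a faster-warming toy model and toy observations to the same
# 1961-1990 reference mean and compare the endpoints.
import numpy as np

years = np.arange(1900, 2012)
obs = 0.007 * (years - 1900)       # invented observed warming, 0.7 K/century
model = 0.010 * (years - 1900)     # invented faster-warming model, 1.0 K/century

ref = (years >= 1961) & (years <= 1990)
obs_a = obs - obs[ref].mean()      # both series now have zero 1961-1990 mean
model_a = model - model[ref].mean()

print("model minus obs in 1900:", round(model_a[0] - obs_a[0], 2))    # negative
print("model minus obs in 2011:", round(model_a[-1] - obs_a[-1], 2))  # positive
```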

Martin A
I have always wondered whether that last sentence that you highlighted is a massive leap of faith or an equally massive non sequitur.
As I said in a very early post on here, I was once introduced to a horse-race betting system that (in simple terms) meant backing the winner or runner-up in a specific race the next time it ran. The five-year results that were used to prove the system gave (IIRC) about a 60% success rate at average odds of about 4/1. Needless to say, these results were not replicated after the mug had parted with his money. I see precious little difference between the tipster's efforts at hindcasting and the climateers'. Basically any fule can do it, and why anyone with the standard issue of brain cells should assume that past performance in a situation as chaotic as climate is a more reliable guide to the future than past performance in the marginally less chaotic system of horse racing beggars belief.
It's a bit like tweaking your models to make 2+2=4 and then proclaiming that, on that basis and given the latest data, 2+2 will equal 5.07 to a 95% confidence level in three years' time.

Jan 29, 2015 at 2:41 PM | Registered CommenterMike Jackson

The fact is we have models with a wide range of TCRs, many of which go to 11.

Actually, I'd prefer to say they have models, not we.

Jan 29, 2015 at 2:42 PM | Unregistered Commentermichael hart

I see no advance here on the 2009 postulate of Swanson & Tsonis*, wherein the temperature rise could be assumed linear, plus ocean cycling that causes large natural deviations from the line. Whether this underlying trend was natural (e.g. a Bond event, or recovery from the Little Ice Age) or man-made (CO2, land use) is of course impossible to say (though Swanson jumped that shark anyway, while Tsonis was more correctly guarded). However, one thing we can now say for sure is that those pessimistic parabolic increases in temperature, which stem entirely from the initial puerile assumption that the models were adequately accurate over short timescales, are definitely out. Hence we can expect only a continuation of the very mild 0.6 K/century if the trend is man-made, and even a possible decline (thanks to the PDO cycle reversal) if it is natural.

*http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

None of this of course affects the IPCC conclusion in AR5 that nothing much unusual or unnatural is yet detectable in any of the data (except possibly at the poles); hence all circularly reasoned attributions and consequent hand-wringing are either hugely speculative or entirely bogus.

Jan 29, 2015 at 3:55 PM | Unregistered CommenterJamesG

First para:

Readers may recall Jochem Marotzke as the IPCC bigwig who promised that the IPCC would not duck the question of the hiatus in surface temperature rises and then promptly ensured that it did no such thing.

Shouldn't that read: "..promptly ensured that it did."?

Jan 29, 2015 at 5:10 PM | Unregistered CommenterCharlie Flindt

Bishop, do I take it from your reference to Carbon Brief that you have not read the actual paper, or even the abstract? It's worth doing so if you haven't.

The first part simply compares the various 15-year trends from the observations and the models. But the second part addresses precisely your concern about forcings and feedbacks.

It does this based on a linear energy balance equation, essentially the same one Nic Lewis uses. So, to the extent you trust the simple linear equation, the conclusion is that the current pause has very little to say on whether the models are too sensitive.

Jan 29, 2015 at 6:03 PM | Unregistered Commentertilting@windmills
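For readers unfamiliar with it, the linear energy-budget relation referred to, in the form used in Otto et al. and Lewis-style studies, estimates the transient climate response as TCR = F_2x * dT / dF. A minimal sketch with purely illustrative numbers:

```python
# Energy-budget estimate of transient climate response (TCR).
# The numbers below are illustrative magnitudes, not the paper's values.
F_2x = 3.7   # W/m^2, forcing from a doubling of CO2
dT = 0.75    # K, observed temperature change over the period (illustrative)
dF = 1.9     # W/m^2, forcing change over the period (illustrative)

TCR = F_2x * dT / dF
print(f"TCR estimate: {TCR:.2f} K per CO2 doubling")   # about 1.46 K here
```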

Martin A
I have always wondered whether that last sentence that you highlighted is a massive leap of faith or an equally massive non sequitur.
.....
Jan 29, 2015 at 2:41 PM Mike Jackson

Well, I imagine they do believe it - it has often been repeated by Met Office staff, including here on BH.

For decades, engineers have recognised the fallacy of "testing on the training data". If you make a system by using samples of statistical data to "tune" or "train" it (essentially, you optimise the parameters of the system so that it produces the required behaviour), then, if you use the same data to test the system, you will get hopelessly optimistic estimates of its performance. This applies even if the test data is not identical to the training data but has some statistical dependency on it, even if the dependency is quite slight.

In making climate models, past observations are used in all sorts of ways to "parameterise" the simple formulas that represent the parts that are not well understood and also to adjust in various ways the parts that are moderately well understood. Information from observed data creeps in all over the place, including affecting the assumptions made and then incorporated into the models in ways that the creators of the models may not even be conscious of.

No audit trail is kept of the way past data is used in creating climate models. It would be impossible to take one of today's climate models and eliminate from it any information that had been derived from observations of atmosphere and climate after, say, 1970 (or any other specific date up to the present day). So any testing of such models against past climate invariably involves the "testing on the training data" fallacy.

What to make of the Met Office's comment, clearly intended to be taken seriously? I think that, to be charitable, notwithstanding the cost of their supercomputers and the hundreds of qualified staff involved, they don't know their arse from their elbow when it comes to using computer models to predict future climate. Or at least when it comes to recognising and admitting their inability to validate their models.

Jan 29, 2015 at 6:51 PM | Registered CommenterMartin A
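Martin A's "testing on the training data" fallacy is standard fare in statistics and machine learning, and a toy demonstration shows how badly in-sample testing flatters a tuned model (invented data; the polynomial stands in for any heavily parameterised model):

```python
# Fit an over-flexible polynomial to noisy "training" data, then score it on
# the same data and on fresh data drawn from the same process.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
truth = lambda t: np.sin(2 * np.pi * t)
y_train = truth(x) + rng.normal(0, 0.3, x.size)   # data used to tune the model
y_test = truth(x) + rng.normal(0, 0.3, x.size)    # fresh data, same process

coeffs = np.polyfit(x, y_train, 9)                # heavily "tuned" 9th-degree fit
rmse = lambda y, yhat: np.sqrt(np.mean((y - yhat) ** 2))

print("RMSE on the training data:", round(rmse(y_train, np.polyval(coeffs, x)), 3))
print("RMSE on fresh data:       ", round(rmse(y_test, np.polyval(coeffs, x)), 3))
```

The first number comes out markedly smaller: the fit has absorbed the training noise, and only the fresh data reveals how optimistic the in-sample score was.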

The "reality" to which the climate models are tuned to is largely composed of surface temperature data sets. For a number of countries (Iceland, Australia, USA and more recently Paraguay) there is evidence of adjustment biases that tune the empirical data to the models.

Jan 30, 2015 at 12:31 PM | Unregistered CommenterKevin Marshall

@ Jan 29, 2015 at 12:58 PM | RomanM

well, THAT is an excuse I'm going to use this coming golf season!

I can even go further ... at the 18th, and checking the scores, I might try to persuade my marker to write 1 less for every hole ...

one I putted above the hole, one I left below ... that amounts to actually 2 IN!

Statistics, you know!

single handicap, here I come!

Jan 30, 2015 at 3:26 PM | Unregistered Commenterducdorleans

ducdorleans:

Unfortunately, it only works when you are playing with climate scientists. ;)

Jan 30, 2015 at 7:03 PM | Unregistered CommenterRomanM

First, there are several inter-related problems here. To me, "tuning" a climate model is equivalent to falsification of science. Their untuned models have never correctly forecast any future climate, starting with Hansen's "business as usual" model in 1988. He attempted to forecast coming global temperature through 2019; we have seen most of these years and all of his predictions have been wrong. Some thought that using supercomputers instead of the IBM mainframe that Hansen used would improve the accuracy, but this did not happen. The output from supercomputers running million-line code was no better than Hansen's work in 1988. And once the cessation of warming dubbed a "hiatus" appeared, the computers got lost entirely. They have had 27 years since Hansen to get their house in order and this has not happened. It is time to recognize that computer prediction of future climate cannot be done and to close the modeling operation entirely.

Now let's take up the Marotzke and Forster claims. Ever since the existence of the hiatus became known there have been numerous attempts to explain it away, often in peer-reviewed scientific articles. Anthony kept track of them but gave up when their number passed fifty. To me the most fascinating ones were those looking for the lost heat at the ocean bottom. I find it hard to explain how such nonsense got past the peer review process. But then again, my personal experience with it should explain it in part. When Al Gore claimed that a twenty-foot sea rise was heading for us, I found an article in Science showing that sea level rise for the previous 80 years had been 2.46 millimeters per year. That is under ten inches per century, not twenty feet, and I quickly submitted an article about it to Science and to Nature, in succession. In both cases it was returned without bothering with such niceties as peer review, and Al Gore got a Nobel Prize for his nonsense.

The current attempt to deny the existence of the hiatus is a new twist towards unreality by the core climate science establishment. You may not know it, but this is not the first but the second time that these guys have suppressed or attempted to suppress the existence of a hiatus. The first time was in the eighties and nineties, and it was successful. GISS, NCDC and HadCRUT cooperated to create a fake warming for that temperature stretch, amounting to a fake rise of 0.1 degrees from 1979 to 1997. Proof of collaboration is identical computer footprints in all three data sets; the fake warming and computer footprints are not present in satellite temperature curves. I discovered this while writing my book "What Warming?" (Amazon carries it) and even put a warning about it into the preface. But nothing happened and they just continued their operation into the twenty-first century. They are continuing with a fake upslope on top of the current hiatus, which is so ridiculous now that their 2010 El Nino is higher than the super El Nino of 1998. That is impossible. And from this it was only a small step for them to declare 2014 the warmest year ever.

During the eighties and nineties, ENSO was active and created five El Nino peaks during that period, with La Nina valleys in between. This gave the fakers cover, because the existence of a wave train may confuse people trying to determine the global mean temperature. Figure 15 in my book shows how it is done. First, use a wide marker to cover the fuzz around the trend line; this will give you the outline of the wave train. Next, put a dot at a point halfway between each El Nino peak and the bottom of the neighboring La Nina valley. These dots mark the location of the global mean temperature at that date. I did this for the entire wave train and the dots lined up in a horizontal straight line, proving absence of warming for 18 years. These 18 years should be added to the 18 years of the current hiatus, giving a total of 36 years of no warming since 1988. This takes up three quarters of the time that has elapsed since the IPCC was formally established. Any temperature changes that have taken place have to fit within the twelve-year time slot not taken up by any hiatus.

Feb 1, 2015 at 3:27 AM | Unregistered Commenterarno-arrak
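For what it's worth, the peak/valley midpoint procedure described in the comment above amounts to this (invented anomaly values, purely to show the arithmetic):

```python
# Midpoint of an El Nino peak and the neighboring La Nina valley,
# used in the comment above as an estimate of the underlying mean.
el_nino_peak = 0.35     # K anomaly, invented
la_nina_valley = 0.05   # K anomaly, invented

midpoint = (el_nino_peak + la_nina_valley) / 2
print(f"claimed global mean at that date: {midpoint:.2f} K")
```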

I reckon that the modelled aerosol cooling in the 1970s is specious, because the cooling is largely due to increased La Nina episodes and cooling of the AMO, while inhibiting insolation at the surface with aerosols would promote El Nino conditions as a negative feedback. This also happens with larger volcanic eruptions.
I suggest that the increased La Nina activity and the cooling of the AMO in 1973-76 were wind-driven negative feedbacks to the fast solar wind during the period:

http://snag.gy/hSqT4.jpg

And the accelerated warming of the AMO and the Arctic since the mid-1990s is a negative feedback to the general decline in solar wind pressure/density since then, with weaker solar plasma forcing increasing negative North Atlantic Oscillation conditions:

http://snag.gy/dXp1s.jpg

Feb 28, 2015 at 8:07 PM | Unregistered CommenterUlric Lyons
