Wednesday
Mar 21, 2012

Hopeful fudging

I briefly met Chris Hope of the Judge Business School at the Cambridge Conference last year - Chris works in climate policy if I remember correctly. He has just started following me on Twitter and sent a tweet in response to reader Andrew's guest piece on mathematical models.

He picked up on the statement that "invariably [models] require 'tuning' to real world measurements" and responded:

Aren't climate scientists criticised for this?

This seemed a reasonable point to me.

 


Reader Comments (43)

I have a set of data points and fit a curve to them as best I can. You always have the data points as your reference frame. The fit is never perfect, but that is part of the "art" of engineering.

The data points though are actual measurements. Measurements taken as far across the envelope as is possible. And in an aviation setting you have to make sure you get as close as possible to them. Extrapolation beyond those data points is life threatening.

It is perfectly reasonable.

How is that similar to climate models and their predictions of the "future"?
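
A minimal sketch of the point, with made-up numbers standing in for real measurements (the data, the fit, and the envelope are purely illustrative):

```python
# Illustrative only: a fit is trustworthy inside the measured envelope,
# but extrapolation beyond the last data point is a different matter.
import numpy as np

rng = np.random.default_rng(0)
x_measured = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # test points across the envelope
y_measured = np.sqrt(x_measured) + rng.normal(0, 0.02, x_measured.size)

coeffs = np.polyfit(x_measured, y_measured, deg=2)            # simple quadratic fit

inside = np.polyval(coeffs, 2.5)     # interpolation: bracketed by measurements
outside = np.polyval(coeffs, 12.0)   # extrapolation: far beyond any measurement

print(f"at x=2.5 (inside envelope):   fit {inside:.2f}, truth {np.sqrt(2.5):.2f}")
print(f"at x=12.0 (outside envelope): fit {outside:.2f}, truth {np.sqrt(12.0):.2f}")
```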

Mar 21, 2012 at 8:20 AM | Unregistered CommenterJiminy Cricket

It's that old devil semantics again! 'Tuning' a model to comply with new factual evidence must be "A Good Thing". 'Tuning' a model to suit your preconceived ideas is "A Bad Thing".

Mar 21, 2012 at 8:35 AM | Unregistered CommenterDavid Duff

Tuning to the global average temperature is not really tuning to a real-world measurement per se; it is tuning to a suspect number, and predicting the future of that number is all very well, but it is not an indicator of skill. Tuning by El Niño, AMO, PDO is something else again: there is no measurement involved, except the timing of a state change. Can they hindcast the state changes? Can they then forecast them better than by eyeballing a graph?

Mar 21, 2012 at 8:36 AM | Unregistered CommenterRhoda

Aren't climate scientists criticised for this?

Certainly a fair question. I'll ask my own: have any models been tuned to the divergent tree record?

More important than models getting tuned is what evidence the tuning is informed by.

Mar 21, 2012 at 8:47 AM | Unregistered CommenterGareth

To be successful, modelling must include enough of the laws of physics without omission of a major factor. I have found empirical evidence of 99-year and 150-year time lags between changes in solar activity and the responses of the AMO and PDO respectively.

http://endisnighnot.blogspot.com/#!/2012/03/lets-get-sorted.html

If this speculation does correspond to a major factor (I'm working on a testable hypothesis to explain this), its omission from the models invalidates them.

Mar 21, 2012 at 8:47 AM | Unregistered CommenterBrent Hargreaves

Hmm. Messed up me blockquotes there.

Aren't climate scientists criticised for this?


Certainly a fair question. I'll ask my own: have any models been tuned to the divergent tree record?

More important than models getting tuned is what evidence the tuning is informed by.

Mar 21, 2012 at 8:51 AM | Unregistered CommenterGareth

But the latest post by Bob Tisdale at WUWT shows that the models don't get the past right.

Mar 21, 2012 at 8:56 AM | Registered CommenterPaul Matthews

Mar 21, 2012 at 8:51 AM | Gareth

Have any models been tuned to the divergent tree record?

No - it would be a silly thing to do given that we have meteorological observations.

More important than models getting tuned is what evidence the tuning is informed by.

Meteorological observations (including both surface measurements and also satellite measurements of the planetary energy budget - getting the latter right is critical in order to prevent the model drifting spuriously.)

BTW GCMs are only tuned against the present-day state and not against the historical change - that would be too hard as it takes too long to keep repeating the simulations, and would also generate a circular argument in using the models for explaining past changes.

Cheers

Richard

Mar 21, 2012 at 9:07 AM | Registered CommenterRichard Betts

"But the latest post by Bob Tisdale at WUWT shows that the models don't get the past right."

They don't and never have; they have consistently given higher temperatures. The methodology for getting them right is to introduce aerosols into the models - "tuning" them, I suppose. The problem is that if you have a model which uses aerosols as a cooling factor, and your hindcast shows the models warmer than the actual recorded temperatures, then increasing the aerosols to get them to the right temperature isn't very scientific, in my view anyway.

You have to have pity for the poor old modellers though: no sooner have they fudged the models to fit the recorded temperatures than along come GISS and CRUTEM with "adjustments" downward of the earlier recorded temperatures, in their attempts to "tune" the temperature records to show warming in the late 20th and now early 21st centuries.

Oh what a tangled web...

Mar 21, 2012 at 9:11 AM | Unregistered Commentergeronimo

The tuning is done by inventing imaginary corrections.

The first is that they claim the optical depth of low level clouds [which contribute most cloud albedo] is twice the real level. In this way they disguise the total cock up of the heat transfer assumptions which magnify real energy input by a factor of nearly three on the basis of imaginary 'back radiation' from GHGs absorbing IR and heating the air.

The second is to claim that pollution makes clouds reflect more solar energy when in reality the balance of two effects causes cloud to reflect less.

With this number of disposable parameters plus the latest data fiddling by the CRU guys you could even hide the decline: tell your contact he lives amongst a nest of scientific vipers and thieves.

Mar 21, 2012 at 9:25 AM | Unregistered Commentermydogsgotnonose

Have any models been tuned to the divergent tree record?

No - it would be a silly thing to do given that we have meteorological observations.

Richard, the original question was a valid one: if the Hockey Stick was so accurate, why not use it as a test for the models?

Your answer says a lot about the modellers' thoughts on the accuracy of the proxies used in the Hockey Stick and all the related papers.

Hope you are not on Michael's Christmas card list ;)

Mar 21, 2012 at 9:25 AM | Registered CommenterBreath of Fresh Air

that would be too hard as it takes too long to keep repeating the simulations

Careful Richard...jokes about drunks and lampposts might spring to mind! 8-)

Mar 21, 2012 at 9:27 AM | Registered Commenteromnologos

I suspect your tweeter Chris Hope has a very naive view of climate models if he thinks that tuning is somehow 'not the done thing'. It looks like he has specialised in believing GCM 'projections', as selected and refined by the IPCC as best suited to its purposes, and then developing the policy implications of them. I guess there were good, conscientious folks like him in royal courts in the Middle Ages, developing contingency plans for their monarchs every time a millennial cult attracted a decent following, so that talk of approaching doom was all the rage. Plus ça change....

Mar 21, 2012 at 9:56 AM | Registered CommenterJohn Shade

Climate scientists are criticized for tuning the data to the models, which isn't quite the idea.

Mar 21, 2012 at 10:02 AM | Unregistered CommenterSandy

It seems obvious from the quote that there is a difference of opinion on the definition of "real world measurements".

Andrew also stated "One big problem with ascertaining the accuracy of computer simulations is that you generally have to have some idea of what the answer should be, so that you can compare the calculated solution."
And we all know from observations - including Climategate, the latest CRU temperature output for Rio (i.e. yet another revision of the past to provide the "correct" mean for 'the cause') and the plethora of climate propaganda - what idea the "team" have of what the answer should be.

Andrew also stated "I hope it can be seen that this process is 'absolutely riddled' with scope for errors, incorrect assumptions, and erroneous simplifications." Which is where some of the pertinent "criticism" Chris Hope, (somewhat piously it seems imagines is not valid) originates.

Further quotes from Andrew that Chris Hope neglected to mention:

"Even if the model does converge to a solution - it does not mean that this is a correct (or accurate) one."

"Bear in mind that the process (simplistically) outlined above must be undertaken for each physical attribute being investigated, and it can be seen that this is a hugely non-trivial problem (for an atmospheric model)" Known unknowns, unknown unknowns included/omitted from these "models" anyone?

If Chris Hope really wants to engage, he should engage here, rather than sniping pious one-liners via Twitter, IMO.

Mar 21, 2012 at 10:03 AM | Unregistered CommenterFrosty

The physics of the models should be adequate so that tuning is not necessary. I have to agree with Richard in that all that is needed is that the initial conditions are in reasonable agreement with today's measurements.

Mar 21, 2012 at 10:05 AM | Unregistered CommenterPhillip Bratby

Climate modellers, like any modellers, will be criticised for claiming that "hindcasts" represent any test of their efficacy.

The "fine tuning" to past data is the reason why they are subject to such criticisms.

Fit your model to historical data, fine. But the result is simply a fit to historical data, not a tool with any proven predictive ability - which means it does not yet provide any validation whatsoever for the theories built into it.

Mar 21, 2012 at 10:28 AM | Unregistered CommenterGeckko

A very good book on the subject is:

http://www.amazon.com/The-Predictors-Maverick-Physicists-Fortune/dp/0805057579/ref=sr_1_1?ie=UTF8&qid=1332325586&sr=8-1

Besides being highly entertaining, I found it useful in informing me on the capabilities of models when applied to complex systems. The by-line is misleading though. They did not actually "trade their way to a fortune". They ultimately failed in their attempts.

Mar 21, 2012 at 10:28 AM | Unregistered CommenterWill Nitschke

I think you have to be careful with the meaning of the word "tuning" the model to get it right. In my world of modelling I have a number of parameters I may need to set correctly in order for the forward model prediction to match the data. You could call this "tuning" or you could call it "calibration" or some such. However, the parameters I use for this are physical, ie they form part of the physics of my model and I have a good idea of the range for the parameter and can therefore see when the "tuning" may be non-physical and therefore invalid. This would then set off warning bells that there is something happening with the data that means the selected forward model I have used is not appropriate.
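
A minimal sketch of that workflow, with a made-up forward model and a made-up "physically plausible" range (nothing here is specific to any real model):

```python
# Illustrative only: calibrate one physical parameter, then check whether the
# best-fit value stays inside the range we believe is physically plausible.
import numpy as np
from scipy.optimize import curve_fit

def forward_model(x, k):
    """Toy forward model: exponential decay governed by a single rate k."""
    return np.exp(-k * x)

PLAUSIBLE_RANGE = (0.1, 2.0)   # assumed physical range for k (hypothetical)

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 30)
y_obs = forward_model(x, 0.8) + rng.normal(0, 0.01, x.size)   # synthetic "data"

(k_fit,), _ = curve_fit(forward_model, x, y_obs, p0=[1.0])

if PLAUSIBLE_RANGE[0] <= k_fit <= PLAUSIBLE_RANGE[1]:
    print(f"k = {k_fit:.3f}: inside the plausible range -- calibration, not a fudge")
else:
    print(f"k = {k_fit:.3f}: outside the plausible range -- warning bells, model or data suspect")
```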

However, there is another form of tuning and I suspect this must occur with climate models - these are not really tuning parameters but "fudge factors". The aerosol parameter used to produce cooling from the 1940s to the 1970s is probably one of these. It sounds OK, but it's really an arm-wavy, unknown factor that is arbitrarily added to the model to improve the fit. Given the complexity of the climate model problem and the complexity of the equations, there must be many "tuning" parameters like this that do not have proper physical justification. "Tuning" a badly defined model with free parameters to make it fit the data is a tautology. The model may fit the data, but it will be a very poor predictor at unmeasured locations. Yet the model may appear better because the introduction of the extra parameter makes the fit to observations improve and the residuals (erroneously called "uncertainty") go down.

In this context a forward model that fits the observations is a requirement of a good model, but it is not sufficient justification that the model is in any way valid. Model validity is tested by making predictions at unmeasured locations (or times, e.g. the future) and then comparing with the actual result at that location (or future time).

An example would be fitting a high order polynomial function to data on a crossplot. If the actual relationship is essentially linear, but the data have noise and hence are a bit scattered, the high order polynomial will fit to the noise as well as the trend. Comparison to observations shows the fit to be "better" (smaller residuals) when using the polynomial rather than a straight line, but the predictive capability of the polynomial when extrapolated will be very poor and unstable. I strongly suspect this is analogous to the problem with climate models.
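
A minimal sketch of that crossplot example (synthetic data; the linear "truth" and the noise level are invented for illustration):

```python
# Illustrative only: a high-order polynomial fits the noise better in-sample,
# but a straight line extrapolates far more sensibly.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 15)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, x.size)   # essentially linear truth plus noise

linear = np.polyfit(x, y, deg=1)
poly9 = np.polyfit(x, y, deg=9)   # enough free parameters to chase the noise
                                  # (NumPy may warn about conditioning - that is part of the point)

def rss(coeffs):
    return np.sum((y - np.polyval(coeffs, x)) ** 2)

print(f"in-sample residuals -- linear: {rss(linear):.1f}, degree-9: {rss(poly9):.1f}")

x_new = 15.0                      # beyond the data: an "unmeasured location"
print(f"prediction at x={x_new}: truth ~{2 * x_new + 1:.0f}, "
      f"linear {np.polyval(linear, x_new):.1f}, degree-9 {np.polyval(poly9, x_new):.1f}")
```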

Mar 21, 2012 at 10:37 AM | Unregistered CommenterThinkingScientist

I used to love placing the decals on my models.

Mar 21, 2012 at 10:43 AM | Unregistered CommenterAnoneumouse

Here's a thought. Since

(1) as far as I know, the climate system has been behaving pretty much as if the growth in ambient CO2 levels was of negligible effect

(2) simpler models (as per our old chum Occam) are generally preferable to unnecessarily complicated ones

Would it not be a good idea to set the models up with no linkage at all, zero, zilch, to rising CO2 levels, and then see what tuning can be done, and what kind of predictive skill they present? I'd also like to see cloud parameterisation linked in various ways to speculations or forecasts of cosmic ray levels as one of the things to play with.

But as far as I know, all the models have been funded to pursue the supposition that additional CO2 is a big deal. For some reason, governments seem hugely exercised by that, and have been happy to cough up lots of loot for it. Is there anyone rich enough to fund alternative modelling fun and games? Where's Big Coal et alia when you need them?

Mar 21, 2012 at 11:06 AM | Registered CommenterJohn Shade

C'mon everyone, be nice. Chris has made a point - hardly sniping.

Mar 21, 2012 at 11:18 AM | Registered CommenterBishop Hill

RE: Richard Betts:

"BTW GCMs are only tuned against the present-day state and not against the historical change - that would be too hard as it takes too long to keep repeating the simulations, and would also generate a circular argument in using the models for explaining past changes."

I can understand and agree with this statement but it prompts further questions:

1. Do you choose a baseline date at which you have many observations for the model initialisation or do you use a range of modern-era dates for model initialisation? The latter would make the fit in the present look rather good.
2. How can you be sure that there is sufficient data at the chosen initialisation time and how sensitive are the model runs to initial conditions?
3. If you add just a few extra data points to the initialisation state, how much difference does it make to the run?
4. Are the models actually run backwards (time step negative)?
5. How many parameters are available to be adjusted in a typical GCM? Here I am thinking of the more arbitrary and perhaps less physical parameters such as "increases in aerosols 1940 - 1970" or similar?
6. Is it possible to get a good fit to data by adjusting the parameters referred to in (5) whilst neglecting the CO2 warming effect?
7. Is it possible to get a good fit to data by removing the strong positive feedback of water vapour?
8. How do you update the models to get a good fit on hindcast now that the historical temperatures have been made colder again in the latest global temperature series and the slope of temperature change over the C20th has increased as a result? Which parameters would be changed to make this fit?

Lots of questions, I know, but it would be interesting to hear more about these models.

Look forward to hearing your responses.

Mar 21, 2012 at 11:47 AM | Unregistered CommenterThinkingScientist

Phillip Bratby

...all that is needed is that the initial conditions are in reasonable agreement with today's measurements
I agree, but what then? You actually have to tell it to do something, otherwise it will just sit there and stare back at you. Is the problem not what the modellers tell it to do?
You can say "apply the laws of physics to these bits of data", but how can we possibly have any confidence in the output from the model if (as, for example, mydogsgotnonose claims) there is a dispute about what the laws of physics actually are and climate scientists are misunderstanding the optical depth of clouds, or if (as has suddenly become popular in certain quarters under the generic term 'slaying the sky dragon') we have totally misunderstood the relationship between atmosphere and space, or if various other arguments that seem to be developing about the positivity or negativity of feedbacks turn out to be right?
And when you start making claims like "without CO2 it doesn't work" and even with CO2 it doesn't work unless you start to fling in arbitrary figures for aerosols, why should anybody believe a word you say?
My response these days to the argument that the Climategate emails "don't alter the science" is, "what science would that be, then?"

{The 'yous' are not directed at you, personally, Phillip, you understand!}

Mar 21, 2012 at 11:57 AM | Registered CommenterMike Jackson

Is not Chris Hope thinking that the models so tuned provide evidence of climate behaviour?
If so then Hope does not understand that models, however well tuned, are not evidence of anything.

Mar 21, 2012 at 11:59 AM | Unregistered CommenterPhilip Foster

Richard Betts said:

No - it would be a silly thing to do given that we have meteorological observations.

Yes. It was a snarky suggestion as trees are not thermometers.

Thanks for the explanation on tuning. It's not what I expected though. Do you mean the models are tuned to best achieve the present day conditions if started from some earlier date rather than to reasonably ape the instrument records?

Mar 21, 2012 at 12:21 PM | Unregistered CommenterGareth

But the latest post by Bob Tisdale at WUWT shows that the models don't get the past right.

Mar 21, 2012 at 8:56 AM | Paul Matthews

No problem. They just change the past to fit the models. Easy.

Mar 21, 2012 at 12:28 PM | Unregistered CommenterJimmy Haigh

How many climate models, in 2000, were predicting that the global average temperature would level off for the next decade?

Mar 21, 2012 at 1:13 PM | Registered CommenterMartin A

It seems to me that the data is tuned to make the model output the desired confession...

A bit like the Inquisition: the result is decided before they ask the questions. It's the data being tortured for now, but with green fascism who knows how long it will be just the data.

Mar 21, 2012 at 1:38 PM | Unregistered Commenterac1

Mar 21, 2012 at 1:38 PM ac1

..... make the model output the desired confession...

I think that's the thing. You can tweak a model so that a simple model fits a complicated reality (e.g. you fit a second-order differential equation to a thing that has umpteen resonant frequencies, because you are only interested in what happens around the principal resonant frequency, where the thing might shake itself to bits).

That's a different matter from tweaking a model until it gives the "right" result.
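
A minimal sketch of that kind of legitimate tweak, with invented resonances (10 Hz principal, 60 Hz secondary) and a deliberately simple one-mode model fitted only over the band of interest:

```python
# Illustrative only: fit a single second-order resonance to a "reality" that has
# two resonances, restricting the fit to the band around the principal one.
import numpy as np
from scipy.optimize import curve_fit

def one_mode(f, f0, zeta, a):
    """Magnitude response of a single second-order resonance."""
    r = f / f0
    return a / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

f = np.linspace(1, 100, 400)
# Invented "complicated reality": principal mode at 10 Hz plus a secondary mode at 60 Hz
response = one_mode(f, 10.0, 0.05, 1.0) + one_mode(f, 60.0, 0.08, 0.3)

band = (f > 5) & (f < 20)   # only the region we actually care about
popt, _ = curve_fit(one_mode, f[band], response[band], p0=[10.0, 0.1, 1.0])

print("fitted (f0, zeta, a):", np.round(popt, 3))
# The simple model is useful near 10 Hz; it says nothing about the 60 Hz mode it never saw.
```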

Mar 21, 2012 at 1:54 PM | Unregistered CommenterMartin A

Pielke Snr keeps a close eye on model verification matters, and is generally profoundly unimpressed by the models as vehicles for prediction. He has a post on his site today exposing a recent article on model calibration (aka tuning, or, as some would say, fudging) as little more than a fund-raising effort: http://pielkeclimatesci.wordpress.com/2012/03/21/comments-on-the-bams-article-calibration-strategies-a-source-of-additional-uncertainty-in-climate-change-projections-by-ho-et-al-2012/

He concludes 'these studies are not only a waste of money and time, but are misleading stakeholders and policymakers on our actual knowledge level of the climate system. In terms of the image that I posted at the top, this means that model runs used to create that figure (and others like it) are not scientifically robust.'

I wonder what Chris Hope's 'knowledge level' is of what 'our actual knowledge level of the climate system' is? Not to single him out particularly - he just happens to have triggered this post and thread of comments. The question applies to all those who explore the implications of IPCC alarums as if they were worth taking very seriously indeed.

Mar 21, 2012 at 2:18 PM | Registered CommenterJohn Shade

Genuine scientists, not to mention genuine modelers, do not "tune" anything. Yes, the problem here is semantics. But Climate Scientists introduced the problem. Only modelers who do not know what they are doing or who are hiding what they are doing would use the word 'tune'.

The word 'tune' is used to hide the fact that the modeler has no clue as to the logical or methodological relationships between his model and his independently verified data.

A modeler might want to know if his model can reproduce his past data. The modeler might rejigger his model for years until it can reproduce the past data. Having "tuned" his model, what he has accomplished, hopefully, is that he now has a computer program that is highly efficient at organizing data, both past and future, and that permits him to create scenarios regarding past and future.

However, he is completely mistaken in the belief that this "tuning" amounts to something more than fitting a curve to data on a graph. He is completely mistaken in his belief that the ability to reproduce past data "validates" the model in any way whatsoever. He is completely mistaken that the "validated" model can now be used for prediction. The word 'tune' hides all these facts.

You would think that at some point just plain old embarrassment would stop climate scientists from claiming virtue for 'tuning'. Imagine a conversation:

Critic: Do tell me, Sir, what are the logical or methodological relationships between your model and your data?
Climate Scientist: Oh yes, well, my model has been tuned to my data.
Critic: Would you care to explicate this idea of tuning?
Climate Scientist: That is what modelers do. You have to engage in modeling as we do if you are to appreciate tuning.
Critic: Ah, I see.

Mar 21, 2012 at 2:23 PM | Unregistered CommenterTheo Goodwin

Mar 21, 2012 at 9:07 AM | Richard Betts

"BTW GCMs are only tuned against the present-day state and not against the historical change - that would be too hard as it takes too long to keep repeating the simulations, and would also generate a circular argument in using the models for explaining past changes."

Great Caesar's Ghost, someone who is willing to talk about scientific methodology. In my humble opinion, no model has ever explained anything and none will. In science, if "X explains Y" then "X" is a set of well-confirmed hypotheses that can be used in combination with statements of initial conditions to predict "Y." The logical relationship is one of implication. The combination of hypotheses and initial conditions genuinely implies the prediction. The methodological relationship is that the hypotheses are tested through each and every prediction. I take it that "tuning" accomplishes none of this.

I do not mean to pick on you, Dr. Betts. You remain the last great hope of Climate Modeling. Also, I am aware that when referring to models you are usually referring to your own work which seems far more reasonable than the usual fare for criticism on this site.

Mar 21, 2012 at 2:41 PM | Unregistered CommenterTheo Goodwin

Most of the time, I see climate scientists deny tuning at all. The models are simply based on physics. Tamino said they are based on everything we know about the physical world, and that curve fitting is an absurd description.

On the other hand, in one post RealClimate casually mentions a model tuned to produce a 3C sensitivity.

Mar 21, 2012 at 3:19 PM | Unregistered CommenterMikeN

By all means, when producing a method for computing the number of angels on the head of a pin, rush around in a white coat measuring all the pins you have access to. And similarly for phrenology, phlogiston calculations, astrology, and AGW climatology. Have your acolytes chant as you make the measurements. This is called "tuning the models."

Mar 21, 2012 at 5:23 PM | Unregistered Commenterjorgekafkazar

This is something we sceptics have always said - maybe not so directly as in this post below at WUWT a few weeks ago, but it is obvious to anyone with half a brain that global temperature has varied a lot in the past, long before we were introducing CO2. After all, the Hockey Stick fraud served to divert attention away from the obvious conclusion.

http://wattsupwiththat.com/2012/02/22/omitted-variable-fraud-vast-evidence-for-solar-climate-driver-rates-one-oblique-sentence-in-ar5/#more-57212

Mar 21, 2012 at 7:51 PM | Registered Commenterretireddave

Tuning the model to fit the data is one thing, tuning the data to fit the model is another thing.

Mar 21, 2012 at 10:08 PM | Unregistered CommenterJohn A

ThinkingScientist wins the thread. Very good post.

Mark

Mar 21, 2012 at 10:58 PM | Unregistered CommenterMark T

Thanks Mark!

I am a little disappointed that there has been no response from Richard Betts in answer to any of my questions above. Perhaps he is busy and hasn't seen them yet.

Mar 21, 2012 at 11:13 PM | Unregistered CommenterThinkingScientist

Perhaps.

I think your post highlighted what those of us required to model real-world phenomena (and use such models for predictive purposes) inherently know: the results are immediately suspect at the first failed prediction. Not that you said as much directly, but justification requires repeatable success, and a single failure casts doubt on every success.

"Why did it work then and not now?" "How often will this failure occur relative to success?" "Is this failure more indicative of reality implying I got lucky when my flawed model caught it?" The latter is a sort of ghost in the machine question that plagues system designers incessantly. There is nothing quite like the error that occurs once per billion iterations, taking a week in model time, knowing that your system needs to pull off the same billion every second. Do you release it?

I am fortunate in that my modeling exercises tend toward simpler, much better understood, physical systems (generally RF signals for comm/radar). As a result, it is not difficult (rather, not impractical) to test reality against my model. "Tuning" is more of an exercise in tinkering within component tolerances, atmospheric assumptions, and variances expected from the algorithm itself based on all of the above. Then I look at errors in my implementation. Even with confirmation of everything, there are nuances that appear requiring complete re-evaluation of any, or all, of the above.

Mark

Mar 21, 2012 at 11:55 PM | Unregistered CommenterMark T

@Richard Betts

re your: "BTW GCMs are only tuned against the present-day state and not against the historical change - that would be too hard as it takes too long to keep repeating the simulations, and would also generate a circular argument in using the models for explaining past changes."

This is not meant with any sarcasm, simply curiosity, but I don't see how a GCM can be considered plausible if it can't first be tested on some appropriately wide range of historical data.

I'm only a simple-minded non-scientist, so I have to confine my attentions to what Joe-citizens like me, and our policy-makers, can try to assess about the various competing claims on our attention and support, our laws and tax dollars/euros etc. I don't see why the problem is one of "circularity" -- I would want to know GCMs have been tested/validated against whatever data we believe to be reliable before I would trust a model to predict the future....

I would have thought (looking at the matter from afar) that any GCM would have to be able to reproduce accurately the historical record as we know it to have even a chance of being a plausible contender for predicting the future. i.e., how can any GCM be said to predict if it has not been "validated" (don't know if that is the right word) against the existing data record?

I don't see how one can escape that issue no matter how difficult or expensive it may be to do simulations of past changes etc.

btw, thanks for all your contributions here, and don't feel a need to explain to a non-technical newbie like me, but maybe this comment can suggest an explanation that might satisfy the more scientifically minded posters here....

Mar 22, 2012 at 4:12 AM | Registered CommenterSkiphil

Climate models pass the Sagan test for pseudoscience

Pseudoscience is just the opposite [of science]. Hypotheses are often framed precisely so they are invulnerable to any experiment that offers a prospect of disproof, so even in principle they cannot be invalidated. Practitioners are defensive and wary. Skeptical scrutiny is opposed

Mar 22, 2012 at 6:36 AM | Unregistered CommenterJack Hughes

If climate models cannot reliably be used to predict average tree ring width (regional or global), how can they reliably predict average temperature?

Mar 22, 2012 at 6:41 AM | Registered Commentermatthu
