Friday, Jan 30, 2015

On tuning climate models

I had an interesting exchange with Richard Betts and Lord Lucas earlier today on the subject of climate model tuning. Lord L had put forward the idea that climate models are tuned to the twentieth century temperature history, something that was disputed by Richard, who said that they were only tuned to the current temperature state.

I think to some extent we were talking at cross purposes, because there is tuning and there is "tuning". Our exchange prompted me to revisit Mauritsen et al., a 2012 paper that goes into a great deal of detail on how one particular climate model was tuned. To some extent it supports what Richard said:

To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850-1880 observed global mean temperature of about 13.7°C [Brohan et al., 2006]...

We tune the radiation balance with the main target to control the pre-industrial global mean temperature by balancing the [top of the atmosphere] TOA net longwave flux via the greenhouse effect and the TOA net shortwave flux via the albedo effect.
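
To make concrete what tuning in this literal sense involves, here is a zero-dimensional energy-balance sketch of my own - nothing to do with the MPI model itself, and with round illustrative numbers - in which the 'knob' is the planetary albedo and the tuning is simply solving for the value that puts the equilibrium temperature on the chosen target:

```python
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m-2 K-4
S0 = 1361.0             # solar constant, W m-2
EPSILON = 0.61          # effective emissivity, standing in for the greenhouse effect
TARGET = 273.15 + 13.7  # the 13.7 C pre-industrial target, in kelvin

# Zero-dimensional energy balance at equilibrium:
#   absorbed shortwave = emitted longwave
#   (S0 / 4) * (1 - albedo) = EPSILON * SIGMA * T**4
# "Tuning" here is just choosing the albedo that makes T equal the target.
albedo = 1.0 - 4.0 * EPSILON * SIGMA * TARGET**4 / S0
check = (S0 * (1.0 - albedo) / (4.0 * EPSILON * SIGMA)) ** 0.25
print(f"tuned albedo = {albedo:.3f}  ->  equilibrium T = {check - 273.15:.1f} C")
```

A real GCM has many such knobs (cloud and convection parameters especially), and the tuning is iterative rather than a one-line solve, but the logic is the same: the 13.7°C is an input to the exercise, not a result.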

OK, they are targeting the start of the period rather than the end, but I think that still leaves Richard's point largely intact. However, Mauritsen et al. also say this:

One of the few tests we can expose climate models to, is whether they are able to represent the observed temperature record from the dawn of industrialization until present. Models are surprisingly skillful in this respect [Raisanen, 2007], considering the large range in climate sensitivities among models - an ensemble behavior that has been attributed to a compensation with 20th century anthropogenic forcing [Kiehl, 2007]: Models that have a high climate sensitivity tend to have a weak total anthropogenic forcing, and vice-versa. A large part of the variability in inter-model spread in 20th century forcing was further found to originate in different aerosol forcings.

And, as they go on to explain, it is quite possible that a kind of pseudo-tuning - I will call it "tuning" - is going on through the choice of aerosol forcing history used (my emphasis):

It seems unlikely that the anti-correlation between forcing and sensitivity simply happened by chance. Rational explanations are that 1) either modelers somehow changed their climate sensitivities, 2) deliberately chose suitable forcings, or 3) that there exists an intrinsic compensation such that models with strong aerosol forcing also have a high climate sensitivity. Support for the latter is found in studies showing that parametric model tuning can influence the aerosol forcing [Lohmann and Ferrachat, 2010; Golaz et al., 2011]. Understanding this complex is well beyond our scope, but it seems appropriate to linger for a moment at the question of whether we deliberately changed our model to better agree with the 20th century temperature record.

They conclude that they did not, but effectively note that the models that find their way into the public domain are only those that, by luck, design or "tuning", match the 20th century temperature record.
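
It is easy to see how forgiving that test is. In a toy linear picture of my own (made-up round numbers, not anything from the paper or from any actual model), twentieth-century warming is just sensitivity times net forcing, so any number of sensitivity/aerosol pairings reproduce the same record:

```python
# Toy linear picture: warming = sensitivity * (GHG forcing + aerosol forcing).
# All figures are made-up round numbers for illustration only.
OBSERVED_WARMING = 0.8  # K over the 20th century
GHG_FORCING = 2.5       # W m-2 from greenhouse gases

for sensitivity in (0.3, 0.4, 0.6, 0.8):  # K per W m-2
    # Solve for the aerosol forcing this "model" needs to reproduce the record
    aerosol = OBSERVED_WARMING / sensitivity - GHG_FORCING
    warming = sensitivity * (GHG_FORCING + aerosol)
    print(f"sensitivity {sensitivity:.1f}: aerosol {aerosol:+.2f} W m-2"
          f" -> warming {warming:.1f} K")
```

Every line prints the same 0.8 K, and the higher the sensitivity, the stronger the offsetting aerosol cooling required - which is exactly Kiehl's anti-correlation. A successful hindcast, by itself, cannot tell these "models" apart.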

In conclusion, then, Lord Lucas's original point was in essence correct, so long as you include both tuning and "tuning". Richard's point was correct if you only include tuning.

 


Reader Comments (74)

Bish

We don't choose an aerosol forcing history - the radiative forcing from aerosols is something that is simulated by the models, not something that is imposed.

Jan 30, 2015 at 12:52 PM | Registered Commenter Richard Betts

You are right of course. I'll strikethrough the word aerosol.

Jan 30, 2015 at 1:03 PM | Registered Commenter Bishop Hill

Am I right apart from that?

Jan 30, 2015 at 1:04 PM | Registered Commenter Bishop Hill

Heh, it doesn't matter which meaning of tuning applies, the models are wrong in either case.
================

Jan 30, 2015 at 1:29 PM | Unregistered Commenter kim

Please, deliberated or not, by whatever mechanism, aerosols are the fudge. Dispute that.
=====================

Jan 30, 2015 at 1:32 PM | Unregistered Commenter kim

If climate models are tuned to the 20th century, how can their close match with observations over the second half of the 20th be used in attribution, as the IPCC does?

Jan 30, 2015 at 1:39 PM | Registered Commenter shub

Forgive me if I paraphrase my comment from the Marotzke's Mischief thread ...

why anyone with the standard issue of brain cells should assume that past performance in a situation as chaotic as climate should be a reliable guide to the future ... beggars belief.
... and add another thought from Kevin Marshall on the same thread:
The "reality" to which the climate models are tuned to is largely composed of surface temperature data sets. For a number of countries there is evidence of adjustment biases that tune the empirical data to the models.
As the financial advisers are now forced by law to say: "Past performance is no guarantee of future performance". Just how long is it going to take for that message to penetrate?

Jan 30, 2015 at 1:41 PM | Registered Commenter Mike Jackson

If they're tuned to the 20thC temperature history, no wonder their models fail so badly.

If they tuned them to the pre-adjusted numbers, they might stand a better chance!

We will recall the recent Hay paper on sea levels, which concluded that sea level rise was much smaller during the 20thC, as the drivers - melting glaciers, SSTs etc - could not account for all of the rise originally presumed. They worked, of course, on the much adjusted version of 20thC temperatures.

If they had worked on the original data, showing a much warmer climate up to the mid 20thC and fast-melting glaciers then, they would have come to different conclusions.

https://notalotofpeopleknowthat.wordpress.com/2015/01/18/new-paper-on-sea-level-rise-adjusts-the-data-to-match-the-models/

Jan 30, 2015 at 2:03 PM | Unregistered Commenter Paul Homewood

Martin A @ 6:51 PM, Jan. 29, 2015 on the first leg of the Marotzke thread, near the bottom, has an excellent, pertinent comment. I'll quote the pearl:

"In making climate models, past observations are used in all sorts of ways to "parameterize" the simple formulas that represent the parts that are not well understood and also to adjust in various ways the parts that are moderately well understood."
============================

Jan 30, 2015 at 2:06 PM | Unregistered Commenter kim

If the tuning is done in an earlier model, doesn't it still count as tuning?

For instance, when the BBC had a go at cloud computing back in 2006, they stopped their climate model experiment almost immediately because the models were running way too hot. They adjusted the aerosols and relaunched. If the knowledge gleaned from any study like that is used in later models, then there was tuning. Any model that falls by the wayside is a form of tuning.

Jan 30, 2015 at 2:10 PM | Unregistered Commenter TinyCO2

'the radiative forcing from aerosols is something that is simulated by the models'

Mr Betts, are these models self-aware, or are they only simulating what they are told, how they are told to do it?

The models are in fact just number crunchers - expensive ones, true, given they take £97 million of computing power to run - but they remain unable to do what any five-year-old can do without any thought, because they have no ability to do anything but what they are told to. And therein lies the issue: what do they get told, and why are they told it in the first place? And that is before we get to how well the radiative forcing from aerosols is actually, rather than theoretically, known.

Jan 30, 2015 at 2:18 PM | Unregistered Commenter KnR

James Lovelock in the Guardian


on computer models

I remember when the Americans sent up a satellite to measure ozone and it started saying that a hole was developing over the South Pole. But the damn fool scientists were so mad on the models that they said the satellite must have a fault. We tend to now get carried away by our giant computer models. But they're not complete models.

They're based more or less entirely on geophysics. They don't take into account the climate of the oceans to any great extent, or the responses of the living stuff on the planet. So I don't see how they can accurately predict the climate.

on predicting temperatures


If you look back on climate history it sometimes took anything up to 1,000 years before a change in one of the variables kicked in and had an effect. And during those 1,000 years the temperature could have gone in the other direction to what you thought it should have done. What right have the scientists with their models to say that in 2100 the temperature will have risen by 5C?

The great climate science centres around the world are more than well aware how weak their science is. If you talk to them privately they're scared stiff of the fact that they don't really know what the clouds and the aerosols are doing. They could be absolutely running the show.

We haven't got the physics worked out yet. One of the chiefs once said to me that he agreed that they should include the biology in their models, but he said they hadn't got the physics right yet and it would be five years before they do. So why on earth are the politicians spending a fortune of our money when we can least afford it on doing things to prevent events 50 years from now? They've employed scientists to tell them what they want to hear.


on scientists

Sometimes their view might be quite right, but it might also be pure propaganda. This is wrong. They should ask the scientists, but the problem is scientists won't speak. If we had some really good scientists it wouldn't be a problem, but we've got so many dumbos who just can't say anything, or who are afraid to say anything. They're not free agents.

http://www.guardian.co.uk/environment/blog/2010/mar/29/james-lovelock

Jan 30, 2015 at 2:26 PM | Unregistered Commenter esmiff

I remember the original climate debates in the Guardian. The corporate side won by simply banning anyone who knew anything about computer models (who generally ridiculed the idea of modelling the climate).

Claiming computers can model the climate is one thing; using them to claim to be able to predict a dangerously hot planet, condemning millions to poverty and some to death, is cruel, callous and frankly criminal.

Global warming. Probably the biggest banking scam in history. Jail the fraud deniers.

http://www.scrapthetrade.com/intro

Jan 30, 2015 at 2:33 PM | Unregistered Commenter esmiff

"If climate models are tuned to the 20th century"

WTF!!!

We must continue to clarify language (even among those of us who claim English as a fluent language).

"TUNED"

If the planes, rail cars, ships and automobiles we board were "TUNED" as climate models were "TUNED", would not climate 'science' computer modelers now be docked at The Hague?

Climate 'science' computer modelers (and CAGW enthusiasts) are not subject to any responsibility and appear to be immune from any actual punitive actions, unlike online and telephone psychics, hairdressers and hot dog vendors, who are held in higher regard and regulated by the state.

I suggest that "TUNED" be clarified for the revisionist implementation that it appears to be given.

Jan 30, 2015 at 2:47 PM | Unregistered Commenter Paul in Sweden


They conclude that they did not, but effectively note that the models that find their way into the public domain are only those that, by luck, design or "tuning", match the 20th century temperature record.

I'm guessing about 97% are discarded?

Jan 30, 2015 at 3:01 PM | Unregistered Commenter SandyS

Models are surprisingly skillful in [representing the observed temperature record from the dawn of industrialization until present] [Raisanen, 2007], considering the large range in climate sensitivities among models ...

If we say that in the real world there is only one correct value for climate sensitivity, doesn't this prove that the models are tuned to reproduce the observed temperature record? "Here is a climate model with sensitivity X that reproduces the temperature history." "Here is a climate model with sensitivity Y that reproduces the temperature history."

Pick your climate sensitivity, we'll provide a model to match.

Jan 30, 2015 at 3:08 PM | Unregistered Commenter Speed

Climate models should be based on physics and then validated. They should not be tuned, which implies they have got the physics wrong (which we know they have).

Jan 30, 2015 at 3:10 PM | Registered Commenter Phillip Bratby

Here is a challenge to the Met office's VERY biggest computer.


What was the average surface air temperature of the mid Atlantic in 1723? What, for that matter, was the average surface air temperature of the Sahara desert in 1723?


I wouldn't trust estimates for average global temperature in 1993, never mind 1723. Ask Anthony Watts for the reason.

Jan 30, 2015 at 3:15 PM | Unregistered Commenter esmiff

It's all much of a muchness. To a greater or lesser extent, they all use the past to model the past.

In most branches of science that, in itself, doesn't get you much credibility for predicting the future.

Jan 30, 2015 at 3:17 PM | Unregistered Commenter michael hart

There is a parallel in physics where everyone knows (knew) quantum electrodynamics was flawed, but on they went, claiming it is the most successful theory in the history of physics. There is an embarrassing, but hilarious lecture by Richard Feynman in New Zealand where he refuses to explain renormalisation because he (its creator) doesn't understand it himself.

LOL !

Freeman Dyson argued that these infinities are of a basic nature and cannot be eliminated by any formal mathematical procedures, such as the renormalization method.

Dirac's criticism was the most persistent. As late as 1975, he was saying

Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation, because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!

Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote the following in 1985:

The shell game that we play ... is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.


https://en.wikipedia.org/wiki/Renormalization#Attitudes_and_interpretation

Jan 30, 2015 at 3:34 PM | Unregistered Commenter esmiff

The parallel is quite close because (as I understand it) Feynman simply plugged the real world 'answers' into the equations.

Jan 30, 2015 at 3:39 PM | Unregistered Commenter esmiff

Help required please on definitions

1. Adjusting=homogenising=changing data to match theories/models

2. Tuning=changing models to match data

3. Changing theories to match common sense and any real data, is not comprehended in climate science.

I may not be right, but is this more accurate than Real Climate Science?

Jan 30, 2015 at 3:42 PM | Unregistered Commenter Golf Charlie

Climate models are merde. We know it, the UKMO knows it and all financial institutions know it but there is tooooo much money resting on them being ACCEPTED as infallible.

Jan 30, 2015 at 3:49 PM | Unregistered Commenter Stephen Richards

The graph is from the NRC report, and is based on simulations with the U. of Victoria climate/carbon model tuned to yield the mid-range IPCC climate sensitivity. Source: http://www.realclimate.org/index.php/archives/2011/11/keystone-xl-game-over/langswitch_lang/en/

Jan 30, 2015 at 3:53 PM | Unregistered Commenter MikeN

Richard Betts, could you explain your 'aerosols are simulated'?
In my experience the aerosol factor was an input to the model, in some cases directly at run time.

Jan 30, 2015 at 3:55 PM | Unregistered Commenter MikeN

I'm reposting here my comment from the tail end of the Marotzke's mischief thread.
__________________________________________________________________________

Martin A
I have always wondered whether that last sentence that you highlighted** is a massive leap of faith or an equally massive non sequitur
.....
Jan 29, 2015 at 2:41 PM Mike Jackson

** Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate which gives scientists confidence that they can also predict the future.

UK Met Office publication: Warming. Climate change - the facts

Well, I imagine they do believe it - it has often been repeated by Met Office staff, including here on BH.

For decades, engineers have recognised the fallacy of "testing on the training data". If you make a system by using samples of statistical data to "tune" or "train" it (essentially you optimise the parameters of the system so that it produces the required behaviour) then, if you use the same data to test the system, you will get hopelessly optimistic estimates of its performance. This applies even if the test data is not identical to the training data but does have statistical dependency on the training data, even if the dependency is quite slight.
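
A minimal numerical sketch of the fallacy, using made-up data (any over-flexible curve fit would do): tune the model on one noisy sample, then compare its error on that same sample against its error on a fresh sample from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
signal = np.sin(2 * np.pi * x)                   # the underlying "real" process
train = signal + 0.2 * rng.standard_normal(20)   # observations used for tuning
fresh = signal + 0.2 * rng.standard_normal(20)   # observations never seen in tuning

# "Tune" an over-flexible model (a high-degree polynomial) to the training sample
coeffs = np.polyfit(x, train, deg=9)
predicted = np.polyval(coeffs, x)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("error on the tuning data:", rmse(predicted, train))  # flatteringly small
print("error on fresh data:    ", rmse(predicted, fresh))   # noticeably larger
```

The first number flatters the model because it was fitted to that very noise; only the second says anything about predictive skill.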

In making climate models, past observations are used in all sorts of ways to "parameterise" the simple formulas that represent the parts that are not well understood and also to adjust in various ways the parts that are moderately well understood. Information from observed data creeps in all over the place, including affecting the assumptions made and then incorporated into the models in ways that the creators of the models may not even be conscious of.

No audit trail is kept of the way past data is used in creating climate models. It would be impossible to take one of today's climate models and eliminate from it any and all information that had been derived from observations of atmosphere and climate after, say, 1970 (or any other specific date, up to the present day). So any testing of such models against past climate invariably involves the "testing on the training data" fallacy.

What to make of the Met Office's comment, clearly intended to be taken seriously? I think that, to be charitable, notwithstanding the cost of their supercomputers and the hundreds of qualified staff involved, they don't know their arse from their elbow when it comes to using computer models to predict future climate. Or at least in recognising and admitting their inability to validate their models.
________________________________________________________________________________________
Or in recognising the futility of unvalidated models.

As I have said here before, although I am far from the first person to point it out, an unvalidated model, if you believe and act on its results, is worse than having no model.

Jan 30, 2015 at 4:10 PM | Registered Commenter Martin A

When I first saw the term "anomaly" used for the temperature record I wondered how the "normal" from which it departs was chosen.

Seems to me that the discovery of chaos theory has been ignored by climatology; that its practitioners are knowingly or unknowingly reverting to the deterministic view of nature which was ditched at the turn of the 19th Century. Since careers depend on a mathematical fallacy they are unlikely to admit that they're p*ss*ng in the wind.

Jan 30, 2015 at 4:19 PM | Unregistered Commenter Brent Hargreaves

Jan 30, 2015 at 3:34 PM | Unregistered Commenter esmiff

I feel there are probably parallels between Feynman's renormalisation and Heaviside's operational calculus which he developed in the 19th century. At the time, he was derided by Cambridge mathematicians because of the lack of rigour of his work and his readiness to do things like use expansions in power series of d/dt - mathematical nonsense according to his learned critics.

Heaviside's reply was that he regarded maths as an experimental science - he could verify that his methods gave the correct answers but he could not explain why they worked. Then, later, along came Bromwich and others and showed that Heaviside's methods could all be made rigorous via the Laplace transform.

I think that some day, somebody will eliminate the lack of rigour from QED. After all nobody (so far as I know) disputes its answers.

Oliver Heaviside is my hero - so much in today's electrical engineering originated with this 19th century lonely genius.

Jan 30, 2015 at 4:25 PM | Registered Commenter Martin A

Of course there is tuning or calibrating: hot CO2 models are compensated with huge aerosol cooling to match the moderate observed warming so far.

Jan 30, 2015 at 4:31 PM | Unregistered Commenter Hans Erren

Speed, you are exactly right.
Only one climate sensitivity can be right (at most).
But all manage to hit the 20th century climate change.

So either:
A Climate sensitivity has no effect on the models.
B For one century only climate sensitivity doesn't matter and that is the one we have measured.
C Any model that doesn't match the 20th century climate is discarded.

A is not how the models are designed. Wrong.
B is an incredible fluke that keeps being replicated by model after model. Near certainly wrong.
C is tuning.

Jan 30, 2015 at 4:43 PM | Unregistered Commenter MCourtney

And they managed all this, ending up confident in their results, when not all the chaotic climate processes are known, understood, or accounted for.

Anyone know how all the clouds work, and have the repeatable observations and measurement evidence to back up their theory?

Or can explain how and why our atmosphere 'breathes' (inflates and deflates) in lockstep with the sun's activity, as noted by NASA and others?

Or explain the main governing processes that determine how the jet stream works, and show (by observation and measurement) how and why it moves the way it does?

Thought not.

Jan 30, 2015 at 4:48 PM | Unregistered Commenter tom0mason

There may be a language problem here. Models have to be parameterized for processes they cannot simulate. The simplest example is tropical convection cells (thunderstorms), because they occur on scales far smaller than the smallest grid cell (owing to intractable computational limitations). There are wide choices of combinations of parameters that might 'work'. What was done for CMIP5 is for each modeling group/model to select a set that enabled their model to best match the period from about 1975 to about 2005. Why? Because that is the hindcast period that was expressly required in the 'experimental design' for the 'near term integrations', aka 'decadal prediction experiments'. This parameterization is what I think Lord Lucas meant.

Betts is referring to the long term (century time scale) 'experiments'. These were, as he correctly said, initialized with quasi-equilibrium 'preindustrial' TOA forcing estimates. But the models had already been parameterized to best fit the decadal hindcast experimental requirements.
For a complete explanation, see BAMS-D-11-00094, Taylor, Stouffer and Meehl, An overview of CMIP5 and the experimental design, 2012. Open access. Also discussed in essay Unsettling Science, which illustrates Akasofu's compelling hypothesis for why this parameterization 'tuning' period is now causing CMIP5 to run hot. And essay Models all the way Down illustrates why parameterization is necessary, and even some key parameters that some models are just missing.

Jan 30, 2015 at 5:06 PM | Unregistered Commenter Rud Istvan

Martin A

Thanks for that, very interesting comparison. It really depends what one's definition of 'works' is, if some of the difficult answers are plugged in. Branches of modern science are fantastically complex and specialised these days.

I am too old / stupid / lazy to do serious maths nowadays anyway, so I am not capable of making a valid judgement.

Jan 30, 2015 at 7:06 PM | Unregistered Commenter esmiff

We are told that aspects of the climate are inherently unpredictable, and that's why we get 'projections' instead of 'forecasts'.

But the averaged model runs 'hindcast' practically every wiggle of this supposedly unpredictable temperature history for the past ~120 years, and then suddenly become 'projections' when they run past the end of the available 'data' and start to diverge rapidly from observations.

Also AFAIK much of the historic 'data' (volcanic aerosols etc) is estimated to boot.

Hence the accusations of 'tuning'.

Jan 30, 2015 at 8:09 PM | Unregistered Commenter Jake Haye

C Any model that doesn't match the 20th century climate is discarded.
C is tuning.

No it's not. Tuning is when they have some dials to turn that tune a model. Discarding model runs is not tuning. Plus you would want them to do that anyway.

Now if they are changing their inputs and code to produce models that match 20th century climate or some other desired outcome, then that is tuning.

Jan 30, 2015 at 8:20 PM | Unregistered Commenter MikeN

In 2005 there were three papers in Science (Wild; Pinker; Wielicki) that showed that sulphate pollution (the main anthropogenic aerosol) was too localised to account for the 'global dimming' phenomenon and the lack of global warming from 1945-1975. I checked, by visiting, both with Hadley in 2008 and NCAR in 2010, and all teams still used parameters based on anthropogenic sulphur masking the effect of increased CO2. No one had or could have initialised their model for natural cycles, because no one understood them and they still don't, but to most oceanographers the hiatus was rather obviously caused by negative phases of several ocean oscillations (like the Pacific Decadal and Arctic Oscillation). The temperature effects of these oscillations are almost certainly driven by cloud and aerosol responses to oscillating patterns of sea level pressure and wind directions - also not capable of being modelled. I think Gerry Meehl made some attempts when recently looking at projecting a new Maunder-type minimum based on both lower solar activity in the next cycle and then a three-decade hiatus. He incorporated shifting jetstream patterns (Hadley also had a go using a simpler model without jetstream changes). Both concluded that rising CO2 effects would eventually trump the solar slump - but Meehl's return of a warming trend took until 2060! Hadley's team used a sensitivity (lambda) factor of 0.88 and warming paused but little, whereas Meehl used a pessimistic 0.3 or 0.4.

If the models that hindcast the 1945-1975 hiatus used aerosol factors that were false, it can only mean that the real CO2 sensitivity is low - at or below 0.4. But who knows why? And why is it assumed that the sensitivity is fixed? Surely it will depend upon the aerosol status of the atmosphere, which varies in cycles?

And as Meehl and his team have shown, future cooling could be on the cards - all depends upon a solar status that is not predictable!

In the 1970s and 1980s I led teams that replicated modelling studies (of nuclear fall-out) using 'alternative' parameters and some big computers - but in the end, our back-of-the-envelope calculations produced much the same answers. Of course, nobody would take the short route seriously - so we had to acquire the models. It is the same with climate - it is relatively easy to make allowance for the ocean oscillations, less easy to work with solar far-UV and jetstream effects, but the answers from the envelope are likely to be as good as Hadley's supercomputer. Nevertheless, I would like to see the following 'experiment': programme the computer with the lowest sensitivity, the lowest solar factors for the next fifty years and some fudge for the oceanic oscillations - all perfectly scientific but with unknown probability - and I bet that global cooling would be the result. It's called a 'what if?' scenario.

But who will allow computer time for such an exercise? Imagine the furore if the Met Office said 'global cooling over the next three decades is possible - we can't rule it out'! 'But then we expect warming to return'.

The problem is that this is a REAL possibility - and if it happens, the reputation of not just Hadley is at stake, but science itself and especially environmental science.

Jan 30, 2015 at 8:20 PM | Unregistered Commenter Peter Taylor

Jake Haye, with all the money spent, climate science is now better at telling us about the climate 100 years ago.

Weather forecasts are definitely better about the weather in 5 days' time, but the BBC does very well at spending time telling us about yesterday's weather, if they haven't got much to say about tomorrow's.

Jan 30, 2015 at 8:24 PM | Unregistered Commenter Golf Charlie

Peter Taylor--

Did Meehl and/or Hadley publish their efforts?

What are the units for your sensitivity? Apparently not °C per doubling of CO2, since those sensitivities range from 1.6 (Nic Lewis) to 3.0 (AR4).

Jan 30, 2015 at 8:54 PM | Unregistered Commenter Lance Wallace

Models "tuned" to historic data. Historic data "tuned" to models. Policy driven science in action. Consensus all round and round and....

Jan 30, 2015 at 9:00 PM | Unregistered Commenter betapug

Scientific modelers have suffered from the "tuning problem" for over two thousand years. A good model must be flexible to deal with new data and new insights, but modelers must be extremely careful. Is the tuning simply creating "epicycles" or a more realistic model?

When the science of Aristotle ruled the day with a geocentric view of the solar system, astronomical modelers believed there were 2 invisible shells that rotated around the earth. One shell for the fixed stars, and another for the wandering stars, the "planetes". However that view had to be tuned when observers reported that some planets would go in reverse or retrograde motion.

From the heliocentric model's perspective, we can easily explain such motion as an illusion created by a faster earth passing slower planets in the outer orbits. However, modelers of bygone days, holding fast to the geocentric model, created "epicycles", rotating shells on the planetary shell. Every observation that contradicted the geocentric model was dealt with by simply tuning a bad model with more imaginary epicycles. This tuning was so masterful that modern planetariums now use the epicycle schemes to design the gear patterns that move the lights across the planetarium's ceiling, creating the celestial illusion just as we now observe.

As long as modelers hold CO2 warming to be sacred, their tuning will more than likely be adding epicycles, instead of making a more realistic climate model. One need only recall the flurry of models showing how warming causes more Antarctic sea ice, colder winters, more snow, and the 18-year pause. Epicycles are the modelers' first line of defense to protect a failing dogma, and eventually the plethora of epicycles will become so thick that Occam's razor will not be able to cut through their illusions.

Jan 30, 2015 at 9:06 PM | Unregistered Commenter Jim Steele

Peter Taylor

You are operating within the framework of the leading edge of climate science as it is seen from your perspective. Nothing you say will ever be held to account in the future. However, it's clear that many here simply do not believe these models in any way represent the reality of the climate system.

I went to a meeting 10 years ago. On a break I asked a man I knew, the head of computing assessment in Scotland, 'Can your students do these assessments?' 'No,' he said, 'don't be daft, we just cheat.' In the meeting these idiots wanted to make it even more difficult. In fact, probably too difficult for themselves, never mind students.

People's minds shield them from the contradictions and nonsense in their lives to allow them to function and protect their egos.

Jan 30, 2015 at 9:15 PM | Unregistered Commenter esmiff

Actually, that meeting was 20 years ago, and the disaster was caused by the government pushing all Scottish post-school institutions into teaching at a (higher) level for which many couldn't attract students of sufficient quality.

That was to make the government's statistics look good.

Jan 30, 2015 at 9:47 PM | Unregistered Commenter esmiff

Dr Betts, you wrote
"We don't choose an aerosol forcing history - the radiative forcing from aerosols is something that is simulated by the models, not something that is imposed."
By 'simulated' I understand you to mean calculated or produced - is this correct?
You calculate the aerosol forcing necessary for something. What is that something?

MikeN
"Discarding model runs is not tuning."
You are being pedantic. It is like using a shotgun in an Olympic rifle competition, and hiding every hole outside the bullseye.

Jan 30, 2015 at 9:51 PM | Unregistered Commenter ghl

RB:
We don't choose an aerosol forcing history - the radiative forcing from aerosols is something that is simulated by the models, not something that is imposed.

Yes, the forcing is simulated, but the actual aerosol quantity, type and distribution are guessed; the choice of aerosol radiative forcing scheme is also arbitrary and very uncertain. We have problems now getting correct aerosol values for meteorological models (not climatic), and that has a big impact on solar radiation (much more than the radiative forcing of CO2), yet we have data from MODIS and other satellites, profilers, ground stations, etc. I'd like to know how to estimate the aerosol distribution 100 years ago.

G Paltridge:
But, because of the uncertainties in aerosol contribution to the direct and indirect forcings, it is not possible to attribute precisely what fraction of the observed climate change is due only to CO2, based on observational data alone.

See also: https://www.arm.gov/publications/proceedings/conf15/extended_abs/liu_l.pdf etc

Jan 30, 2015 at 9:53 PM | Registered Commenter Patagon

"I'll strikethrough the word aerosol."

Any particular aerosols in mind, Bish..? :-)

Jan 30, 2015 at 9:54 PM | Registered Commenter jamesp

If climate models are to predict future temperature then they should not be tuned to past temperature; they should be based on a set of initial conditions and external variables like volcanic aerosols, solar variation etc. The mathematical physics/chemistry/biology embedded in the equations should take care of everything else.

And as they are chaotic there is no chance of predicting what will happen.

Jan 30, 2015 at 9:55 PM | Unregistered Commenter son of mulder

Jim Steele, I agree.

Jan 30, 2015 at 10:00 PM | Registered Commenter Patagon

Recent model improvements cause changes in the energy balance one order of magnitude higher than the anthropogenic radiative forcing. Models can fit temperature before and after the improvements only thanks to tuning:

http://bishophill.squarespace.com/discussion/post/1915791

Jan 30, 2015 at 10:13 PM | Registered Commenter Patagon

Climate models are the same as bullsh*t.

There is nothing wrong with bullsh*t.
You can speak bullsh*t.
You can write bullsh*t.
You can listen to bullsh*t.

BUT, when you start believing bullsh*t, you are in big trouble.

Jan 30, 2015 at 10:14 PM | Unregistered Commenter toorightmate

I built an IT system for a big Pharma once upon a time. When I had tested it, and when the users had tested it, they got the regulatory testers in.
They told me that they didn't want any more Thalidomides, so I had to produce a test plan and a test script, and they would produce test results. When I pointed out that it was a pointless exercise - because in producing the test script I would not find bugs and fix them, having already been down that route - they went apoplectic. I nearly got the boot.
So I produced the script, with screen shots: 'type this in here... then this should happen'.
It simply could not fail, but they honestly believed they were helping prevent a major disaster.

My script could only ever contain what I knew and what I understood. And that is the problem with tuning.

Jan 30, 2015 at 10:47 PM | Unregistered Commenter EternalOptimist
