Saturday, Jan 12, 2013

Lewis on Schmidt on climate sensitivity

This is a guest post by Nic Lewis. Nic has cross-posted it to the comments at RC, with the normal style of response from Schmidt.

Gavin Schmidt

I am glad to see that my input into the Wall Street Journal op-ed pages has prompted a piece on climate sensitivity at RealClimate. I think that some comment on my energy-balance-based climate sensitivity estimate of 1.6–1.7°C (details at http://www.webcitation.org/6DNLRIeJH), which underpinned Matt Ridley's WSJ op-ed, would have been relevant and of interest.

You refer to the recent papers examining the transient constraint, and say "The most thorough is Aldrin et al (2012). … Aldrin et al produce a number of (explicitly Bayesian) estimates, their ‘main’ one with a range of 1.2°C to 3.5°C (mean 2.0°C) which assumes exactly zero indirect aerosol effects, and possibly a more realistic sensitivity test including a small Aerosol Indirect Effect of 1.2–4.8°C (mean 2.5°C)."

The mean is not a good central estimate for a parameter like climate sensitivity with a highly skewed distribution. The median or the mode (most likely value) provides a more appropriate estimate. The mode of Aldrin's main results for sensitivity is between 1.5 and 1.6°C; the median lies about halfway between the mode and the mean.

I agree with you that Aldrin is the most thorough study, although its use of a uniform prior distribution for climate sensitivity will have pushed up the mean, mainly by leaving the upper tail of its estimate less well constrained than it would have been had an objective Bayesian method with a noninformative prior been used.

It is not true that Aldrin assumes zero indirect aerosol effects. Table 1 and Figure 15 (2nd panel) of the Supplementary Material show that a wide prior extending from -0.3 to -1.8 W/m2 (corresponding to the AR4 estimated range) was used for indirect aerosol forcing. The (posterior) mean estimated by the study was circa -0.3 W/m2 for indirect aerosol forcing and -0.4 W/m2 for direct. The total of -0.7 W/m2 is the same as the best observational (satellite) total aerosol adjusted forcing estimate given in the leaked Second Order Draft of AR5 WG1, which includes cloud lifetime (2nd indirect) and other effects.

When Aldrin adds a fixed cloud lifetime effect of -0.25 W/m2 forcing on top of his variable-parameter direct and (1st) indirect aerosol forcing, the mode of the sensitivity PDF increases from 1.6°C to 1.8°C. The mean and the top of the range go up a lot (to 2.5°C and 4.8°C, as you say) because the tail of the distribution becomes much fatter - a reflection of the distorting effect of using a uniform prior for ECS. But, given the revised aerosol forcing estimates in the AR5 WG1 SOD, there is no justification at all for increasing the prior for indirect aerosol forcing by adding either -0.25 or -0.5 W/m2. On the contrary, it should be reduced, by adding something like +0.5 W/m2, to be consistent with the lower AR5 estimates.

It is rather surprising that adding cloud lifetime effect forcing makes any difference, insofar as Aldrin is estimating indirect and direct aerosol forcings as part of his Bayesian procedure. The reason is probably that the normal/lognormal priors he uses for direct and indirect aerosol forcing are not wide enough for the posterior mean fully to reflect what the model-observational data comparison is implying. When extra forcing of -0.25 or -0.5 W/m2 is added, his prior mean total aerosol forcing is very substantially more negative than -0.7 W/m2 (the posterior mean without the extra indirect forcing). That puts the data maximum likelihoods for direct and indirect aerosol forcing in the upper tails of the priors, biasing the aerosol forcing estimation towards more negative values (and hence biasing the ECS estimation towards a higher value).

Ring et al. (2012) is another recent climate sensitivity study based on instrumental data. Using HadCRUT4, the current version of the surface temperature dataset used in a predecessor study, it obtains central estimates of -0.5 W/m2 for total aerosol forcing and 1.6°C for climate sensitivity. This is a 0.9°C reduction from the sensitivity of 2.5°C estimated in that predecessor study, which used the same climate model. The reduction resulted from correcting a bug found in the climate model computer code. (Somewhat lower and higher estimates of aerosol forcing and sensitivity are found using other, arguably less reliable, temperature datasets.)
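
For readers who want to try the arithmetic behind an energy-balance estimate of this kind, here is a minimal sketch of the standard energy-budget relation ECS ~ F2x * ΔT / (ΔF - ΔQ). The input numbers are round illustrative values, not those behind the 1.6–1.7°C estimate above.

```python
# Minimal sketch of an energy-budget sensitivity estimate of the general form
# ECS ~ F2x * dT / (dF - dQ). All inputs are illustrative round numbers, not
# the values used in the 1.6-1.7 C estimate discussed above.

F2X = 3.7   # radiative forcing from a doubling of CO2, W/m^2 (commonly used value)
dT = 0.75   # change in global mean surface temperature, base to recent period, K (illustrative)
dF = 2.0    # change in total radiative forcing over the same interval, W/m^2 (illustrative)
dQ = 0.4    # change in the rate of planetary heat uptake, W/m^2 (illustrative)

ecs = F2X * dT / (dF - dQ)   # equilibrium sensitivity: heat uptake subtracted from forcing
tcr = F2X * dT / dF          # transient response: heat uptake ignored

print(f"ECS estimate: {ecs:.2f} K, TCR estimate: {tcr:.2f} K")
```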


Reader Comments (87)

Dear Nic @ Jan 12, 2013 at 10:04 PM:

You write,

The real problem is that very few people involved in climate science have any real understanding of objective Bayesian methods, which require the use of a noninformative prior. Consequently, they (and readers of their papers) can have little understanding of the extent to which the priors they have chosen bias their estimation of climate system parameters. I am doing what I can to help remedy this, along with one or two others. Unfortunately, one of the very few published climate science papers about noninformative priors, which relates to estimating climate sensitivity, is badly wrong.

I am not able to evaluate all of this but if true then there would surely be many experts from outside of climate science who would share the same view. Is this the case? If so, why are they all silent?

Jan 14, 2013 at 3:45 AM | Unregistered CommenterAlex Harvey

When reference is made to simple measures like mode and median and to fat-tailed distributions, I try to recall papers I have read where either the native distribution has been transformed mathematically into a shape more amenable to classical analysis, or the data have been split into groups that have similar frequency distributions.
Although I've not seen it explicitly stated, there seems to be a tendency to ignore the large amount of work that has gone into the recognition, transformation and analysis of data sets with different distributions, and into the causes of those distributions. It is more common to read the low-level approach of "This looks close enough to bell-shaped by eyeball, so we'll use normal statistics."

There is a lovely, exceptional example from Willis Eschenbach, his fig 2 at http://wattsupwiththat.com/2012/02/09/jason-and-the-argo-notes/

One should proceed cautiously with statistical analysis of his bounded (platykurtic?) distribution, as with all others that have not been evaluated mathematically as suitable for standard statistics.

Note that global surface temperatures are not intuitively suitable for unmodified classical statistics. The above fig 2 provides an example why.

Jan 14, 2013 at 3:58 AM | Unregistered CommenterGeoff Sherington

Nic,
You wrote earlier concerning the short-term portion of Hansen's climate response function, "The 50% figure reflects Hansen's assumption of a high value for ECS (3 C). If ECS is fairly low (1.5 C, say), then (given the modest rate of ocean heat uptake implied by observations) much more of the equilibrium response occurs within a century - more like 85%, with over 80% occurring within 50y."

Looking at the AR4 models, most of the models have a ratio in the 50-60% region. The three lowest-ECS models (ECS <= 2.3) have ratios of 76%, 62% and 52% respectively.

If the climate responds so slowly that the equilibrium is never reached, does it really matter what the millennial response is? Shouldn't we focus on the short- to mid-term response only?

Jan 14, 2013 at 6:02 AM | Registered CommenterHaroldW

For a simple introduction to Bayesian concepts I second the recommendation of The theory that would not die. I took a look at Sivia and I think most people here would find it tough going; I suspect the other books Nic suggested will be even more so. I have just bought a copy of Doing Bayesian Data Analysis: A Tutorial with R and BUGS and so far this looks pretty good.

Jan 14, 2013 at 10:33 AM | Registered CommenterJonathan Jones

"I took a look at Sivia and I think most people here would find it tough going; I suspect the other books Nic suggested will be even more so."

I think Jonathan is right, save for people with a mathematics/ mathematical physics background. I agree that The theory that would not die is a good introduction to Bayesian analysis for most people, although from what I recall it does not say much about objective Bayesian methods using noninformative priors.

Jan 14, 2013 at 2:19 PM | Unregistered CommenterNic Lewis

One of best introductory texts on statistics that I have come across is Wonnacott and Wonnacott's 'Introductory Statistics'. It has a chapter on Bayesian inference, and there, as elsewhere in the book, topics are introduced by means of simple but motivational examples. But as the authors point out, their book 'is not a novel, and it cannot be read that way. Whenever you come to a numbered example in the text, try first to answer it yourself. Only after you have given it hard thought and, we hope, have solved it, should you consult the solution we provide.'. I think this is great advice. Most of us first encounter statistics before we have encountered problems that matter to us that statistics can help with. If the reader can pretend that the presented examples are desperately important to them for an hour or so each, with a pencil and paper or a spreadsheet to hand, and only later turn to the solutions or the theory, then things could become dramatically clearer.

Jan 14, 2013 at 3:23 PM | Unregistered CommenterJohn Shade

Alex Harvey
Re "The real problem is that very few people involved in climate science have any real understanding of objective Bayesian methods, which require the use of a noninformative prior":
"I am not able to evaluate all of this but if true then there would surely be many experts from outside of climate science who would share the same view. Is this the case? If so, why are they all silent?"

Good question. There actually seem to be a relatively limited number of experts applying objective Bayesian methods in other scientific fields, partly because only in a few fields is the data sufficiently poor for the prior used to have a major effect. I have thought of trying to get one or two such experts to look seriously at the use of such methods in climate science, but they would have to invest a fair amount of time in doing so, which probably explains why none have done so to date. Objective Bayesian methods are however used in geophysics fields like seismology.

Jan 14, 2013 at 6:20 PM | Unregistered CommenterNic Lewis

Nic, to me (a layman) the phrase "objective Bayesian" appears to be self-contradictory (the term Bayesian implying at least an element of subjectivity, no?). So perhaps you can help me grasp how there can be fields of analysis for which this need not necessarily be true. (I'm hoping for an example that will enable me to say "Aha, of course, I hadn't thought of that".) Many thanks if you can make it simple, simon.

Jan 14, 2013 at 7:23 PM | Unregistered Commentersimon abingdon

Perhaps someone could enlighten me; I still cannot understand why so much effort goes into analysing data in an attempt to prove something that the data itself does not clearly show. At the end of the analysis the data is/are still the data.
In the argument about CAGW, theories and statistical analyses will change nothing, but empirical evidence could change everything.

Jan 14, 2013 at 8:12 PM | Registered CommenterDung

The simplest calculation linking increased temperature with increased CO2 from 1880 to date gives 1.78C per doubling.
This ignores lag and any other factors, so is likely to be a minimum figure.

Jan 15, 2013 at 12:21 AM | Unregistered CommenterEntropic Man

Dung

On the other hand, it is important to tell the adversaries that their calculations are wrong, nonsensical and lacking in any scientific foundation. Otherwise, why else would Prof Mann look up from his elementary textbook?

Jan 15, 2013 at 12:33 AM | Unregistered Commenterdiogenes

Entropic Man -
Please do continue, show your work.

Jan 15, 2013 at 1:30 AM | Registered CommenterHaroldW

There is nothing on noninformative priors in The theory that would not die beyond a brief historical discussion of the subjectivist/objectivist controversies. In general it is not discussed in any simple texts on Bayesian analysis that I have looked at, and I suspect most users of the technique are largely unfamiliar with the issue.

I'm not sure that it's sensible to try to understand the issue without a reasonable understanding of the basics. That said, I'm going to have a go (Nic, please jump on me as/when I go wrong!).

Suppose one has a model, the details of which are specified by some parameters. The purpose of Bayesian analysis is to find a probability distribution for these parameters given (1) some data which allows these parameters to be partially determined, and (2) some prior views about this probability distribution. In essence the new ("posterior") probability distribution given the data is the old ("prior") probability distribution multiplied by the probability of observing the data for each set of parameter values in the distribution and then renormalised so that the probability distribution sums to one (this renormalisation is achieved by the term in the denominator in Bayes' theorem, but if you are evaluating the whole distribution rather than a single term it can be simpler just to renormalise).

So the posterior probability distribution depends on both the data and the prior. For simplicity one can distinguish three broad cases:

(1) There is lots of high quality data available. In this case the data term will overwhelm any plausible prior, and it hardly matters what we do.

(2) The data is poor but we have significant prior knowledge about the probability distribution. In this case we can set up a prior reflecting this prior knowledge and Bayes' theorem allows us to update our distribution as data trickles in. (This is the case discussed in detail in The theory that would not die.)

(3) The data is poor and we have little prior knowledge about the probability distribution. This is the worrying case that makes people panic. Because the data is poor the choice of prior matters, but there seems to be little basis for choosing this prior. Subjectivists retreat into the dictum that probability is about states of belief, not about the world, and so claim this is not a problem. Objectivists argue that there is an objective way of picking an uninformative prior, and that this is the objectively right thing to do.

The naively obvious way to pick an uninformative prior is to assign equal prior probability to all possibilities, using the principle of indifference. This works reasonably well for discrete probability distributions (where the parameters take one of a finite number of values) but becomes problematic for continuous probability distributions. [An aside: this problem cannot be sidestepped by the popular modern practice of discretising the continuous probability distribution, as the problem simply reappears as the question of how you discretise it.] There are two broad classes of problem that arise from this approach.

Firstly, while some continuous variables can vary only within a finite range (e.g., for a biased coin the probability of throwing a head can take any value between zero and one inclusive), many continuous variables can take values from an infinite range. In this case we cannot assign equal probability to all values, as the total probability would then of necessity be infinite. Such priors are known as improper priors because they cannot be normalised, although in practice one can often get away with using them, as the posterior probability can be well behaved even when the prior is not. A quick and dirty solution is just to truncate the prior at some large value which you know the data is going to effectively rule out; if the truncation point is chosen far enough out then the exact choice is irrelevant.

Secondly, and more seriously, the application of the principle of indifference to continuous variables is notoriously difficult, as it is possible to parameterise the problem in several different ways and a prior which is uniform in one parameterisation will not be uniform in another. (See the Bertrand paradox for a classic example of this.) Thus a uniform prior cannot be uninformative as it includes prior information about the preferred parameterisation.

Jeffreys attempted to solve this by arguing for priors which were invariant under certain variable transformations, leading to the Jeffreys prior. There is very considerable debate concerning to what extent this solves the problem. The main alternative approach, due to Jaynes, is to use the Principle of Maximum Entropy to derive a prior. But at this point I am getting out of my depth and hope that Nic will take over!

Jan 15, 2013 at 10:49 AM | Registered CommenterJonathan Jones
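
To make the parameterisation point above concrete, here is a minimal sketch of case (3). The likelihood is invented purely for illustration: a feedback parameter lam "measured" as 1.5 +/- 0.7 W/m2/K, with sensitivity S = F2x/lam. The same data give noticeably different posteriors for S depending on whether the prior is uniform in S or uniform in lam.

```python
# Toy illustration of prior dependence when data are weak. The Gaussian
# likelihood below is invented: a feedback parameter lam "measured" as
# 1.5 +/- 0.7 W/m^2/K, with sensitivity S = F2X / lam.
import numpy as np

F2X = 3.7
S = np.linspace(0.5, 10.0, 2000)   # sensitivity grid, K
dS = S[1] - S[0]
lam = F2X / S                      # corresponding feedback parameter

likelihood = np.exp(-0.5 * ((lam - 1.5) / 0.7) ** 2)

# Prior uniform in S: the posterior density in S is just the normalised likelihood.
post_unif_S = likelihood / (likelihood.sum() * dS)

# Prior uniform in lam: expressed in S it picks up the Jacobian |dlam/dS| = F2X / S**2.
post_unif_lam = likelihood * F2X / S**2
post_unif_lam /= post_unif_lam.sum() * dS

for name, p in [("uniform in S", post_unif_S), ("uniform in lam", post_unif_lam)]:
    mode = S[np.argmax(p)]
    median = S[np.searchsorted(np.cumsum(p) * dS, 0.5)]
    mean = (S * p).sum() * dS
    print(f"prior {name:15s} mode {mode:.2f} K  median {median:.2f} K  mean {mean:.2f} K")
```

The uniform-in-S prior gives the fatter upper tail and the higher mean, which is the kind of prior dependence discussed in the head post.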

Harold W

1880 global average temperature anomaly -0.280
1880 carbon dioxide concentration 280ppm

2011 global average temperature anomaly 0.400
2011 carbon dioxide concentration 392ppm

Doubling of CO2 concentration from 1880 will be from 280ppm to 560ppm, an increase of 280ppm.

Between 1880 and 2011 temperature increased by 0.400 + 0.280 = 0.680C
CO2 increased by 392-280 = 112ppm.

Climate sensitivity per doubling = Temperature rise (1880 to 2011) * CO2 increase to doubling / CO2 increase to 2011

0.680 * 280/112 = 1.70C

This ignores lag and all the other variables under discussion, deriving what is probably a minimum figure entirely from observed data.
The calculation uses a simplifying assumption that the change is linear. In theory, the increase should be proportional to the natural logarithm (log base e) of the concentration change. I have tried the logarithmic form of this calculation in the past and found that it gives a slightly higher sensitivity to date and a slightly lower sensitivity to doubling. Since the difference was only in the third significant figure, I did not feel that the extra effort was worthwhile.

Jan 15, 2013 at 5:37 PM | Unregistered CommenterEntropic Man
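
For anyone who wants to rerun the back-of-the-envelope arithmetic above, a short sketch:

```python
# Reproducing the linear back-of-the-envelope calculation above.
dT = 0.400 - (-0.280)              # temperature rise 1880-2011, C
rise_to_doubling = 560.0 - 280.0   # ppm of CO2 needed for a doubling from 1880
rise_to_2011 = 392.0 - 280.0       # ppm of CO2 rise actually seen by 2011

print(f"{dT * rise_to_doubling / rise_to_2011:.2f} C per doubling")   # 1.70
```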

Jonathan Jones, thank you for drawing our attention to the obvious but nonetheless startling Bertrand paradox which enables us to infer that:

"A uniform prior cannot be uninformative as it includes prior information about the preferred parameterisation."

Now while I readily understand this inference that "a uniform prior cannot be uninformative", the closeness of the words "uniform" and "uninform" suggest to me a minefield of possible future misunderstanding.

So for now I shall content myself with the well-known injunction "light the blue touch paper and retire immediately" until this discussion has played out to a level of mutual understanding amongst the various protagonists more comfortably understandable by yours truly, (simple) simon abingdon.

But naturally I remain agog.

Jan 15, 2013 at 8:17 PM | Unregistered Commentersimon abingdon

Meanwhile Gavin has pulled out a new paper by climate capo Trenberth that pushes sensitivity up to 4 degrees. They always have something up their sleeves.

Jan 15, 2013 at 9:54 PM | Unregistered Commenterdiogenes

And I asked about uninformative priors, using the Annan precedent... no response so far after 3 days. No doubt they are going to bury it in the long grass of the third post, where they destroy Nic Lewis for doing just what they said should be done.

Jan 16, 2013 at 12:16 AM | Unregistered Commenterdiogenes

Entropic Man -
Thanks for your expansion. A few short comments:

1. You underestimate the effect of the log in this case. log2(392/280) = 0.49 doublings, so the naive estimate would be 0.70 K / (0.49 doubling) = 1.4 K/doubling (see the quick check below).
2. A tacit assumption is that other effects on temperature average out over the century-plus interval, or at least net to only a small figure. This seems a reasonable assumption for some causes, such as the oceanic phenomena, but as you're obviously aware, there are numerous forcings besides CO2: positive ones such as methane, ozone and black carbon; negative ones such as aerosols. The positive ones add up to about the same forcing as CO2; the negative ones are likely less, but there are those (notably Hansen) who believe they offset half the greenhouse gas warming. Hence the naive estimate above could be off by a factor of two either way. In my opinion, the non-CO2 forcings are net positive, hence I think the value is less than that above.
3. The above estimate would be more comparable to Transient Climate Response (TCR) than to (the larger) Equilibrium Climate Sensitivity. The AR4 models show TCR between 1.2 and 2.6 K/doubling. The upper end of that range would seem to be less plausible based only on this simple calculation. [The fact that the multi-model mean has run "too hot" compared to observation is a further bit of evidence.]

4. My conclusion is that the approach can demonstrate an order-of-magnitude number, and argues against the plausibility of the larger values of TCR/ECS, but can't definitively distinguish between mild vs. more severe sensitivity.

Jan 16, 2013 at 12:42 PM | Registered CommenterHaroldW
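
The logarithmic correction in point 1 above, as a quick check:

```python
# Quick check of the logarithmic form in point 1 above.
import math

dT = 0.68                              # K, observed rise 1880-2011
doublings = math.log2(392.0 / 280.0)   # ~0.49 doublings of CO2 so far
print(f"{doublings:.2f} doublings -> {dT / doublings:.1f} K per doubling")   # ~1.4
```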

Harold W

I was only aiming for an approximate figure. I've used this back-of-the-envelope technique in other contexts, to get an idea of the scale of an effect. Usually I can get within a factor of two, certainly within an order of magnitude.
The advantage of working directly from the temperatures is that it entrains all the forcing effects. They may not all be directly CO2 related, but factors such as black carbon and aerosols are likely to increase in proportion, as they are all proxies for industrial activity. The real problem is oceanic lag. Surface temperatures run 2 months behind insolation; deeper water changes from months up to a millennium in arrears. We may only get reliable figures for sensitivity a century after CO2 levels stabilise. GFSM only knows when that will be.

Jan 16, 2013 at 3:25 PM | Unregistered CommenterEntropic Man

Entropic Man -
Agreed; and I intended it for the same purpose.

P.S. It may not be wise to invoke the FSM on a Bishop's blog. ;)

Jan 16, 2013 at 6:30 PM | Registered CommenterHaroldW

Jonathan Jones
"But at this point I am getting out of my depth and hope that Nic will take over!"

I am fully committed until Friday, so I have only been able to quickly read your explanations, which seem excellent, and will just respond very briefly.

Jaynes' maximum entropy method is generally thought to succeed in the discrete case, but his attempts to extend it to the continuous case (even if discretised, as you say) were not successful. However, the Jeffreys prior does seem in general to solve the problem in the continuous case, dealing properly with non-linear data-parameter relationships and usually matching frequentist (classical) confidence intervals, where only a single parameter is being estimated (and, I think, for joint estimation of all the parameters being estimated).

The difficult problem arises where multiple parameters are being estimated, data-parameter relationships are non-linear, and one is seeking separate (marginal) posterior estimates for individual parameters (or subsets of the parameter list). Here, Jeffreys' original prior can go badly wrong. In some such cases a noninformative prior can be found using group relationships and measure theory - the so-called 'right Haar measure' prior - and that provides objective inference. In other cases, the reference prior approach developed by Bernardo and Berger is usually thought best, as it minimises the information provided by the prior (at least asymptotically).

Not a simple subject by any means! And properly understood by hardly any climate scientists, sadly.

Jan 16, 2013 at 7:11 PM | Unregistered CommenterNic Lewis
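
As a concrete single-parameter instance of the Jeffreys prior mentioned above, here is the textbook binomial case (not a climate example):

```python
# Textbook Jeffreys prior: for a binomial success probability p the Fisher
# information is n / (p * (1 - p)), so the Jeffreys prior is proportional to
# 1 / sqrt(p * (1 - p)), i.e. a Beta(1/2, 1/2) distribution.
import numpy as np

def jeffreys_prior_binomial(p):
    """Unnormalised Jeffreys prior density for a binomial proportion p."""
    return 1.0 / np.sqrt(p * (1.0 - p))

p = np.linspace(0.01, 0.99, 99)
prior = jeffreys_prior_binomial(p)
prior /= prior.sum() * (p[1] - p[0])   # normalise on the grid

# Unlike a uniform prior, this prior is invariant under reparameterisation:
# deriving it in terms of, say, the log-odds gives a consistent answer.
print(prior[0], prior[49], prior[-1])  # more weight near 0 and 1 than at p = 0.5
```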

Jonathan Jones - your comments are always worth reading, thanks for taking the time.

Jan 16, 2013 at 7:49 PM | Unregistered Commenternvw

I am about a third through "The theory that would not die" and thoroughly enjoying it, so thanks for the recommendations.

The historical perspective is particularly interesting, with respect to the clashes between the Frequentists (i.e. the "classical" statisticians) and the Bayesians. It's a good reminder that in-fighting between academics is nothing new. Alan Turing's story is particularly saddening.

In terms of the general debate on Bayes in climate science, I am trying to see through the theoretical material for some worked examples of how priors are used in practice.

Am I right that model runs are used as priors, in addition to observed data?

Jan 16, 2013 at 8:34 PM | Registered CommenterAndy Scrase

Dear Nic,

I assume you've seen e.g. http://www.nature.com/news/soot-a-major-contributor-to-climate-change-1.12225

It is being said this is a doubling of previous estimates of the black carbon RF - is the radiative effect normally counted by the IPCC as part of the direct aerosol effect? Is that really a doubling relative to the AR4 figure?

Anyway the real question is - would this result about soot causing 1.1 W/m^2 of warming - if it stands up to scrutiny - affect your result? Or was it already accounted for in the AR5?

Kind regards,
Alex Harvey

Jan 17, 2013 at 2:37 PM | Unregistered CommenterAlex Harvey

Alex

doesn't that make it a little harder for the committed to blame all the warming on CO2?

Jan 17, 2013 at 3:23 PM | Unregistered Commenterdiogenes

diogenes, that's exactly what I am trying to find out.

Jan 17, 2013 at 3:33 PM | Unregistered CommenterAlex Harvey

HaroldW

One of the joys of science is playing with numbers.

My favourite example, albeit fictional, is in "Lucifer's Hammer". Larry Niven portrays a television talk show. The subject is a comet which might hit the Earth.

During the show two scientists decide to bombard the Earth with a cubic mile of Hot Fudge Sundae and work out the effect, just for the fun of it.

http://www.baenebooks.com/chapters/0872234878/0872234878.htm?blurb

Jan 17, 2013 at 5:18 PM | Unregistered CommenterEntropic Man

Thank you, Jonathan Jones.

Now I start to understand how the use of a uniform prior in the range 0–18.5°C has forced sensitivity upwards and produced fat tails. That prior assumption has a mean of 9.25°C!

-------------------------------------

"So what did the IPCC do to the original Forster/Gregory 06 results? The key lies in an innocuous sounding note in AR4:WG1concerning this study, below Figure 9.20: the IPCC have “transformed to a uniform prior distribution in ECS” (climate sensitivity), in the range 0–18.5°C. By using this prior distribution, the IPCC have imposed a starting assumption, before any data is gathered, that all climate sensitivities between 0°C and 18.5°C are equally likely, with other values ruled out."

http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/

Jan 17, 2013 at 7:44 PM | Unregistered CommenterManfred
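
A trivial arithmetic check on how much that quoted uniform prior assumes before any data are seen:

```python
# What a prior uniform in ECS over 0-18.5 C assumes before seeing any data.
upper = 18.5
print("prior mean:", upper / 2, "C")                     # 9.25 C
print("prior mass above 4.5 C:", (upper - 4.5) / upper)  # ~0.76
print("prior mass below 1.5 C:", 1.5 / upper)            # ~0.08
```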

Would an iterative computation of sensitivity give a better result?

(Start with anything, such as a uniform prior,
compute new probability function,
use result as new prior,
compute new probability function,
etc.
until it converges.)

Jan 17, 2013 at 9:16 PM | Unregistered CommenterManfred

Entropic Man -
Thanks! It has been quite a while since I read that book. Got a chance to read that section again, and a little more.

Jan 17, 2013 at 11:53 PM | Registered CommenterHaroldW

I found this pdf of a presentation by Magne Aldrin who used a simple climate model and Bayesian techniques to estimate sensitivity. This might be of some relevance here

I noticed that they used mean values in their graphs, which looked a bit odd when the modal value was clearly quite a bit less

Jan 18, 2013 at 2:56 AM | Registered CommenterAndy Scrase

Nic,

For a non-specialist with no understanding of the stats methods you are discussing, could you please explain how sensitive the ECS estimate (mode, median, mean and SD, or whatever are the most important parameters defining the PDF) is to the various methods of calculating it that you have been discussing?

Put another way, how much difference would using a 'more correct' method make to the mode, median, mean and range?

Also, even if the statement of likely range is 2 C to 4.5 C and best estimate of 3 C is wrong, does that actually change the model projections? The reason I ask is because I understand the ECS figures are an output of the modelling not an input to the modelling. In this case it would seem to me that the figures quoted for ECS in IPCC AR5 might change but the projections would not.

Could you please explain in a way a non-specialist can understand.

Jan 18, 2013 at 4:54 AM | Unregistered CommenterPeter Lang

I'm not sure if this paper has been referenced on this thread yet, but Frame et al seems a well-cited paper on priors in climate prediction, and it also mentions the Bertrand paradox that Jonathan Jones posted about.

http://www.climateprediction.net/science/pubs/2004GL022241.pdf

Jan 18, 2013 at 8:43 AM | Unregistered CommenterAndy scrase

Andy Scrase
"Frame et al seems a well cited paper on priors in climate prediction"

Or badly cited, depending on one's viewpoint! It is the only place I have ever read the proposal that the prior used for estimating climate sensitivity (or indeed any parameter in any field) should depend on the purpose to which the estimate is to be put. That seems to me to go against both probability theory and scientific principles.

I can understand multiplying a probabilistic parameter estimate (a posterior PDF, in a Bayesian context) by different loss functions, according to the use of the estimate, but not changing the estimate itself - which is what Frame et al (2005) advocate, through varying the prior used.

I would be interested in the views on this point of any scientists and/or statisticians who read this.

Jan 18, 2013 at 5:00 PM | Unregistered CommenterNic Lewis

Peter Lang
"how much difference would using a 'more correct' method make to the mode, median, mean and range?"

It depends on how badly wrong the existing method is and how tightly the data constrain the parameter(s) one is trying to estimate. The less well the data constrain the parameters, the more difference the prior used makes (I am assuming a Bayesian approach). Climate sensitivity is not currently well constrained by the data, so the prior used makes a large difference.

For a discussion, with graphs, of how changing to what was, in this instance, certainly the correct prior affects the estimated PDF for climate sensitivity, see:

http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/

Forster & Gregory was a case where the data constrain the estimate more tightly than in most studies (resulting in a relatively narrow PDF), but even so the effect is substantial.

"Also, even if the statement of likely range is 2 C to 4.5 C and best estimate of 3 C is wrong, does that actually change the model projections?"

No, but it would prove that the models are wrong, and the next, corrected, generation of models should have lower projections of future warming (assuming the best estimate of sensitivity became well below 3 C).

Jan 18, 2013 at 5:27 PM | Unregistered CommenterNic Lewis

Andy Scrase

"I found this pdf of a presentation by Magne Aldrin who used a simple climate model and Bayesian techniques to estimate sensitivity. This might be of some relevance here

I noticed that they used mean values in their graphs, which looked a bit odd when the modal value was clearly quite a bit less"

I agree. I think the Aldrin study is commendably thorough and well documented. But the mean is not a good central estimate for a parameter with a highly skewed distribution, like climate sensitivity.
When citing the results of the Aldrin study, I instead gave the mode (a bit over 1.55 C, whereas the mean is 2.0 C), to Aldrin's annoyance.

The median has the advantage over the mode of being invariant under reparameterisation: if the PDF for S, with median z, is converted into a PDF for Y= f(S), where f(x) is any monotonic function of x, the median of the new PDF for Y is f(z).

However, the mode of a Bayesian posterior distribution corresponds most closely to the estimate given by a maximum likelihood estimator (it is identical thereto if a uniform prior is used). MLEs are widely used: e.g., for the Ring et al (2012) estimate of climate sensitivity (1.6 C using the HadCRUT4 dataset). The mode is also less sensitive to the prior used than is the median or (even more so) the mean.

Jan 18, 2013 at 9:22 PM | Unregistered CommenterNic Lewis
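
A quick numerical check of the invariance property described above, using an arbitrary right-skewed toy distribution in place of a real sensitivity PDF:

```python
# Check: for a monotonic transform f, the median of f(S) equals f(median of S),
# whereas the mean does not transform that way. The samples below are a toy
# skewed distribution, not a real sensitivity PDF.
import numpy as np

rng = np.random.default_rng(0)
S = rng.lognormal(mean=0.7, sigma=0.5, size=1_000_001)   # toy right-skewed "sensitivity" samples

f = lambda s: 3.7 / s   # an arbitrary monotonic reparameterisation (feedback-like quantity)
Y = f(S)

print(np.median(Y), f(np.median(S)))   # agree: the median is invariant
print(np.mean(Y), f(np.mean(S)))       # differ: the mean is not
```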

Apologies for coming late to the party.

Nic, in your piece here: http://www.webcitation.org/6DNLRIeJH you conclude:
"In the light of the current observational evidence, in my view 1.75°C would be a more reasonable central estimate for ECS than 3°C, perhaps with a 'likely' range of around 1.25–2.75°C".

In other words, it is "likely" in your view that if we continue our outputs of CO2 unabated to 2050, we are likely to commit our children to living in a world that is more than 2C warmer than we have been used to.

Of course, before anyone else says it, that may not come about. The result may equally likely be 1.25C.
But 2C+ is "likely".

A 2C increase is viewed as bringing about serious changes in global climate as far as we humans are concerned.

I'm just saying.

Jan 26, 2013 at 5:37 PM | Unregistered CommenterRichard Lawson
