Cloudy outlook for climate models
More aerosols - the solution to global warming?
Climate models appear to be missing an atmospheric ingredient, a new study suggests.
December's issue of the International Journal of Climatology from the Royal Meteorological Society contains a study of the computer models used in climate forecasting. The study's joint authors are Douglass, Christy, Pearson, and Singer - of whom only the third is not entitled to the prefix Professor.
Their topic is the discrepancy between troposphere observations made between 1979 and 2004 and what computer models have to say about temperature trends over the same period. They focus on tropical latitudes between 30 degrees north and south (mostly 20 degrees N to S) because, they write, "much of the Earth's global mean temperature variability originates in the tropics". Even so, the authors crunched through an unprecedented amount of historical and computational data in making their comparison.
For observational data they make use of ten different data sets, including ground and atmospheric readings at different heights.
On the modelling side, they use the 22 computer models that participated in the IPCC-sponsored Program for Climate Model Diagnosis and Intercomparison. Some models were run several times, producing a total of 67 realisations of temperature trends. The IPCC is the United Nations' Intergovernmental Panel on Climate Change, which published its Fourth Assessment Report [PDF, 7.8MB] earlier this year. Its model comparison program uses a common set of forcing factors.
Notable in the paper is its generosity in calculating the statistical uncertainty of the model data. When aggregating the models, the uncertainty is derived by plugging the number 22 - the count of models - into the maths, rather than 67, the count of realisations. Using 67 would confine the margin of error closer to the average trend, making any discrepancy with the observations harder to reconcile. In addition, when they plot and compare the observational and computed data, they double this error interval.
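The arithmetic behind that choice is the standard error of the mean, which shrinks as the sample count grows. A minimal sketch, using an illustrative spread of model trends rather than the paper's actual per-model values:

```python
import math

def standard_error(sigma, n):
    """Standard error of a mean estimated from n independent samples."""
    return sigma / math.sqrt(n)

# Illustrative spread of model trend estimates (deg C per decade);
# a placeholder, not a figure taken from the paper.
sigma = 0.10

se_models = standard_error(sigma, 22)  # aggregating by model count
se_runs = standard_error(sigma, 67)    # aggregating by realisation count

print(round(se_models, 4))      # 0.0213
print(round(se_runs, 4))        # 0.0122
print(round(2 * se_models, 4))  # 0.0426 - the doubled interval they plot
```

Dividing by the square root of 67 rather than 22 would narrow the band by roughly 40 per cent, which is why the authors' choice is generous to the models.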
So to the burning question: on their analysis, does the uncertainty in the observations overlap with the results of the models? If yes, then the models are supported by the observations of the last 30 years, and they could be useful predictors of future temperature and climate trends.
Unfortunately, the answer according to the study is no. Figure 1 in the published paper, available here [PDF], pretty much tells the story.
Douglass et al: temperature time trends (degrees per decade) against pressure (altitude) for 22 averaged models (red) and 10 observational data sets (blue and green lines). Only at the surface do the mean of the models and the mean of the observations agree, within the uncertainties.
While trends coincide at the surface, at all heights in the troposphere the computer models indicate that temperatures should have trended higher than observed. More significantly, there is no overlap between the uncertainty ranges of the observations and those of the models.
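The comparison at each pressure level reduces to asking whether two intervals share any point. A sketch with hypothetical numbers - the real values are in the paper's Figure 1:

```python
def intervals_overlap(lo_a, hi_a, lo_b, hi_b):
    """True if the closed intervals [lo_a, hi_a] and [lo_b, hi_b] intersect."""
    return lo_a <= hi_b and lo_b <= hi_a

# Hypothetical trend ranges (deg C per decade) at one pressure level:
model_lo, model_hi = 0.20, 0.30  # model mean +/- doubled uncertainty
obs_lo, obs_hi = 0.02, 0.12     # observational mean +/- uncertainty

print(intervals_overlap(model_lo, model_hi, obs_lo, obs_hi))  # False
```

When this test fails at every tropospheric level, as the paper reports, the model and observational stories cannot both be right within their stated uncertainties.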
In other words, the observations and the models seem to be telling quite different stories about the atmosphere, at least as far as the tropics are concerned.
So can the disparities be reconciled?
Disappointing responses from Mr Chase
Ten days have now elapsed since I invited Mr Chase to address two of the article's main points in 200 words or less.
1 - The study finds that the models are contradicted by empirical evidence ... tropospheric models only work at sea level
2 - The IPCC says it has only a "LOW" understanding of the role of particulate matter, and that the cooling effect of particulate matter is as large as the heating effect of greenhouse gas.
Mr Chase has now posted 30,000 words in response, almost all of it irrelevant to the points questioners have raised.
Therefore I see nothing to contradict Mr Wylie's conclusion that -
"on both empirical and inferential grounds, then, the science of climate looks to be far from over."
When I am called upon to mark student papers, I look for relevance and logic - there is very little of either from Mr Chase. I would mark this as a "fail".
Water vapour sensitivity
It doesn't matter if the earth takes a long time to respond to small increases in atmospheric water vapour. It has had a very long time to do so - more than enough if it was going to.
Clearly it is held in check by delicate balances involving huge convection systems of both air and water; temperature, pressure and gravity gradients; and cloud-seeding factors. The resulting distribution of clouds and temperatures also affects the radiation balances.
The critical question is whether these balances are sensitive to CO2 and if so to what extent.
Despite your confidence, the recent historical record of temperature change does not correlate clearly with CO2 levels at all. The ending of the mini ice age and the three decades of temperature decline after WW2 muddy the water considerably. The paleoclimate evidence requires even more circumspection regarding its assumptions, accuracy and consistency.
Neither are the model predictions the unmitigated success you portray. There are a number of interesting papers here discussing important inconsistencies in the models compared with actual observations.
Uncertainties about clouds, ice and circulation patterns play large roles, according to these and other papers. The deviations from predicted temperatures are significant relative to the small size of the CO2 warming effect, as are the deviations between the various models themselves.
Yes, there are reasons to believe CO2 may have a warming effect and that the earth is currently on a warming trend. Quantifying both is a different matter altogether. Consequently deciding what interventions if any are justified by the science is equally problematic.
Re: Science vs spin
Anonymous Coward wrote, "Science cannot predict because it can never be sure that all the factors have been accounted for or that new factors will not come to influence the situation. Science recognises that the past is no guide to the future and that the repetition of pairs of similar events in the same sequence does not entail any causal connection."
Sounds like philosophy 101. Hume, perhaps -- Reader's Digest version. Doubt it would go over all that well with engineers, electricians, or probably even the guys that make computer chips. In fact, I doubt it would be all that popular with the fellows who make nuclear bombs.
The people who build things, or in other cases blow things up. They want to know how things are going to behave - before they put them together. For that you need predictions. Not certainty, but a great deal of confidence. High probability. Close to 1 even. Or at the very least -- reliability. Especially with things that have a lot of pieces. Like that computer I presume you were sitting in front of when you typed on those keys.
Science is fallibilistic. It makes mistakes. But it is also self-correcting. And a conclusion justified by multiple, independent lines of investigation is often justified to a far greater degree than it would be by any one line regarded in isolation.
Science makes predictions based upon the best available evidence. When those predictions turn out to be wrong -- that's when scientists generally get excited -- because it means that there is something new to discover. Like a kid with a new toy.
But for your predictions to fail, you have to be making them in the first place. When a prediction does fail, you modify your theory or come up with an entirely new one - one that preferably explains everything the earlier theory did, makes all the predictions that turned out to be right, and succeeds where the old theory failed.