Bugs in beta weather model used to trash climate science

Shock revelation: devs test complex code on more than one super


Development work on a not-yet-prime-time weather forecasting model has been seized on as proof that climate models can't be trusted.

The reason? Folks who aren't keen on climate change discovered this paper in a journal of the American Meteorological Society, in which Song-You Hong of South Korea's Yonsei University Department of Atmospheric Sciences runs some tests on a weather model called GRIMs (Global/Regional Integrated Model system).

Weather forecasting is one of the default workloads of high-performance computing (as is climate modelling, but that's a different story), and consumes a significant slice of the world's supercomputer processor time at any given moment.

What Hong has documented, and what has been seized on by Anthony Watts of Wattsupwiththat, is that the GRIMs model, when run under different HPC environments, produces different results. As he puts it in the abstract of the paper:

“The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.”
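For readers who don't speak meteorology: the "spread" here is just the standard deviation across runs, and the "fractional tendency" is how that spread changes relative to its own size. A toy Python sketch of the two statistics (the numbers are invented for illustration; nothing below comes from the paper):

    import statistics

    # Hypothetical 500-hPa geopotential heights (in metres): one row per
    # software environment the model ran in, one column per forecast time.
    heights = [
        [5520.0, 5522.1, 5525.3, 5529.8],
        [5520.1, 5521.6, 5524.0, 5527.9],
        [5519.9, 5522.5, 5526.1, 5531.0],
    ]

    # "System dependency": standard deviation across environments at each time.
    spread = [statistics.stdev(at_time) for at_time in zip(*heights)]

    # "Fractional tendency": the change in that spread, relative to the spread.
    frac_tendency = [(b - a) / b for a, b in zip(spread, spread[1:])]

    print("spread:", spread)                      # grows with forecast time
    print("fractional tendency:", frac_tendency)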

The reason, he states, is how different environments handle rounding – and that has Wattsupwiththat particularly excited: “It makes you wonder if some of the catastrophic future projections are simply due to a rounding error.”
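The underlying mechanism is mundane: floating-point arithmetic isn't associative, so anything that changes the order of operations – a different compiler optimisation, a different number of processors splitting up a sum – can change the last few bits of the result. A trivial Python demonstration:

    # Floating-point arithmetic is not associative, so anything that changes
    # evaluation order -- a compiler optimisation, a different number of
    # processors splitting up a sum -- can change the result.
    x = (0.1 + 0.2) + 0.3
    y = 0.1 + (0.2 + 0.3)
    print(x, y, x == y)              # 0.6000000000000001 0.6 False

    # Summing the same numbers in a different order, as a parallel reduction
    # might, also gives a different answer.
    print(sum([1e16, 1.0, -1e16]))   # 0.0 -- the 1.0 is rounded away early
    print(sum([1e16, -1e16, 1.0]))   # 1.0 -- the large terms cancel first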

Watts reproduces the table below as proof of how bad things are.

GRIM test results. Smoking gun? No, just testing unfinished weather forecast models on different machines. Image: An Evaluation of the Software System Dependency of a Global Atmospheric Model, Hong et al.

As William Connolley noted over at ScienceBlogs: “trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts”, which is something that “dates back to Lorenz’s original stuff on chaos”.
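That point is easy to see for yourself. Here's a minimal sketch (ours, not from any of the papers) that integrates Lorenz's famous 1963 system twice, with starting conditions differing by one part in a billion; by the end of the run the two trajectories bear no resemblance to each other:

    # Lorenz's 1963 system, integrated with simple Euler steps: two runs whose
    # starting x differs by one part in a billion end up nowhere near each other.
    def lorenz(x, y, z, steps=3000, dt=0.01,
               sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        for _ in range(steps):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        return x, y, z

    print(lorenz(1.0, 1.0, 1.0))
    print(lorenz(1.0 + 1e-9, 1.0, 1.0))  # the 1e-9 has been amplified enormously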

Just as interesting to The Register is that a little further research suggests the model under test in Song-You Hong's paper is relatively new. Here, for example, is a paper describing the model, prepared for the First GRIMs Workshop in 2011.

As is clear from that paper (slide 5), the models Hong is testing were first designed in 2008 and are still under development, and GRIMs is slated for use in weather forecasting … in 2015.

In other words, Hong seems to have run these tests because he's developing a new model, and exercising it in different computing environments identifies where the code needs polishing to produce consistent results wherever it runs.

Chris Samuel, a Melbourne-based senior HPC system administrator, told The Register it's not unusual to want to test against different environments, because complex environments offer myriad opportunities for divergences to creep in.

The authors are working to see if the program produces the same results at different scales, and Samuel noted that in the paper, Hong says the tests identified a bug in the weather code.
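Checks of that kind don't need anything exotic. A minimal sketch of the idea (the function and field names here are our invention, not from GRIMs): run the model in two environments, then flag any output values that diverge by more than rounding-level noise.

    import math

    # Hypothetical cross-environment check: find where two output fields
    # disagree by more than rounding-level noise. (Names are illustrative;
    # the paper does not describe GRIMs' test harness in this detail.)
    def field_mismatches(field_a, field_b, rel_tol=1e-6, abs_tol=1e-9):
        return [
            i for i, (a, b) in enumerate(zip(field_a, field_b))
            if not math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
        ]

    run_one = [5520.0, 5522.100000, 5525.300001]   # output from system A
    run_two = [5520.0, 5522.100001, 5525.300000]   # same forecast, system B
    print(field_mismatches(run_one, run_two))      # [] -- runs are consistent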

Divergence between different systems isn't a new issue, he said, and both sysadmins and users employ a range of strategies to address it.

One is to have many parallel installations using different versions of packages, libraries, and compilers, so that “users can pick what they want to build against,” he said.

Another defence is to pick a code version and stick with it. Yet another is to do testing on virtual machines, “but that, of course, doesn't necessarily play so well with classic HPC jobs”.

And even then, “you have OS distribution churn underneath all that to complicate matters further.”

In such a fluid world, testing seems prudent, at least to The Register. ®
