Bugs in beta weather model used to trash climate science

Shock revelation: devs test complex code on more than one super

Development work on a not-yet-prime-time weather forecasting model has been seized on as proof that climate models can't be trusted.

The reason? Folks who aren't keen on climate change discovered this paper, published in a journal of the American Meteorological Society, in which Song-You Hong of South Korea's Yonsei University Department of Atmospheric Sciences runs a series of tests on a weather model called GRIMs (the Global/Regional Integrated Model system).

Weather forecasting (like climate modelling, but that's a different story) is one of the default workloads of high-performance computing, and consumes a significant slice of the world's supercomputer processor time at any given moment.

What Hong has documented, and what has been seized on by Anthony Watts of Wattsupwiththat, is that the GRIMs model, when run under different HPC environments, produces different results. As he puts it in the abstract of the paper:

“The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.”

The reason, he states, lies in how different environments handle rounding – and that has Wattsupwiththat particularly excited: “It makes you wonder if some of the catastrophic future projections are simply due to a rounding error.”
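
How innocently such rounding differences arise is easy to show. Here's a minimal sketch (ours, not anything from the paper): floating-point addition isn't associative, so the same numbers summed in a different order – which is exactly what different compilers, optimisation flags or processor counts do to a model's arithmetic – can round to slightly different totals.

```python
# A minimal sketch (not GRIMs code): floating-point addition is not
# associative, so summing the same values in a different order -- as
# different compilers, optimisation flags or domain decompositions do --
# can round to slightly different answers.
import numpy as np

rng = np.random.default_rng(0)
values = rng.standard_normal(100_000).astype(np.float32)

serial = np.float32(0.0)
for v in values:                      # one ordering: a single-core run
    serial += v

# Another ordering: the same sum split across eight "workers".
chunked = np.float32(sum(chunk.sum() for chunk in np.split(values, 8)))

print(serial, chunked, serial - chunked)   # the gap is pure rounding
```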

Watts reproduces the table below as proof of how bad things are.

[Table from the paper: GRIMs test results. Caption: Smoking gun? No, just testing unfinished weather forecast models on different machines. Image: An Evaluation of the Software System Dependency of a Global Atmospheric Model, Hong et al.]

As William Connolley noted over at ScienceBlogs: “trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts”, which is something that “dates back to Lorenz’s original stuff on chaos”.
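
That chaos is easy to reproduce at home. The toy below (ours, using the textbook Lorenz-63 equations rather than anything from GRIMs) nudges one of two identical initial states by one part in a trillion and watches the trajectories part company:

```python
# A toy demonstration of Lorenz's point: integrate the classic Lorenz-63
# system twice, with initial conditions differing by one part in 10^12,
# and the trajectories diverge -- just as a rounding-error-sized
# difference between two supercomputers would.
import numpy as np

def step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-12, 0.0, 0.0])    # a rounding-error-sized nudge

for i in range(1, 3001):
    a, b = step(a), step(b)
    if i % 500 == 0:
        print(f"t = {i * 0.01:5.1f}  max gap = {np.abs(a - b).max():.3e}")
```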

Just as interesting to The Register is that a little further research suggests the model under test in Song-You Hong's paper is relatively new. Here, for example, is a paper describing the model, prepared for the First GRIMs Workshop in 2011.

As is clear from that paper (slide 5), the model Hong is testing was first designed in 2008, is still under development, and GRIMs is slated for use in operational weather forecasting … in 2015.

In other words, the reason for conducting a test such as Hong's seems straightforward: he's working on a new model, and running it in different computing environments identifies where the code needs polishing before it can produce consistent results wherever it runs.

Chris Samuel, a Melbourne-based senior HPC system administrator, told The Register it's not unusual to want to test against different environments, because complex environments offer myriad opportunities for divergences to creep in.

The authors are working to see if the program produces the same results at different scales, and Samuel noted that in the paper, Hong says the tests identified a bug in the weather code.
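
In miniature, such a scaling test might look like this (a sketch of ours, not the GRIMs test suite): compute the same global statistic with different domain decompositions and flag any disagreement beyond an agreed floating-point tolerance.

```python
# A toy scaling test (ours, not the GRIMs suite): compute a global mean
# with different numbers of simulated workers and check the answers
# agree to within a floating-point tolerance.
import numpy as np

def global_mean(field, nprocs):
    """Mean via nprocs partial sums, mimicking an MPI-style reduction."""
    partials = [chunk.sum() for chunk in np.array_split(field, nprocs)]
    return sum(partials) / field.size

field = np.random.default_rng(42).random(1_000_003)   # uneven chunk sizes
baseline = global_mean(field, nprocs=1)

for nprocs in (2, 4, 8, 16):
    diff = abs(global_mean(field, nprocs) - baseline)
    print(f"{nprocs:2d} workers: |diff| = {diff:.3e}",
          "ok" if diff < 1e-12 else "DIVERGED")
```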

Divergence between different systems isn't a new issue, he said. Both sysadmins and users will, in fact, use a range of strategies to address this.

One is to have many parallel installations using different versions of packages, libraries, and compilers, so that “users can pick what they want to build against,” he said.

Another defence is to pick a code version and stick with it. Yet another is to do testing on virtual machines, “but that, of course, doesn't necessarily play so well with classic HPC jobs”.

And even then, “you have OS distribution churn underneath all that to complicate matters further.”
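
One piece of bookkeeping that makes all of those strategies workable is recording exactly which stack produced which numbers. Something along these lines (a hypothetical helper of ours, not a tool Samuel named) pairs a hash of a result with the environment that produced it, so that when two systems disagree you can see what differed underneath:

```python
# A hypothetical bookkeeping helper (ours, not a tool Samuel named):
# stamp each run's output with a hash and the software stack that
# produced it, so divergences between systems can be traced later.
import hashlib
import json
import platform

import numpy as np

def run_fingerprint(result):
    """Pair a hash of the result with the environment that made it."""
    return {
        "result_sha256": hashlib.sha256(result.tobytes()).hexdigest(),
        "python": platform.python_version(),
        "numpy": np.__version__,
        "machine": platform.machine(),
        "os": platform.platform(),
    }

result = np.linspace(0.0, 1.0, 100) ** 2    # stand-in for a model field
print(json.dumps(run_fingerprint(result), indent=2))
```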

In such a fluid world, testing seems prudent, at least to The Register. ®
