
Bugs in beta weather model used to trash climate science

Shock revelation: devs test complex code on more than one super


Development work on a not-yet-prime-time weather forecasting model has been seized on as proof that climate models can't be trusted.

The reason? Folks who aren't keen on climate change discovered this paper, published by the American Meteorological Society, in which Song-You Hong of South Korea's Yonsei University Department of Atmospheric Sciences runs some tests on a weather model called GRIMs (Global/Regional Integrated Model system).

Weather forecasting (like climate modelling, but that's a different story) is one of the staple workloads of high-performance computing, and consumes a significant slice of the world's supercomputer processor time at any given moment.

What Hong has documented, and what has been seized on by Anthony Watts of Wattsupwiththat, is that the GRIMs model, when run under different HPC environments, produces different results. As he puts it in the abstract of the paper:

“The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.”

The reason, he states, lies in how different environments handle rounding – and that has Wattsupwiththat particularly excited: “It makes you wonder if some of the catastrophic future projections are simply due to a rounding error.”
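To see why rounding matters at all, consider that floating-point addition isn't associative: change the order in which a compiler, or a parallel reduction across MPI ranks, adds up the same numbers, and the last few bits of the answer change. The sketch below is our own illustration, nothing to do with GRIMs' actual code, and shows the effect in a few lines of Python.

```python
# Illustration only: floating-point addition is not associative, so the
# summation order a compiler or parallel reduction picks changes the result.
import random

random.seed(1)
values = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]

forward = sum(values)             # one summation order
backward = sum(reversed(values))  # same numbers, opposite order

# Crude stand-in for a parallel reduction: four "ranks" each sum a chunk,
# then the partial sums are combined.
chunk = len(values) // 4
partials = [sum(values[i * chunk:(i + 1) * chunk]) for i in range(4)]
parallel = sum(partials)

print(f"forward  = {forward:.17g}")
print(f"backward = {backward:.17g}")
print(f"parallel = {parallel:.17g}")
# The totals typically differ in the trailing digits. Harmless by itself,
# but a chaotic model will happily amplify those bits over time.
```

On its own the disagreement is down in the noise; the trouble starts when a chaotic system amplifies it, which is the point made below.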

Watts reproduces the table below as proof of how bad things are.

GRIMs test results. Smoking gun? No, just testing unfinished weather forecast models on different machines. Image: An Evaluation of the Software System Dependency of a Global Atmospheric Model, Hong et al

As William Connolley noted over at ScienceBlogs: “trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts”, which is something that “dates back to Lorenz’s original stuff on chaos”.
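That sensitivity is easy to demonstrate. Here's a toy sketch – again ours, not Hong's, and not a weather model – that integrates the classic Lorenz-63 system twice from starting points differing by one part in ten billion; the runs agree for a while, then go their separate ways.

```python
# Toy illustration of Lorenz-style sensitivity to initial conditions.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-10, 1.0, 1.0)   # perturbation far below any observation error

for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step:5d}: x_a = {a[0]:+.6f}  x_b = {b[0]:+.6f}  "
              f"|difference| = {abs(a[0] - b[0]):.3e}")
# The gap grows roughly exponentially: a rounding-sized difference in the
# inputs eventually dominates the trajectory. That's why individual
# forecasts diverge while ensemble statistics stay useful.
```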

Just as interesting to The Register is that a little bit of further research suggests that the model under test in Song-You Hong's paper is relatively new. Here, for example, is a paper describing the model, prepared for the First GRIMs Workshop in 2011.

As is clear from this paper (slide 5), the model Hong is testing was first designed in 2008, is still under development, and GRIMs is slated for use in weather forecasting … in 2015.

In other words, Hong seems to have run this test because he's working on a new model, and trying it out in different computing environments helps identify where the code needs polishing so that it produces consistent results wherever it runs.

Chris Samuel, a Melbourne-based senior HPC system administrator, told The Register it's not unusual to want to test against different environments, because complex environments offer myriad opportunities for divergences to creep in.

The authors are working to see if the program produces the same results at different scales, and Samuel noted that in the paper, Hong says the tests identified a bug in the weather code.

Divergence between different systems isn't a new issue, he said. Both sysadmins and users will, in fact, use a range of strategies to address this.

One is to have many parallel installations using different versions of packages, libraries, and compilers, so that “users can pick what they want to build against,” he said.

Another defence is to pick a code version and stick with it. Yet another is to do testing on virtual machines, “but that, of course, doesn't necessarily play so well with classic HPC jobs”.

And even then, “you have OS distribution churn underneath all that to complicate matters further.”
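By way of illustration only – this is not from the paper, nor something Samuel described – one small habit that helps when results do disagree is to stamp every run with a fingerprint of the software stack underneath it, so you can later check whether the toolchain or OS differed between two runs:

```python
# Hypothetical sketch: record the software environment alongside each run
# so cross-system divergences can be traced back to stack differences.
import json
import platform
import sys

def environment_fingerprint():
    """Collect details that most often explain a cross-system divergence."""
    return {
        "python": sys.version,
        "platform": platform.platform(),         # OS distribution and kernel
        "machine": platform.machine(),           # CPU architecture
        "compiler": platform.python_compiler(),  # toolchain the interpreter was built with
        "float_epsilon": sys.float_info.epsilon,
    }

if __name__ == "__main__":
    # Stash the fingerprint next to the model output for later comparison.
    with open("run_environment.json", "w") as fh:
        json.dump(environment_fingerprint(), fh, indent=2)
    print(json.dumps(environment_fingerprint(), indent=2))
```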

In such a fluid world, testing seems prudent, at least to The Register. ®
