Bugs in beta weather model used to trash climate science

Shock revelation: devs test complex code on more than one super

Development work on a not-yet-prime-time weather forecasting model has been seized on as proof that climate models can't be trusted.

The reason? Folks who aren't keen on climate change discovered this paper in a journal of the American Meteorological Society, in which Song-You Hong of South Korea's Yonsei University Department of Atmospheric Sciences runs some tests over a weather model called GRIMs (Global/Regional Integrated Model system).

Weather forecasting (like climate modelling, but that's a different story) is one of the default workloads of high-performance computing, and consumes a significant slice of the world's supercomputer processor time at any given moment.

What Hong has documented, and what has been seized on by Anthony Watts of Wattsupwiththat, is that the GRIMs model, when run under different HPC environments, produces different results. As he puts it in the abstract of the paper:

“The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.”

The reason, he states, comes down to how different environments handle floating-point rounding – and that has Wattsupwiththat particularly excited: “It makes you wonder if some of the catastrophic future projections are simply due to a rounding error.”
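Why would rounding differ at all? Floating-point arithmetic isn't associative, so the order in which a compiler, maths library, or processor evaluates the same expression can change the last few bits of the result. A minimal Python sketch of the effect (our illustration, not code from Hong's paper):

```python
# Floating-point addition is not associative: grouping changes the rounding.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # rounds 0.1 + 0.2 first
right = a + (b + c)  # rounds 0.2 + 0.3 first

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

A compiler that reorders or fuses those operations differently on another system produces a bit-for-bit different answer from the same source code.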

Watts reproduces the table below as proof of how bad things are.

GRIMs test results. Smoking gun? No, just testing unfinished weather forecast models on different machines. Image: “An Evaluation of the Software System Dependency of a Global Atmospheric Model”, Hong et al.

As William Connolley noted over at ScienceBlogs: “trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts”, which is something that “dates back to Lorenz’s original stuff on chaos”.
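Lorenz's point is easy to reproduce: run the same chaotic system twice from starting points that differ in the tenth decimal place, and the trajectories part company within a few dozen model-time units. A rough sketch using Lorenz's classic three-variable system (the parameter values are his standard ones; the crude Euler integrator and step count are our assumptions):

```python
# Integrate the Lorenz system twice from almost-identical starting points.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

state_a = (1.0, 1.0, 1.0)
state_b = (1.0 + 1e-10, 1.0, 1.0)  # differs by one part in ten billion

for _ in range(3000):  # 30 model-time units at dt = 0.01
    state_a = lorenz_step(*state_a)
    state_b = lorenz_step(*state_b)

# The microscopic initial difference has grown to the size of the attractor.
print(abs(state_a[0] - state_b[0]))
```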

Just as interesting to The Register is that a little bit of further research suggests that the model under test in Song-You Hong's paper is relatively new. Here, for example, is a paper describing the model, prepared for the First GRIMs Workshop in 2011.

As is clear from this paper (slide 5), the models Hong is testing were first designed in 2008 and are still under development, with GRIMs slated for use in weather forecasting … in 2015.

In other words, Hong seems to be running these tests because he's working on a new model, and trying it in different computing environments identifies where the code needs polishing to produce consistent results wherever it runs.

Chris Samuel, a Melbourne-based senior HPC system administrator, told The Register it's not unusual to want to test against different environments, because complex environments offer myriad opportunities for divergences to creep in.

The authors are checking whether the program produces the same results at different scales, and Samuel noted that, in the paper, Hong says the tests identified a bug in the weather code.
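Scale matters because a parallel sum is really a set of partial sums combined in an order that depends on how many processors took part – and, floating-point addition being non-associative, each decomposition rounds differently. A hedged Python stand-in for such a reduction (nothing here is GRIMs code):

```python
import random

random.seed(42)  # fixed data, so only the summation order varies
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

def chunked_sum(data, nprocs):
    """Mimic a parallel reduction: each 'rank' sums its own chunk,
    then the partial sums are combined."""
    chunk = len(data) // nprocs
    partials = [sum(data[i * chunk:(i + 1) * chunk]) for i in range(nprocs)]
    return sum(partials)

# Same numbers, different 'processor counts', answers differ in the low bits.
for nprocs in (1, 4, 25):
    print(nprocs, repr(chunked_sum(values, nprocs)))
```

In a chaotic model those last-bit differences don't stay small, which is exactly the kind of divergence Hong measured.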

Divergence between different systems isn't a new issue, he said. Both sysadmins and users will, in fact, use a range of strategies to address this.

One is to have many parallel installations using different versions of packages, libraries, and compilers, so that “users can pick what they want to build against,” he said.

Another defence is to pick a code version and stick with it. Yet another is to do testing on virtual machines, “but that, of course, doesn't necessarily play so well with classic HPC jobs”.

And even then, “you have OS distribution churn underneath all that to complicate matters further.”

In such a fluid world, testing seems prudent, at least to The Register. ®
