Grid computing meets data flow challenge

A significant milestone

Scientists at CERN announced yesterday that eight major computing centres have managed to sustain an average continuous data flow of 600 megabytes per second for 10 days. It is a significant milestone for scientific grid computing.

The total volume of data transmitted between CERN, the European Organisation for Nuclear Research near Geneva, and seven sites in the US and Europe - amounting to 500 terabytes - would take about 250 years to download using a typical 512-kilobit per second household broadband connection.
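
For readers who want to check those figures, a quick back-of-envelope calculation in Python (a sketch only, assuming decimal units - 1 TB = 10^12 bytes - and a connection running flat out with no overhead) bears them out:

    # Rough check of the figures above: 600 MB/s sustained for 10 days,
    # and 500 TB pulled down a 512 kbit/s household broadband line.
    SECONDS_PER_DAY = 86_400
    SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

    total_bytes = 600e6 * 10 * SECONDS_PER_DAY            # 600 MB/s for 10 days
    print(f"Data moved: {total_bytes / 1e12:.0f} TB")     # ~518 TB, i.e. roughly 500 TB

    download_s = (500e12 * 8) / 512_000                   # 500 TB over a 512 kbit/s line
    print(f"Download time: {download_s / SECONDS_PER_YEAR:.0f} years")  # ~248 years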

In basic terms, grid computing can be described as a network of computers and data storage systems, brought together to share computing power. Where a computer is not being used, or is using only a fraction of its power, the grid will allow that power to be used by someone else.

The concept differs from the World Wide Web, which only enables communication through browsers, because it actually allows access to computer resources. It is also different from peer-to-peer computing, which enables file-sharing between two users, because it allows sharing of resources among many, not just two.
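
In code terms, the resource-sharing idea is simply a scheduler that hands queued work to whichever machines have spare capacity. The toy Python sketch below (an illustration only, not how the LHC grid is actually built; the machine names and per-job cost are invented) captures the principle:

    from dataclasses import dataclass, field

    @dataclass
    class Machine:
        name: str
        load: float                     # fraction of CPU in use, 0.0 - 1.0
        jobs: list = field(default_factory=list)

        def spare_capacity(self) -> float:
            return 1.0 - self.load

    def schedule(jobs, machines, cost_per_job=0.25):
        """Give each job to the least-loaded machine that can still take it."""
        unplaced = []
        for job in jobs:
            candidates = [m for m in machines if m.spare_capacity() >= cost_per_job]
            if not candidates:
                unplaced.append(job)    # nothing idle enough right now
                continue
            target = max(candidates, key=Machine.spare_capacity)
            target.jobs.append(job)
            target.load += cost_per_job
        return unplaced

    machines = [Machine("idle-desktop", 0.05), Machine("lab-pc", 0.60), Machine("busy-server", 0.95)]
    leftover = schedule([f"analysis-{i}" for i in range(6)], machines)
    for m in machines:
        print(m.name, m.jobs)           # the idle desktop ends up doing most of the work
    print("unplaced:", leftover)

A real grid adds authentication, data movement and fault tolerance on top, but the core decision - run the work where the cycles happen to be free - is the same.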

The potential of computer grids is enormous, and when the concept becomes mainstream it holds the promise of transforming the computing power available to the individual. At present, a computer user is restricted by the power of his own computer. When the grid comes online, there will be no such restriction: the cheapest, oldest machine will have access to the computing resources of millions of other computers worldwide.

Scientists at CERN are collaborating with colleagues worldwide to create what it is hoped will be the world's largest computer grid, in order to analyse the massive volume of data that will be produced when CERN's latest and largest-ever particle accelerator, the Large Hadron Collider (LHC), becomes operational in 2007.

The exercise completed yesterday was the second in a series of four service challenges designed to ramp up to the level of computing capacity, reliability and ease of use that will be required by the worldwide community of over 6000 scientists working on the LHC experiments.

Other participants included Brookhaven National Laboratory and Fermi National Accelerator Laboratory (Fermilab) in the US, Forschungszentrum Karlsruhe in Germany, CCIN2P3 in France, INFN-CNAF in Italy, SARA/NIKHEF in the Netherlands and Rutherford Appleton Laboratory in the UK.

"This service challenge is a key step on the way to managing the torrents of data anticipated from the LHC," said Jamie Shiers, manager of the service challenges at CERN. "When the LHC starts operating in 2007, it will be the most data-intensive physics instrument on the planet, producing more than 1500 megabytes of data every second for over a decade."

Fermilab Computing Division head Vicky White welcomed the results of the service challenge.

"High energy physicists have been transmitting large amounts of data around the world for years," she said. "But this has usually been in relatively brief bursts and between two sites. Sustaining such high rates of data for days on end to multiple sites is a breakthrough, and augurs well for achieving the ultimate goals of LHC computing."

In fact, the test exceeded expectations by sustaining roughly one-third of the ultimate data rate expected from the LHC and reaching peak rates of over 800 megabytes per second.

The next service challenge, due to start in the summer, will extend to many other computing centres and aim at a three-month period of stable operations. That challenge will allow many of the scientists involved to test their computing models for handling and analysing the data from the LHC experiments.

Copyright © 2005, OUT-LAW.com
