Grid computing meets data flow challenge

A significant milestone


Scientists at CERN announced yesterday that eight major computing centres have managed to sustain an average continuous data flow of 600 megabytes per second for 10 days. It is a significant milestone for scientific grid computing.

The total volume of data transmitted between CERN, the European Organisation for Nuclear Research near Geneva, and seven sites in the US and Europe - amounting to 500 terabytes - would take about 250 years to download using a typical 512-kilobit per second household broadband connection.
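
As a quick sanity check on those figures, the short Python sketch below reproduces them from the numbers quoted in this article. It assumes decimal units (1 MB = 10^6 bytes, 1 TB = 10^12 bytes) and a fully saturated 512 kbit/s link, so the results are approximate.

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumes decimal units: 1 MB = 1e6 bytes, 1 TB = 1e12 bytes, 1 kbit = 1e3 bits.

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY

# 600 MB/s sustained for 10 days is roughly the 500 TB quoted.
sustained_bytes = 600e6 * 10 * SECONDS_PER_DAY
print(f"Volume moved in the challenge: {sustained_bytes / 1e12:.0f} TB")   # ~518 TB

# How long 500 TB would take over a saturated 512 kbit/s broadband line.
download_seconds = (500e12 * 8) / 512e3
print(f"Download time at 512 kbit/s: {download_seconds / SECONDS_PER_YEAR:.0f} years")  # ~248 years
```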

In basic terms, grid computing can be described as a network of computers and data storage systems brought together to share computing power. When a computer is idle, or using only a fraction of its capacity, the grid allows that spare power to be used by someone else.
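
To make the idea concrete, here is a deliberately simplified, purely hypothetical sketch (not software CERN actually uses) of how a grid-style coordinator might hand work to whichever machines report spare capacity:

```python
# Toy illustration of the grid idea: a coordinator assigns jobs to whichever
# nodes currently have spare capacity. Node names and the 0.25 "cost" per job
# are made up for illustration; real grid middleware is far more involved.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    load: float  # fraction of the machine currently busy, 0.0 to 1.0


def dispatch(jobs: list[str], nodes: list[Node], idle_threshold: float = 0.5) -> dict[str, str]:
    """Assign each job to the least-loaded node that still has spare capacity."""
    assignments: dict[str, str] = {}
    for job in jobs:
        idle_nodes = [n for n in nodes if n.load < idle_threshold]
        if not idle_nodes:
            break  # no spare capacity anywhere on the grid right now
        target = min(idle_nodes, key=lambda n: n.load)
        assignments[job] = target.name
        target.load += 0.25  # pretend each job occupies a quarter of a machine
    return assignments


if __name__ == "__main__":
    nodes = [Node("cern-01", 0.9), Node("fermilab-01", 0.1), Node("ral-01", 0.3)]
    print(dispatch(["analyse-run-42", "analyse-run-43"], nodes))
```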

The concept differs from the World Wide Web, which only shares information through browsers, because the grid gives access to actual computing resources. It also differs from peer-to-peer computing, which enables file-sharing between pairs of users, because the grid shares resources among many machines, not just two.

The potential of computer grids is enormous: when the concept becomes mainstream, it promises to transform the computing power available to the individual. At present, a computer user is restricted by the power of his own machine. Once the grid comes online there will be no such restriction: the cheapest, oldest model will have access to the computing resources of millions of other computers worldwide.

Scientists at CERN are collaborating with colleagues worldwide to create what they hope will be the world's largest computing grid, in order to analyse the massive volume of data that will be produced when CERN's latest and largest-ever particle accelerator, the Large Hadron Collider (LHC), becomes operational in 2007.

The exercise completed yesterday was the second in a series of four service challenges designed to ramp up to the level of computing capacity, reliability and ease of use that will be required by the worldwide community of over 6000 scientists working on the LHC experiments.

Other participants included Brookhaven National Laboratory and Fermi National Accelerator Laboratory (Fermilab) in the US, Forschungszentrum Karlsruhe in Germany, CCIN2P3 in France, INFN-CNAF in Italy, SARA/NIKHEF in the Netherlands and Rutherford Appleton Laboratory in the UK.

"This service challenge is a key step on the way to managing the torrents of data anticipated from the LHC," said Jamie Shiers, manager of the service challenges at CERN. "When the LHC starts operating in 2007, it will be the most data-intensive physics instrument on the planet, producing more than 1500 megabytes of data every second for over a decade."

Fermilab Computing Division head Vicky White welcomed the results of the service challenge.

"High energy physicists have been transmitting large amounts of data around the world for years," she said. "But this has usually been in relatively brief bursts and between two sites. Sustaining such high rates of data for days on end to multiple sites is a breakthrough, and augurs well for achieving the ultimate goals of LHC computing."

In fact, the test exceeded expectations, sustaining roughly one-third of the ultimate data rate expected from the LHC and reaching peak rates of over 800 megabytes per second.

The next service challenge, due to start in the summer, will extend to many other computing centres and aim at a three-month period of stable operations. That challenge will allow many of the scientists involved to test their computing models for handling and analysing the data from the LHC experiments.

Copyright © 2005, OUT-LAW.com
