Grid computing meets data flow challenge

A significant milestone

Scientists at CERN announced yesterday that eight major computing centres have managed to sustain an average continuous data flow of 600 megabytes per second for 10 days. It is a significant milestone for scientific grid computing.

The total volume of data transmitted between CERN, the European Organisation for Nuclear Research near Geneva, and seven sites in the US and Europe - amounting to 500 terabytes - would take about 250 years to download using a typical 512-kilobit per second household broadband connection.
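Those figures are easy to sanity-check. The short Python sketch below (an illustration only, assuming decimal units, i.e. 1 megabyte = 10^6 bytes and 1 terabyte = 10^12 bytes) reproduces both the roughly 500-terabyte total and the 250-year download estimate:

  rate_bytes_per_s = 600 * 10**6            # sustained flow: 600 megabytes per second
  duration_s = 10 * 24 * 3600               # ten days, in seconds
  total_tb = rate_bytes_per_s * duration_s / 10**12
  print(total_tb)                           # about 518 TB, consistent with the quoted 500 TB

  broadband_bytes_per_s = 512 * 10**3 / 8   # 512 kilobits per second is 64 kilobytes per second
  download_s = 500 * 10**12 / broadband_bytes_per_s
  print(download_s / (365 * 24 * 3600))     # about 248 years, i.e. roughly 250 years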

In basic terms, grid computing can be described as a network of computers and data storage systems, brought together to share computing power. Where a computer is not being used, or is using only a fraction of its power, the grid will allow that power to be used by someone else.

The concept differs from the World Wide Web, which only enables communication through browsers, because it gives users access to actual computing resources. It also differs from peer-to-peer computing, which enables file-sharing directly between individual users, because a grid shares resources among many participants, not just between pairs.

The potential of computer grids is enormous, and when the concept becomes mainstream it promises to transform the computing power available to the individual. At present, a user is restricted by the power of their own computer. Once the grid comes online, that restriction disappears: the cheapest, oldest model will have access to the computing resources of millions of other computers worldwide.

Scientists at CERN are collaborating with scientists worldwide in the creation of what is hoped will be the world's largest computer grid, in order to analyse the massive volume of data that will be produced when CERN's latest and largest ever particle accelerator (known as the Large Hadron Collider, or LHC) becomes operational in 2007.

The exercise completed yesterday was the second in a series of four service challenges designed to ramp up to the level of computing capacity, reliability and ease of use that will be required by the worldwide community of over 6000 scientists working on the LHC experiments.

Other participants included Brookhaven National Laboratory and Fermi National Accelerator Laboratory (Fermilab) in the US, Forschungszentrum Karlsruhe in Germany, CCIN2P3 in France, INFN-CNAF in Italy, SARA/NIKHEF in the Netherlands and Rutherford Appleton Laboratory in the UK.

"This service challenge is a key step on the way to managing the torrents of data anticipated from the LHC," said Jamie Shiers, manager of the service challenges at CERN. "When the LHC starts operating in 2007, it will be the most data-intensive physics instrument on the planet, producing more than 1500 megabytes of data every second for over a decade."

Fermilab Computing Division head Vicky White welcomed the results of the service challenge.

"High energy physicists have been transmitting large amounts of data around the world for years," she said. "But this has usually been in relatively brief bursts and between two sites. Sustaining such high rates of data for days on end to multiple sites is a breakthrough, and augurs well for achieving the ultimate goals of LHC computing."

In fact, the test exceeded expectations, sustaining roughly one-third of the ultimate data rate expected from the LHC and reaching peak rates of over 800 megabytes per second.

The next service challenge, due to start in the summer, will extend to many other computing centres and aim at a three-month period of stable operations. That challenge will allow many of the scientists involved to test their computing models for handling and analysing the data from the LHC experiments.

Copyright © 2005, OUT-LAW.com
