Grid computing meets data flow challenge

A significant milestone

Scientists at CERN announced yesterday that eight major computing centres have managed to sustain an average continuous data flow of 600 megabytes per second for 10 days. It is a significant milestone for scientific grid computing.

The total volume of data transmitted between CERN, the European Organisation for Nuclear Research near Geneva, and seven sites in the US and Europe - amounting to 500 terabytes - would take about 250 years to download using a typical 512-kilobit per second household broadband connection.
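
As a rough check of those figures (a back-of-envelope sketch using decimal prefixes, not part of the original report): 600 megabytes per second sustained for 10 days does indeed come to roughly 500 terabytes, and pulling that volume over a 512-kilobit per second line would take close to two and a half centuries.

    # Back-of-envelope check of the figures above. Decimal prefixes assumed:
    # 1 MB = 10**6 bytes, 1 TB = 10**12 bytes, 512 kbit/s = 512,000 bits/s.
    rate_bytes_per_s = 600 * 10**6                  # sustained rate: 600 MB/s
    duration_s = 10 * 24 * 3600                     # 10 days in seconds
    total_bytes = rate_bytes_per_s * duration_s
    print(f"volume moved: {total_bytes / 10**12:.0f} TB")        # ~518 TB

    broadband_bits_per_s = 512_000                  # 512 kbit/s household line
    download_s = (500 * 10**12 * 8) / broadband_bits_per_s
    years = download_s / (365 * 24 * 3600)
    print(f"download time at 512 kbit/s: {years:.0f} years")     # ~248 years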

In basic terms, grid computing can be described as a network of computers and data storage systems, brought together to share computing power. Where a computer is not being used, or is using only a fraction of its power, the grid will allow that power to be used by someone else.

The concept differs from the World Wide Web, which only enables communication through browsers, because the grid gives access to the computing resources themselves. It is also different from peer-to-peer computing, which enables file-sharing directly between pairs of users, because the grid shares resources among many participants, not just two.
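
To make the idea concrete, the sketch below is a toy illustration only (the names are hypothetical and it stands in for no real grid middleware): work units are handed out to whichever registered machines happen to be idle.

    # Toy illustration of grid-style resource sharing (hypothetical names,
    # not any real grid middleware or scheduler API).
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        busy: bool = False

    def dispatch(jobs, nodes):
        """Assign each job to the first idle node; queue the rest."""
        assignments, backlog = [], []
        for job in jobs:
            idle = next((n for n in nodes if not n.busy), None)
            if idle:
                idle.busy = True
                assignments.append((job, idle.name))
            else:
                backlog.append(job)
        return assignments, backlog

    nodes = [Node("cern-tier0"), Node("fermilab", busy=True), Node("ral")]
    print(dispatch(["batch-1", "batch-2", "batch-3"], nodes))
    # -> two batches placed on the idle machines, one left waiting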

The potential of computer grids is enormous, and when the concept becomes mainstream it promises to transform the computing power available to the individual. At present, a computer user is restricted by the power of their own machine. Once the grid comes online, that restriction disappears: the cheapest, oldest model will have access to the computing resources of millions of other computers worldwide.

Scientists at CERN are collaborating with scientists worldwide in the creation of what is hoped will be the world's largest computer grid, in order to analyse the massive volume of data that will be produced when CERN's latest and largest ever particle accelerator (known as the Large Hadron Collider, or LHC) becomes operational in 2007.

The exercise completed yesterday was the second in a series of four service challenges designed to ramp up to the level of computing capacity, reliability and ease of use that will be required by the worldwide community of over 6000 scientists working on the LHC experiments.

Other participants included Brookhaven National Laboratory and Fermi National Accelerator Laboratory (Fermilab) in the US, Forschungszentrum Karlsruhe in Germany, CCIN2P3 in France, INFN-CNAF in Italy, SARA/NIKHEF in the Netherlands and Rutherford Appleton Laboratory in the UK.

"This service challenge is a key step on the way to managing the torrents of data anticipated from the LHC," said Jamie Shiers, manager of the service challenges at CERN. "When the LHC starts operating in 2007, it will be the most data-intensive physics instrument on the planet, producing more than 1500 megabytes of data every second for over a decade."

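For a sense of scale, a rough calculation from the quoted rate (assuming, purely for illustration, round-the-clock output at 1500 megabytes per second; actual running time per year would be lower) works out to tens of petabytes a year.

    # Rough scale of the quoted LHC figure, assuming (for illustration only)
    # continuous output at 1500 MB/s; real annual running time is lower.
    lhc_rate_bytes_per_s = 1500 * 10**6
    per_year_bytes = lhc_rate_bytes_per_s * 365 * 24 * 3600
    print(f"~{per_year_bytes / 10**15:.0f} PB per year, "
          f"~{10 * per_year_bytes / 10**15:.0f} PB over a decade")
    # -> ~47 PB per year, ~473 PB over a decade
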
Fermilab Computing Division head Vicky White welcomed the results of the service challenge.

"High energy physicists have been transmitting large amounts of data around the world for years," she said. "But this has usually been in relatively brief bursts and between two sites. Sustaining such high rates of data for days on end to multiple sites is a breakthrough, and augurs well for achieving the ultimate goals of LHC computing."

In fact, the test exceeded expectations, sustaining roughly one-third of the ultimate data rate from the LHC and reaching peak rates of over 800 megabytes per second.

The next service challenge, due to start in the summer, will extend to many other computing centres and aim at a three-month period of stable operations. That challenge will allow many of the scientists involved to test their computing models for handling and analysing the data from the LHC experiments.

Copyright © 2005, OUT-LAW.com
