Large Hadron Collider team flicks switch on Xeon grid
But hurry up with octo? We switch on tomorrow
CERN today unveiled the upgraded grid that will support the Large Hadron Collider when the titanic particle-punisher finally kicks back into life.
Sverre Jarp, CTO at CERN OpenLab supporting the Large Hadron Collider (LHC) buried beneath the Franco-Swiss border outside Geneva, described the network, powered by Intel Xeons, as "the largest grid service in the world".
Jarp certainly has a big job on his hands. The LHC is to resume operations "within days" following its widely publicised technical mishaps last year. Once in operation, the mighty proton-punisher will produce colossal amounts of data to be processed.
Briefing reporters at the accelerator facility today, Jarp says that the entire grid, distributed around the world, musters 160,000 cores and measures its storage in tens of petabytes.
"And we are only at the beginning of the demand," he said. "We expect to move into exabytes as the project goes on."
This is based on experience with CERN's previous LEP atomsmasher, during which the IT department moved from a mainframe to RISC servers and then to PC servers.
"We increased performance by 1000x, and the physicists used every cycle of it," says Jarp. "We expect to see the same kind of demand over time with the LHC. We really hammer our equipment, 24 hours a day, seven days a week."
Some 39,000 of the cores are physically present at CERN's Alpine headquarters, and this presents a major issue - that of power. The CERN computer building can supply only 2.9 megawatts of juice, excluding cooling and ventilation. This is tiny compared to the amounts of energy expended in the LHC's mighty accelerator ring, but apparently it's a hard limit.
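Those two figures imply a fairly tight per-core power budget. A quick back-of-envelope check, using the article's numbers (the even split across cores is a simplifying assumption - in reality the 2.9MW also feeds storage, networking and the rest):

```python
# Figures from the article; the even per-core split is an assumption.
total_power_watts = 2.9e6   # 2.9 MW available for IT load, ex-cooling
cores_on_site = 39_000      # cores physically at CERN

watts_per_core = total_power_watts / cores_on_site
# Roughly 74 W per core - and that share must cover the whole server:
# CPU, memory, disks, PSU losses, not just the core itself.
print(f"{watts_per_core:.1f} W per core")
```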
"For us, energy efficiency is vital," says Jarp.
Intel is keen to persuade CIOs to refresh their server architecture at the moment, claiming it can deliver 92 per cent power savings in an upgrade focused purely on energy efficiency (as opposed to one also aiming for more performance). This, according to Intel EMEA boss Christian Morales - also present at this morning's briefing at CERN - means that new Xeons can pay for themselves in as little as eight months.
That may or may not be the case for any individual customer, but Jarp is very pleased with the results in CERN's case. He and his people get early access to all of Intel's latest kit, and he says that they have tested all the 5500-series Xeons, getting the best results for their purposes with the 5520, which was 36 per cent more efficient than the 5410 Harpertowns previously used.
"We found an old Irwindale from 2005 sitting in a corner and we checked against that too," adds the OpenLab CTO. "Moving up from then to now, running Linux - Linux is our OS of choice - we saw 4x performance increase from the cores, and as much as 6x with the use of symmetric multi-threading."
Jarp says that LHC analysis lends itself to parallelism and multithreading, but that this means more memory - hence more power use - and CERN has to use this tech judiciously.
Then there's improved energy efficiency, very important in CERN's own computer centre as the mission there is essentially to achieve as much computing as possible for a fixed maximum amount of power.
Overall, Jarp says, including cores, threading and energy efficiency, CERN has achieved "9 to 10 times" as much work for a given amount of energy by moving from Intel's 2005 offering to the 5500s.
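The quoted figures hang together: if raw throughput rose 6x (with multi-threading) while work-per-energy rose roughly 9.5x, the new boxes must also be drawing noticeably less power for the same workload. A rough consistency check, taking the midpoint of Jarp's "9 to 10 times" as an assumption:

```python
# Article figures; picking 9.5 as the midpoint of "9 to 10 times" is mine.
smt_speedup = 6.0             # throughput gain, Irwindale -> Xeon 5500, with SMT
claimed_perf_per_energy = 9.5 # work per unit of energy, per Jarp

# perf_per_energy = speedup / power_ratio, so the implied power draw of the
# new kit relative to the 2005 machine for the same wall-clock work is:
implied_power_ratio = smt_speedup / claimed_perf_per_energy
print(f"new kit draws ~{implied_power_ratio:.0%} of the 2005 machine's power")
```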
"We are the only ones [here at CERN] who have profited from the delays [to the LHC]," he jokes. Even so he expects the performance boost to be barely adequate to the tasks ahead, as the mighty collider begins to pour out data in the coming months.
"We need to push companies like Intel," he says, already looking to the future. Intel's Morales was speaking today mainly of two socket systems each of six cores, but it was clear that Jarp - anticipating surging demand from data-glutted boffins - wished Chipzilla would get its skates on.
"We'd like to see octo to begin with," said the CTO. "Then maybe a mix of big and small cores."
CERN's entire budget is only a fraction of what Intel spends on R&D, and Morales spent much of the morning expressing gratitude for the phenomenon of the Web - a CERN invention - and attendant exponential growth in consumer chip demand, so that may not be an unreasonable demand. ®
They use all of it
because if they didn't, someone would notice and they'd never get the funding for the next round of upgrades.
The optimal state for the modern physicist is always having just a bit more data than you have time to analyse.
Tim Berners-Lee worked at CERN while inventing HTML.
Shoot the commenter, not the reporter.
Could they build a server and bury it on the other side of the planet, just in case anything goes wrong?
Servers on the other side of the planet?
Errm, that's what Grid computing does. The compute resource at CERN is only a small fraction of what we use.
Possibly their reasoning for not doing so was along these lines:
possibility that something goes wrong enough to destroy the computer center 1km away: 1e-3*
possibility that something goes just wrong enough to destroy the computer center 1km away but not the computer center 6000km away (through the earth): 1e-30*
cost of an additional computer center: factor 2
(cost of a transmission line able to transmit all experiment data ahead of the wave of destruction: huge?)
*numbers are obviously made up
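The back-of-envelope risk argument in the comments above can be made explicit. Using the commenter's admittedly made-up numbers, the backup only earns its keep in the sliver of disaster scenarios that destroy the primary but spare the remote site:

```python
# The commenter's made-up numbers, taken at face value.
p_primary_destroyed = 1e-3   # something destroys the center 1 km away
p_only_primary = 1e-30       # ...but spares a center 6000 km away

extra_cost_factor = 2.0      # a second computer center doubles the bill

# Conditional on losing the primary, how often does the backup actually help?
benefit_fraction = p_only_primary / p_primary_destroyed
print(f"backup helps in {benefit_fraction:.0e} of loss scenarios, "
      f"at {extra_cost_factor:.0f}x the cost")
```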