Scottish uni slams on the Accelerator to boost UK boffinry
Just don't call new supercomputer system a cloud
The boffins who run two big supercomputers on behalf of the UK government and academic research institutions - as well as one smaller machine aimed at industrial users - have converted those machines into an HPC utility called Accelerator.
And they want you to buy core-hours on their machines instead of wasting your money installing your own supercomputers.
The Accelerator service offers compute capacity on demand on the Hector Cray XE6 supercomputer installed at the Edinburgh Parallel Computing Centre (EPCC) at the University of Edinburgh, one of the most powerful boxes on the planet. Accelerator also offers compute cycles on a shiny new BlueGene/Q machine, which has no nickname, that was installed at EPCC last year at the same time as the upgrade to the Hector system.
The phase 3 upgrade to the Hector system is a Cray XE6 system using the "Gemini" interconnect and the "Interlagos" Opteron 6200 processor from Advanced Micro Devices, running at 2.3GHz. Each socket has 16GB of memory, and the system has 30 cabinets with a total of 90,112 cores and 88TB of main memory. The peak theoretical performance of Hector is 829 teraflops, and on the Linpack Fortran benchmark test it can deliver about 660.2 teraflops of number-crunching oomph. It is precisely the kind of machine that most commercial companies, and even many governments, cannot afford to build. The machine comes with Cray's variant of SUSE Linux and its compiler stack. The Gemini interconnect implements a 3D torus of the nodes so they can share work.
The Hector Cray XE6 supercomputer
Ditto for the BG/Q box, which is an IBM BlueGene/Q with 98,304 cores that has a peak performance of 1.26 petaflops and actually delivers 1.04 petaflops where the Fortran rubber hits the Power A2 processor road. Those cores run at 1.6GHz, by the way, and if you do the math, each Power A2 core delivers about 44 per cent more Linpack flops than an Opteron 6200 core does, even though the Opterons run at a 44 per cent higher clock speed. So if you think architecture doesn't matter, and that you don't have to get a machine tuned specifically for your workloads, you are wrong.
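If you want to check that back-of-the-envelope claim yourself, the arithmetic falls straight out of the Linpack and core-count figures quoted above:

```python
# Per-core Linpack arithmetic from the figures in the article.
hector_linpack_flops = 660.2e12   # 660.2 teraflops on Linpack
hector_cores = 90_112
bgq_linpack_flops = 1.04e15       # 1.04 petaflops on Linpack
bgq_cores = 98_304

hector_per_core = hector_linpack_flops / hector_cores  # ~7.3 gigaflops/core
bgq_per_core = bgq_linpack_flops / bgq_cores           # ~10.6 gigaflops/core

print(f"Power A2 per-core advantage: {bgq_per_core / hector_per_core - 1:.0%}")
print(f"Opteron clock advantage:     {2.3 / 1.6 - 1:.0%}")
```

Both ratios land at about 44 per cent, which is the point: the slower-clocked Power A2 cores still out-crunch the Opterons per core on this benchmark.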
Anyway, the BG/Q machine runs a modified version of Red Hat Enterprise Linux on the head and login nodes and has a trimmed-down Linux kernel running on the Power A2 compute nodes. POSIX, MPI, and OpenMP programming models are supported on this box, which uses IBM's own XL Fortran, C, and C++ compilers. The BlueGene/Q machine has a 5D torus interconnect created by IBM (and based on research done at Columbia University in the 1990s for a machine designed expressly to run quantum chromodynamics applications). It has software installed to do computational fluid dynamics and molecular dynamics, so if that is you, then George Graham, business development manager at EPCC, who is in charge of the Accelerator program, wants to speak to you.
The third machine in the Accelerator utility is called Indy, which is a modest cluster of two dozen four-socket Opteron 6200 servers with a total of 1,536 cores and 6.5TB of memory; a few head nodes are tossed in to do compiling and to run the Platform cluster management and job scheduling tools. Eventually, if industrial customers require it, each node can have up to two Nvidia Tesla GPU coprocessors added to it. This machine, Graham tells El Reg, is not funded by the UK government and has been set up explicitly for use by industry rather than government and academic researchers.
The BlueGene/Q super at EPCC
The Accelerator service is a utility, so if you want to use it, it costs money.
"It's definitely not free, and it is not all that cloudy, to be honest," says Graham. So don't think the Accelerator service is trying to replicate the easy-on, easy-off experience of Amazon Web Services' HPC instances. But what Accelerator can offer is access to a real supercomputer with a real interconnect, not some 10 Gigabit Ethernet pipes. And it can do so at an attractive price that is certainly lower than trying to buy your own XE6 or BlueGene/Q system.
So far, EPCC has lined up a few customers, including Oxford Nanopore Technologies, FIOS Genomics, Deep Casing Tools, Rock Solid Imaging, and Vattenfall AB, with a bunch of other firms currently doing pilots and benchmarks. Vattenfall, for instance, used the Hector Cray machine to model the entire Lillgrund offshore wind farm to help it understand how best to expand it.
Renting time on the Hector machine through the Accelerator service costs 10 pence per core-hour, not including VAT. You have to buy in 32-core increments, so the base price is £3.20 per hour, and Graham says that if you bought 32 nodes, or 1,024 cores, it would cost you £102.40 and would run a simulation in an hour that would take around 40 days on your desktop machine (if it isn't too puny, that is). The Indy machine costs 5.1 pence per core-hour to rent. Pricing on the BlueGene/Q machine has not been set yet, but it will probably be on the same order of magnitude as for the Hector box, says Graham. If that is the case, and you have workloads that scale well on BlueGene/Q, you could save a bundle, given that those Power A2 cores each deliver more Linpack flops than the Opteron cores in Hector do.
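The pricing arithmetic above is simple enough to sketch. This assumes, per Graham's figures, a 10p core-hour rate billed in 32-core increments (the function name is ours, not EPCC's):

```python
# Hector pricing sketch: 10p per core-hour, bought in 32-core increments.
PENCE_PER_CORE_HOUR = 10
CORES_PER_INCREMENT = 32

def hector_cost_pounds(cores, hours):
    """Cost in pounds (ex VAT), rounding cores up to the next 32-core increment."""
    increments = -(-cores // CORES_PER_INCREMENT)  # ceiling division
    billed_cores = increments * CORES_PER_INCREMENT
    return billed_cores * PENCE_PER_CORE_HOUR * hours / 100

print(hector_cost_pounds(32, 1))    # 3.2   -- the base increment
print(hector_cost_pounds(1024, 1))  # 102.4 -- Graham's 32-node example
```

Note that an awkward core count rounds up: asking for 33 cores bills you for 64.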
What you need to do, of course, is run tests of your code on both Hector and BlueGene/Q, see where it runs best, and do a little price/performance analysis to reckon which machine should run the job. At least with Accelerator you have a choice. On Amazon EC2, you get what they give you. And while I am thinking about it, you should price the Accelerator service against the Amazon EC2 HPC images as well and make sure you pick the right service, at the right price, for your workloads.
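That price/performance reckoning amounts to benchmarking your job on each box and running it wherever pounds-per-job comes out lowest. A minimal sketch: the Hector and Indy rates below are the published ones, but the BlueGene/Q rate and all the core counts and runtimes are made-up placeholders for your own benchmark numbers:

```python
# Pick the cheapest machine for a job, given benchmarked runtimes.
def pounds_per_job(pence_per_core_hour, cores, hours):
    return pence_per_core_hour * cores * hours / 100

# Hypothetical benchmark results for one workload; only the 10p (Hector)
# and 5.1p (Indy) rates come from the article. BG/Q pricing was unset.
candidates = {
    "Hector":     pounds_per_job(10,  1024, 2.0),
    "BlueGene/Q": pounds_per_job(10,  2048, 0.9),  # assumed same rate as Hector
    "Indy":       pounds_per_job(5.1, 512,  5.0),
}
best = min(candidates, key=candidates.get)
print(best, candidates[best])
```

The cheapest machine is not necessarily the fastest one, which is exactly why the benchmarking step matters.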
The prices cited above are for on-demand access to Hector, BlueGene/Q, and Indy at the EPCC, which has created its own job queuing program to manage the distribution of work across the machines. Generally speaking, Hector has around 25 per cent of its capacity free at any given time, according to Graham, and the BlueGene/Q and Indy machines are so shiny that they don't have much work yet. So you might be able to get a real deal there if you can talk fast. Over time, UK government agencies and academic researchers will load them up.
If you want to reserve a chunk of any of the machines in the Accelerator service and run your work outside of the job queue on a priority basis, that is also an option. But you have to pay a 50 per cent premium over the prices listed above for that privilege. ®
Re: Didn't SUN try this?
Unless I'm mistaken, the Edinburgh University "Eddie" cluster (for those of us who can run isolated parallel work rather than heavily communicating work) uses Sun Grid Engine or something similar for scheduling jobs, so the Sun grid work wasn't useless.
OK, fess up, who in here is just in the larval stage or is one of those 12-year-old "haxxors" one hears about?
"but buying processing power on educational/governmental supers"
Agree entirely. We had (I'm now retired) several in-house Linux clusters of 1,024 and 2,048 nodes for computationally intensive jobs, but would also buy time on more powerful systems. It's the norm in many areas of science.