Putting the Square Kilometre Array on a Cloud
Should be enough to get free Amazon shipping, anyway
The Southern Hemisphere has less light pollution and radio interference than the Northern, so South Africa and Western Australia are on the short list for the central core site, from which an array of receptors will snake out as far as the Indian Ocean islands, up to 3,000km away.
A consortium of 67 organisations in 20 countries is working with industry vendors on the design. Construction is budgeted at $1.5bn and kicks off in 2016.
The SKA should be fully operational in 2024 and will be 10,000 times more sensitive than the best radio telescope today. The Big Questions it will help answer include the origins of the universe; the nature of Dark Matter and Dark Energy (which kind of creeps me out); and whether Einstein was right about General Relativity – we’ll find out if space is truly bendy.
Astronomers and scientists will also look around to see what locations might support life and try to figure out where magnetism comes from. (And yes, the answer is more complicated than “magnets”.)
The SKA will generate huge volumes of data. The consortium is working on a test site that’s one per cent the size of the full-on SKA and will spit out raw data at 60 terabits/sec. After some level of correlation and other processing, the rate settles down to 1GB/sec of data to be stored and analyzed.
In operation, SKA will generate 1TB/sec of pre-processed data, which would equal an Exabyte of data every 13 days. Even with much more aggregation, we’re talking about Exabytes of data.
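That "13 days" figure checks out if you read "exabyte" in the binary sense (2^60 bytes). A quick back-of-the-envelope verification:

```python
# Sanity check: how long does 1TB/sec take to fill an exabyte?
TB_PER_SEC = 10**12        # 1 terabyte (decimal) per second
EXABYTE = 2**60            # binary exabyte (EiB), about 1.15e18 bytes

seconds = EXABYTE / TB_PER_SEC
days = seconds / 86_400    # 86,400 seconds per day
print(f"{days:.1f} days per exabyte")  # roughly 13 days
```

With a decimal exabyte (10^18 bytes) it comes out closer to 11.6 days, so the memo's authors were presumably counting in binary units.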
According to a source on the web (so I know that it’s true), five exabytes is big enough to log every word ever spoken by human beings. I think this also would include short words like ‘a’ and ‘an’, but I’m not sure about grunts or exclamations. Either way, it’s a lot.
So how do you process, transport, and store this much data? According to the authors of SKA Memo 134, Cloud Computing and the Square Kilometre Array, cloud storage/computing might handle the load.
They put forward some scenarios using Amazon EC2: the largest was storage of 1PB of data and continuous use of 1,000 compute nodes. The price tag is $225,000 per month plus an annual payment of $455,000, which totals a little over $3.1m per year.
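The total is easy to verify from the two figures quoted:

```python
# Rough check of the EC2 scenario totals quoted from SKA Memo 134
monthly_fee = 225_000      # $/month for 1PB of storage + 1,000 compute nodes
annual_payment = 455_000   # additional $/year payment

annual_total = monthly_fee * 12 + annual_payment
print(f"${annual_total:,} per year")  # $3,155,000 - "a little over $3.1m"
```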
They note that they might be able to negotiate a volume discount, which could reduce costs significantly. I’d also make them throw in free Amazon Prime shipping, free media streaming, and early access to super-saver items before the general public sees them.
Plugging into the Grid
On the compute side, the authors talk about potentially using a SETI@Home or Folding@Home model to carry some of the load. According to their calculations, the current capacity available from folks volunteering their spare cycles is around 5 petaflops. If it were a single system, that would put it in second place on the Top500, behind the 8-petaflop Japanese K supercomputer.
Something that captured my imagination was their speculation that the unused or underutilized capacity on multi-core, broadband-attached PCs is something like 100 times the combined processing power of the entire Top500 list.
What would be a fair price for that capacity? Perhaps somewhere north of the cost of data transport plus the incremental cost of electricity, which would still be about a tenth of the cost of any other processing available today.
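To see why that floor price is so low, here is a sketch of the transport-plus-electricity calculation. Every number below is an illustrative assumption of mine, not a figure from the memo:

```python
# Hypothetical floor price for scavenged cycles: data transport plus
# the incremental electricity a volunteer PC draws while crunching.
# All figures are illustrative assumptions, not from SKA Memo 134.
transport_cost_per_gb = 0.05   # $ per GB shipped to/from the volunteer
gb_moved_per_hour = 0.1        # work units are small relative to compute time
extra_watts = 60               # added draw of a busy multi-core CPU
electricity_per_kwh = 0.12     # $ per kWh, a typical residential rate

cost_per_hour = (transport_cost_per_gb * gb_moved_per_hour
                 + (extra_watts / 1000) * electricity_per_kwh)
print(f"${cost_per_hour:.4f} per node-hour")
```

Under these assumptions a scavenged node-hour costs a couple of cents, well under what a dedicated compute node rented by the hour cost at the time.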
This is an interesting concept - maybe a forerunner of future high performance computing (HPC). Would you sign up for free high-speed internet access in exchange for keeping your computer on all night and letting them scavenge your idle cycles? There would be no advertising on your screen, and they wouldn’t be tracking your movements and selling them to advertisers.
If the sponsors can negotiate low enough rates from the broadband providers, the numbers might just work. It’s a win-win: the user gets free bandwidth, and the sponsoring organization gets its computing tasks done at much lower cost. ®