Cambridge’s HPC-as-a-service for boffins, big and small

However, a ‘step change in data storage’ needed

Compute power is willing, but the storage is weak

“Smaller firms might go to an integrator they have bought workstation hardware and software from before and ask: ‘Can you help me get to the next level of performance?’ We enable those integrators to offer a level of integration between hardware and software that gives the customer something more like an appliance, a machine that comes already loaded as a small cluster. That’s for the customers who really want to keep things in house,” according to Gillich.

“Then there are those whose workloads vary all the time and who don’t want to buy a machine for their highest rate of work. They often work with cloud service providers, who build them whatever kind of cloud solution fits (private, public or hybrid) so they can actually bring their workload along and stay flexible.”

But while the compute power is willing, there’s still the age-old problem of storage. Massive data projects like the LHC throw away reams of information because they just can’t keep it all.

The data has to be analysed for valuable content and anything that’s not up to scratch promptly gets deleted to make room for the next batch of experimental results.

Cambridge’s Calleja has a simple rule for the projects using his HPC facility – you get what you pay for.

“Storage has to be steered by the project, we cannot judge the value of data or comment on what they can or cannot store. The deciding factor here is one of budget; the user wants to store a certain amount of data, I tell him how much it costs and he sees if he can afford it or not,” he laughed.

“We are seeing an explosion of data and the rate of growth of data is larger than the rate of growth of compute. The demand growth in data is also larger than the growth in budgets, therefore people have a problem.”

But Calleja reckons the solution to that problem is already here.

“This requires a step change in technology; we need a step change in data storage, performance and capacity per pound. But this step change is easily achievable because storage technologies have not, in the main, undergone the commoditisation that compute technologies have.”

“In the mid-to-late 1990s, HPC compute technology went through a commoditisation where we moved away from proprietary compute systems to commodity clusters. This has increased the price/performance of compute by a factor of a hundred. Storage has yet to undergo that transition,” he said.

“The step change is here; it’s just not mainstream yet because people still cling on to supplier storage solutions. People are afraid that if they lose data, they’ll get sacked.

“But it’s not sustainable, this gap between demand and budget is now becoming so large that the only way it can be dealt with is commoditisation. Commodity solutions are going to become commonplace and vendors are going to have to put up with 20 per cent margins on their storage sales.

“Once that genie is out of the bottle, you can’t put it back in again.” Over to you, Cambridge.®
