Blog: When Big Data gets big, data centers should get nervous, Part 3

In the first two parts of this trilogy, we set the stage with a discussion of how HPC and the Big Data trend, combined with the increasing use of business analytics, are presenting existing data centers with some pretty big challenges. The first article is here, the second is here, and you’re reading the third (and final) story.

The genesis of these articles was an IBM HPC analyst event in New York last month and, more specifically, the discussion of these topics by IBM’s VP of Deep Computing, Dave Turek.

In the second installment, we covered the way that analytics processes (and HPC too) often consist of several different workloads, each of which uses a massive amount of data and relies on output from some other application.

Traditionally, this would mean moving data to the systems that will be doing the processing. But today's disk and network technology, although fast, is still too slow to feed the analytics beast at a high enough rate to meet depth-of-analysis and time-to-solution requirements.

So if the usual data center workflow arrangements won’t work, what will? Turek spoke about a ‘Workflow Optimized System’ which, to me, looks and sounds a lot like either a big system or a cluster with virtualization in a MapReduce wrapper. In a workflow optimized infrastructure, mass movement of data over the network and from disk storage is minimized to the greatest extent possible.
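To make the idea concrete, here is a minimal Python sketch of that kind of pipeline: each stage hands its output straight to the next in memory, MapReduce-style, rather than staging intermediates out to disk and reading them back. The stage names and data are invented for illustration and aren't drawn from IBM's design.

# A minimal, purely illustrative sketch of a workflow-optimized pipeline:
# each stage consumes the previous stage's output in memory, so nothing
# is staged out to disk or shipped across the network between steps.
# Stage names and data are invented for illustration only.

from functools import reduce

def ingest():
    # Stand-in for raw records already resident on the system
    return ({"sensor": i % 16, "reading": float(i % 7)} for i in range(1_000_000))

def map_stage(records):
    # Map: derive a per-record value without leaving memory
    return (r["reading"] for r in records)

def reduce_stage(values):
    # Reduce: aggregate in place; no intermediate files, no network hop
    return reduce(lambda acc, v: acc + v, values, 0.0)

if __name__ == "__main__":
    print("aggregate:", reduce_stage(map_stage(ingest())))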

Data transfers take place over the system interconnect, which could be 40GB/sec to 100GB/sec, or even 400GB/sec at the high end. Even at 40GB/sec, you’d need 320 spinning hard drives running at 128MB/sec to equal the transfer speed of the system interconnect – which also assumes you have a network that can provide the bandwidth.
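The arithmetic is easy to check. A quick sketch, assuming the 128MB/sec per-drive figure quoted above and 1GB = 1024MB:

# Back-of-envelope check of the figures above: how many spinning drives
# at 128MB/sec does it take to match a given interconnect bandwidth?
# (Ignores protocol overhead.)

def drives_needed(interconnect_gb_per_sec, drive_mb_per_sec=128):
    return interconnect_gb_per_sec * 1024 / drive_mb_per_sec

for bw in (40, 100, 400):
    print(f"{bw}GB/sec interconnect ~ {drives_needed(bw):.0f} drives at 128MB/sec")

That works out to 320 drives at 40GB/sec, and 3,200 at 400GB/sec.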

Combining different workloads on the same system or cluster presents some challenges. What each workload needs in terms of hardware resources varies.

Traditional HPC typically needs fast interconnect and high core counts – disk access speed and network bandwidth aren’t all that important. But velocity analytics depends on high network flows, a fast interconnect, and storage – not so much on core count. With volume analytics, storage size and speed and core counts are most important.

In order to maximise processing efficiency, a workflow optimized analytics infrastructure will be able to adapt on the fly to bring the right set of processing resources to the workload. While you can get this kind of resource granularity and workload management in a large SMP system – like a mainframe or commercial Unix box – these capabilities aren’t quite there yet for clusters.
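As a rough illustration of the resource-matching idea, the sketch below encodes the workload profiles from the earlier paragraph and picks out what a scheduler would emphasise for each. The profile labels and the provision() helper are hypothetical; no real scheduler or product API is implied.

# Illustrative only: a toy resource-matching step in the spirit of a
# workflow-optimized scheduler. The profiles paraphrase the workload
# descriptions above; provision() is hypothetical and does not reflect
# any real product's API.

WORKLOAD_PROFILES = {
    "traditional_hpc":    {"cores": "high",   "interconnect": "fast",   "storage": "modest", "network": "modest"},
    "velocity_analytics": {"cores": "modest", "interconnect": "fast",   "storage": "fast",   "network": "high"},
    "volume_analytics":   {"cores": "high",   "interconnect": "modest", "storage": "fast",   "network": "modest"},
}

def provision(workload):
    # Return only the resources a scheduler would emphasise for this workload
    profile = WORKLOAD_PROFILES[workload]
    return {resource: level for resource, level in profile.items() if level in ("high", "fast")}

for name in WORKLOAD_PROFILES:
    print(name, "->", provision(name))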

There’s a lot of work happening along these lines, and some products out there right now that get us part of the way there (like ScaleMP’s memory aggregation), so we are seeing some progress. New system architectures with extensible bus mechanisms should take us farther down the road. ®
