
HPC 2.0: The monster mash-up

It's storage size and speed that counts...


Blog: When Big Data gets big, data centers should get nervous – Part 3

In the first two parts of this trilogy, we set the stage with a discussion of how HPC and the Big Data trend, combined with the increasing use of business analytics, are presenting existing data centers with some pretty big challenges. The first article is here, the second is here, and you’re reading the third (and final) story.

The genesis of these articles was an IBM HPC analyst event in New York last month and, more specifically, the discussion of these topics by IBM’s VP of Deep Computing, Dave Turek.

In the second installment, we covered the way that analytics processes (and HPC too) often consist of several different workloads, each of which uses a massive amount of data and relies on output from some other application.

Traditionally, this would mean moving data to the systems that will be doing the processing. But today’s disk and network technology, although fast, is still too slow to feed the analytics beast at a rate that meets depth-of-analysis and time-to-solution requirements.

So if the usual data center workflow arrangements won’t work, what will? Turek spoke about a ‘Workflow Optimized System’ which, to me, looks and sounds a lot like either a big system or a cluster with virtualization in a MapReduce wrapper. In a workflow optimized infrastructure, mass moves of data over the network and to and from disk storage are minimized as far as possible.
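To make ‘minimise data movement’ concrete, here’s a minimal sketch of the MapReduce-style pattern that wrapper implies: the computation runs on whichever node already holds each chunk of data, and only the small per-chunk summaries cross the interconnect. The chunk layout, node names and helper functions here are invented purely for illustration.

from collections import defaultdict

# Hypothetical chunk placement: which node already holds which piece of data.
chunk_locations = {"chunk-0": "node-a", "chunk-1": "node-b", "chunk-2": "node-a"}

def run_map(chunk_id):
    """Runs on the node that owns the chunk; only a small summary leaves that node."""
    return {chunk_id: 1}   # stand-in for a real per-chunk computation

def run_reduce(partial_results):
    """Combines the small per-chunk summaries; the raw data itself never moves."""
    combined = defaultdict(int)
    for partial in partial_results:
        for key, count in partial.items():
            combined[key] += count
    return dict(combined)

partials = [run_map(chunk) for chunk in chunk_locations]
print(run_reduce(partials))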

Data transfers take place over the system interconnect, which could be 40GB/sec to 100GB/sec, or even 400GB/sec at the high end. Even at 40GB/sec, you’d need 320 spinning hard drives running at 128MB/sec to equal the transfer speed of the system interconnect – and that also assumes you have a network that can deliver that much bandwidth.
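As a back-of-the-envelope check on those figures (using only the numbers quoted above, and ignoring real-world overheads such as RAID and protocol costs), the arithmetic works out like this:

# Rough arithmetic, using the figures in the paragraph above.
interconnect_gbytes_per_sec = 40     # low end of the quoted interconnect range, GB/sec
drive_mbytes_per_sec = 128           # sustained throughput of one spinning drive, MB/sec

drives_needed = interconnect_gbytes_per_sec * 1024 / drive_mbytes_per_sec
print(f"Drives needed to match {interconnect_gbytes_per_sec}GB/sec: {drives_needed:.0f}")
# Prints: Drives needed to match 40GB/sec: 320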

Combining different workloads on the same system or cluster presents some challenges: each workload needs a different mix of hardware resources.

Traditional HPC typically needs a fast interconnect and high core counts – disk access speed and network bandwidth aren’t all that important. Velocity analytics, by contrast, depends on high network throughput, a fast interconnect, and fast storage – not so much on core count. Volume analytics cares most about storage capacity, storage speed, and core count.

In order to maximise processing efficiency, a workflow optimized analytics infrastructure will be able to adapt on the fly to bring the right set of processing resources to the workload. While you can get this kind of resource granularity and workload management in a large SMP system – like a mainframe or commercial Unix box – these capabilities aren’t quite there yet for clusters.
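As a rough illustration of that idea (the workload categories come from the paragraphs above; the profiles and the matching logic are entirely hypothetical), a workload-aware scheduler might keep per-workload resource priorities and use them to decide what to provision:

# Hypothetical resource profiles per workload type, loosely following the
# characterisations above. A real scheduler would be far more nuanced.
workload_profiles = {
    "traditional_hpc":    {"cores": "high", "interconnect": "fast", "storage": "low",  "network": "low"},
    "velocity_analytics": {"cores": "low",  "interconnect": "fast", "storage": "high", "network": "high"},
    "volume_analytics":   {"cores": "high", "interconnect": "slow", "storage": "high", "network": "low"},
}

def priority_resources(workload):
    """Return the resources this workload type cares most about (illustrative helper)."""
    profile = workload_profiles[workload]
    return [resource for resource, need in profile.items() if need in ("high", "fast")]

for workload in workload_profiles:
    print(workload, "->", priority_resources(workload))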

There’s a lot of work happening along these lines, and some products out there right now that get us part of the way there (like ScaleMP’s memory aggregation), so we are seeing some progress. New system architectures with extensible bus mechanisms should take us farther down the road. ®
