When Big Data gets big, data centers should get nervous – Part 3

In the first two parts of this trilogy, we set the stage with a discussion of how HPC and the Big Data trend, combined with the increasing use of business analytics, are presenting existing data centers with some pretty big challenges. The first article is here, the second is here, and you’re reading the third (and final) story.

The genesis of these articles was an IBM HPC analyst event in New York last month and, more specifically, the discussion of these topics by IBM’s VP of Deep Computing, Dave Turek.

In the second installment, we covered the way that analytics processes (and HPC too) often consist of several different workloads, each of which uses a massive amount of data and relies on output from some other application.

Traditionally, this would mean moving data to the systems that will be doing the processing. But today’s disk and network technology, although fast, is still too slow to feed the analytics beast at a high enough rate to meet depth-of-analysis and time-to-solution requirements.

So if the usual data center workflow arrangements won’t work, what will? Turek spoke about a ‘Workflow Optimized System’ which, to me, looks and sounds a lot like either a big system or a cluster with virtualization in a MapReduce wrapper. In a workflow-optimized infrastructure, mass movement of data across the network and to and from disk storage is minimized to the greatest extent possible.

Data transfers take place over the system interconnect, which could be 40GB/sec to 100GB/sec, or even 400GB/sec at the high end. Even at 40GB/sec, you’d need 320 spinning hard drives running at 128MB/sec to equal the transfer speed of the system interconnect – and that assumes you also have a network that can deliver that bandwidth.
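The disk arithmetic above is easy to sanity-check. A rough sketch (using the article’s illustrative figures, and counting 1GB as 1024MB):

```python
# Back-of-the-envelope check: how many spinning disks does it take to
# match a given system interconnect? Figures are illustrative, per the
# article (128MB/sec per drive is an assumed sustained transfer rate).

def drives_needed(interconnect_gb_s, drive_mb_s=128):
    """Number of drives at drive_mb_s MB/sec needed to equal an
    interconnect running at interconnect_gb_s GB/sec (1GB = 1024MB)."""
    return (interconnect_gb_s * 1024) / drive_mb_s

print(drives_needed(40))   # 320.0 drives for a 40GB/sec interconnect
print(drives_needed(400))  # 3200.0 drives at the 400GB/sec high end
```

At the 400GB/sec high end, the drive count climbs to 3,200 – which is the heart of the argument for keeping data on the interconnect rather than shuttling it through disk and network.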

Combining different workloads on the same system or cluster presents some challenges. What each workload needs in terms of hardware resources varies.

Traditional HPC typically needs fast interconnect and high core counts – disk access speed and network bandwidth aren’t all that important. But velocity analytics depends on high network flows, a fast interconnect, and storage – not so much on core count. With volume analytics, storage size and speed and core counts are most important.

In order to maximise processing efficiency, a workflow optimized analytics infrastructure will be able to adapt on the fly to bring the right set of processing resources to the workload. While you can get this kind of resource granularity and workload management in a large SMP system – like a mainframe or commercial Unix box – these capabilities aren’t quite there yet for clusters.
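What that on-the-fly matching might look like can be sketched very roughly as a scoring problem: rank each node type by how well its resources line up with a workload’s profile. Everything here – the workload names, node types, and weightings – is invented for illustration; it is not how any actual product does it.

```python
# Toy sketch of matching workload profiles to node types, following the
# resource priorities described above. All names and weights are made up.

# Relative needs: (cores, interconnect, network, storage), 1 = low, 3 = high
WORKLOAD_NEEDS = {
    "traditional_hpc": (3, 3, 1, 1),  # core count and interconnect dominate
    "velocity":        (1, 3, 3, 2),  # network flows and interconnect dominate
    "volume":          (3, 1, 1, 3),  # cores plus storage size and speed
}

# What each (hypothetical) node type offers, on the same scale
NODE_OFFERS = {
    "compute_dense": (3, 3, 1, 1),
    "io_dense":      (1, 2, 3, 3),
}

def best_node(workload):
    """Pick the node type whose offered resources best match the
    workload's needs (higher dot product = better fit)."""
    needs = WORKLOAD_NEEDS[workload]
    def score(node):
        return sum(n * o for n, o in zip(needs, NODE_OFFERS[node]))
    return max(NODE_OFFERS, key=score)

print(best_node("traditional_hpc"))  # compute_dense
print(best_node("volume"))           # io_dense
```

A big SMP box effectively sidesteps this matching problem by putting all the resources in one place; the cluster version requires the scheduler to do it explicitly, which is where the gap Turek describes still sits.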

There’s a lot of work happening along these lines, and some products out there right now that get us part of the way there (like ScaleMP’s memory aggregation), so we are seeing some progress. New system architectures with extensible bus mechanisms should take us farther down the road. ®
