IBM to commercialize chip-design cluster tools

You'll be able to provision a cluster 'in minutes'

IBM is to commercialize the management software behind the chip design cluster used by more than 3,000 engineers to create its homegrown server processors.

The offering comes out in July and sports a set of internally developed HPC cluster management tools that provide cloud-like clustering and system-pooling capabilities. These tools do not resort to installing hypervisors on the servers to virtualize workloads and thereby make them more fluid.

HPC customers are somewhat allergic to server virtualization because of the high overhead that hypervisors, at this point in their history, still impose on I/O.

Details are sketchy, but IBM's own cluster of clusters, which runs its electronic design automation (EDA) software for designing its Power7 and System z processors, was the inspiration for Big Blue's "HPC Clouds".

That EDA super-cluster, comprising multiple clusters of x64 and Power servers, was stitched together using a new resource manager that Big Blue's techies cooked up to allow multiple simulation and analytics jobs to better share work across those individual clusters. This tool is to be sold as the HPC Management Suite for Cloud.
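IBM hasn't published how that resource manager actually works, so purely by way of illustration, here is a minimal Python sketch of the general idea: a scheduler draining a priority queue of simulation jobs and placing each one on whichever member cluster has the most free nodes. Every name in it (Cluster, Job, dispatch) is hypothetical, not IBM's API.

```python
# Illustrative only: a toy cross-cluster scheduler, not IBM's actual resource manager.
# All class and function names here are hypothetical stand-ins.
import heapq
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    free_nodes: int

@dataclass(order=True)
class Job:
    priority: int                              # lower number = more urgent (heapq is a min-heap)
    name: str = field(compare=False)
    nodes_needed: int = field(compare=False)

def dispatch(jobs, clusters):
    """Drain a priority queue of jobs, sending each one to the cluster
    with the most free nodes that can still hold it."""
    heapq.heapify(jobs)
    placements = []
    while jobs:
        job = heapq.heappop(jobs)
        candidates = [c for c in clusters if c.free_nodes >= job.nodes_needed]
        if not candidates:
            placements.append((job.name, None))    # nothing fits right now; job waits
            continue
        target = max(candidates, key=lambda c: c.free_nodes)
        target.free_nodes -= job.nodes_needed
        placements.append((job.name, target.name))
    return placements

if __name__ == "__main__":
    clusters = [Cluster("power-eda", 128), Cluster("x64-eda", 256)]
    jobs = [Job(1, "timing-sim", 200), Job(2, "power-analysis", 64), Job(3, "regression", 100)]
    for job_name, cluster_name in dispatch(jobs, clusters):
        print(f"{job_name} -> {cluster_name}")
```

The point of the sketch is simply that pooling clusters lets urgent work jump to whichever machine has headroom, which is the behavior IBM credits for its Power7 schedule gains.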

IBM says that because it created a super-cluster out of its many design systems, it could distribute and prioritize work related to the design of the Power7 processor, launched last year, in such a way that it cut chip-development costs in half and shortened the chip's design cycle by six months.

IBM is keeping mum about exactly what this HPC management tool is and how it works, but sources at the company tell El Reg that the tool will be priced on a per-node basis, starting at around $700 per node, with variations depending on node characteristics.
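Taking that rumored figure at face value, the licensing bill scales linearly with node count. The back-of-the-envelope estimate below is purely illustrative; the $700 figure is unconfirmed and real pricing will vary by node type.

```python
# Rough license estimate using the unconfirmed $700 per-node figure cited above.
PRICE_PER_NODE = 700

for nodes in (100, 1000, 4000):
    print(f"{nodes:>5} nodes: ${nodes * PRICE_PER_NODE:,}")
```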

It is not clear if the HPC cluster manager will be aware of GPU coprocessors, but one would hope so given the growing popularity of these cheap flops and the fact that alternative tools from Platform Computing, Adaptive Computing, and Bright Computing all have hooks into Nvidia Tesla GPU clusters.

IBM says that the cluster provisioning tool at the heart of the "HPC cloud" will scale to thousands of server nodes and will be able to provision a cluster "in the order of minutes". That's hard to believe, so it will be interesting to see this proven.
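IBM hasn't explained how it hits that number, but the claim is less outlandish if provisioning is fanned out across nodes in parallel rather than imaging them one at a time. The toy sketch below is purely illustrative, not IBM's implementation; provision_node and the 90-second per-node estimate are assumptions.

```python
# Illustrative only: provision nodes in parallel so wall-clock time is governed by
# the number of batches, not the sum of every node's install time.
from concurrent.futures import ThreadPoolExecutor
import time

PER_NODE_SECONDS = 90              # assumed time to image and boot one node

def provision_node(node_id: int) -> str:
    time.sleep(0.01)               # stand-in for the real imaging/boot work
    return f"node-{node_id:04d} ready"

def provision_cluster(node_count: int, concurrency: int = 256) -> float:
    """Provision node_count nodes with up to `concurrency` in flight at once,
    returning a rough wall-clock estimate for the real-world equivalent."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(provision_node, range(node_count)))
    batches = -(-node_count // concurrency)        # ceiling division
    return batches * PER_NODE_SECONDS

if __name__ == "__main__":
    estimate = provision_cluster(2000)
    print(f"~{estimate / 60:.0f} minutes for 2,000 nodes at 256-way concurrency")
```

Under those assumptions, 2,000 nodes provisioned 256 at a time comes out to roughly a dozen minutes, which is at least in the ballpark of IBM's claim.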

Customers who want to use the HPC Management Suite for Cloud will be able to use existing IBM System x, BladeCenter, and Power Systems servers as well as x64-based servers made by other vendors, the company says.

For customers starting from scratch, IBM will suggest Intelligent Clusters: pre-fabbed x64-based clusters from the System x rack, BladeCenter blade, and iDataPlex hybrid server lines, with server and storage switches from a variety of vendors and IBM storage configured with IBM's General Parallel File System (GPFS).

The server nodes can be tooled up with Red Hat Enterprise Linux 5, SUSE Linux Enterprise Server 11, or Microsoft Windows HPC Server 2008.

Prior to this announcement, Intelligent Cluster setups could be configured with IBM's xCAT cluster management tools or Adaptive Computing's various Moab suites.

It is not clear how the forthcoming HPC Management Suite for Cloud plugs into, augments, or supersedes these cluster management tools.

This being IBM, there is a services component to the HPC cloud, and in this case, Big Blue peddles a "quick start" service to help HPC shops install, configure, and optimize their clusters or glue together multiple clusters.

IBM also plans to offer industry-specific versions of its HPC cloud, starting with the Engineering Solution for Cloud, based on IBM's own EDA clusters and aimed at electronics, auto, and aerospace manufacturers.

This engineering "cloud" will include a set of Rational application development tools from IBM as well as ISV apps from Ansys, Cadence, EXA, and Magma. ®
