Platform revamps grid control tools
'GPUs, I am your master'
Grid computing pioneer Platform Computing is taking the wraps off an updated and more integrated edition of its cluster management tools for small and mid-sized clusters, days ahead of the SC10 supercomputing conference in New Orleans.
Platform HPC 2.1 is based on the Load Sharing Facility workload scheduler that first came to market in 1992 and that is arguably the first decent commercial-grade tool for playing traffic cop on HPC clusters as jobs jockey for resources. The bundle builds on a more recent version of LSF Workgroup Edition, of course, and also includes Cluster Manager (formerly known as Open Cluster Stack 5) and a Message Passing Interface (MPI) parallel computing stack built from MPI code that Platform acquired separately from Scali and Hewlett-Packard.
The bundle also includes Platform RTM (Report, Track, Monitor) and a new Web-based management portal that spans all of these modules. Perhaps most significantly, the update includes a single installer that can get all of these components on the management nodes of clusters quickly and consistently.
The original Platform HPC bundle was aimed at clusters with 32 server nodes or fewer and was launched in June 2009; it was originally known as Workgroup Manager. The following April, the company rolled up an Enterprise Edition, which added in the full-scale version of LSF as well as the Infrastructure Sharing Facility Adaptive Cluster, a program for scheduling the bare-metal provisioning of Linux or Windows HPC Server 2008 instances on server nodes ahead of the LSF job scheduler. (ISF Adaptive Cluster went into beta in June 2009 and started shipping in November of that year.) Platform Computing also had a cheapo academic bundle aimed at educational institutions and researchers to get them hooked on their own clusters.
With Platform HPC 2.1 announced today, the Workgroup Edition is still aimed at customers with 32 nodes or fewer in their clusters, while the Enterprise Edition is for machines with more than 32 nodes. The Workgroup Edition costs $640 per server node for a three-year license with support, while the Enterprise Edition costs $1,300 per server node for the three-year term. The software only runs on x64 servers, and it can only provision Linux or Windows HPC Server 2008 instances on the server nodes. Solaris is not supported. Both editions will be available later this month.
One of the ease-of-use changes with Platform HPC 2.1 that works in conjunction with the Web portal is a set of templates for automatically provisioning popular scientific applications once the cluster is created. The 2.1 release has templates for ANSYS Mechanical and Fluent, Blast, LS-DYNA, MSC Nastran, Schlumberger ECLIPSE, and Simulia Abaqus right out of the box. Ken Hertzler, vice president of product management at Platform Computing, says these represent better than half of the key applications customers tend to deploy on clusters.
The other big change with Platform HPC 2.1 is that the LSF scheduler and monitoring tools in the bundle are now GPU-aware. The scheduler keeps track of which server nodes have GPUs in them and places jobs on those nodes as they become available. The scheduler also taps into thermal data inside the servers so it can try to balance workloads across server nodes in a way that minimizes hot areas in the cluster. Platform Computing has been distributing the CUDA GPU programming environment from Nvidia since August 2009, but this update actually makes the grid software aware of the GPUs so it can dispatch GPU-ized parallel code to them. At the moment, Platform HPC 2.1 is the only tool of its kind that is GPU-aware, and it can only speak the Nvidia GPU language.
Other Platform Computing tools - the freestanding LSF editions and the Symphony grid software for financial services grids - will get GPU scheduling capabilities, the company said. And it stands to reason that FireStream GPUs from Advanced Micro Devices and the Knights co-processors from Intel will eventually come under the control of Platform Computing's grid taskmasters, too.
There is one more big change with Platform HPC 2.1. In the past, Platform Computing had special versions of its midrange grid stack that were packaged up by Dell and Red Hat, as well as its own versions of the code peddled by its own sales force. Going forward, there is one version of the software, and it is only being sold through channel partners, which include Dell, HP, Cray, and Acer but not IBM.
Platform HPC 2.1 installs on Red Hat Enterprise Linux 5.5, SUSE Linux Enterprise Server 10 SP3 and 11 SP1, Scientific Linux 5.5, and CentOS 5.5. It can provision Windows HPC Server 2008 R2 instances. ®