'Til heftier engines come aboard, HP Moonshot only about clouds
And those engines will come – as will FPGAs, DSPs, GPUs ...
Analysis The HP Moonshot hyperscale servers are not even fully launched, and Intel and Calxeda are already bickering about whose server node is going to be bigger and better when they both ship improved processors for the Moonshot chassis later this year. Other engines will be coming for the Moonshot machines, too, HP execs tell El Reg, and they will be sorely needed if the Moonshot boxes are to do real work across a wider range of software.
With the fairly limited performance of the dual-core "Centerton" Atom S1200 processors that were used in the initial "Gemini" server nodes announced on Monday, the machines are at this point relegated to dedicated hosting for very small server workloads and for modest web application serving.
HP may be running a slice of its hp.com website, which gets 3 million hits a day, on the Moonshot Atom S1200 iron, and it may be burning only 720 watts doing so, but that slice is a fairly tiny portion of the entire HP web site.
It is going to take more powerful processors to do the heavy lifting of an ecommerce site or to run back-end applications for HP's own business. Or those of any other business, which is really the point. This is all about HP trying to get companies to buy its hyperscale servers, rather than build their own or go to Open Compute Project designs.
El Reg has no doubt that a single rack of Moonshot machines, which comes in at 47U because the Moonshot 1500 chassis is a non-standard 4.3U high, can do the webby or hosting work of eight racks of 1U rack servers with two Xeon or Opteron processors.
And when you do the math – assuming what we presume is pretty poor but nonetheless typical utilization on those two-socket x86 boxes – a rack of the Moonshot servers based on the Atom S1200 processors uses 89 per cent less energy and 80 per cent less space, at a cost per node that is 77 per cent lower.
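To make that consolidation math concrete, here is a minimal back-of-the-envelope sketch; the `percent_saving` helper and the rack counts are our own framing of the comparison, and HP's quoted figures appear only in the comments.

```python
# Back-of-the-envelope sketch of the consolidation claim, NOT HP's actual
# methodology. HP's quoted results: 89 per cent less energy, 80 per cent
# less space, and 77 per cent lower cost per node.

def percent_saving(old, new):
    """How much smaller `new` is than `old`, as a percentage."""
    return 100.0 * (1.0 - new / old)

# One 47U Moonshot rack replacing eight racks of 1U two-socket servers:
incumbent_racks = 8
moonshot_racks = 1
saving = percent_saving(incumbent_racks, moonshot_racks)
print(f"raw rack-count saving: {saving:.1f}%")  # 87.5%
```

On raw rack count the space saving comes out at 87.5 per cent; HP's more conservative 80 per cent figure presumably also accounts for the taller non-standard 47U rack, so treat the raw number as a ceiling.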
But again, that is for a pretty precise and not particularly heavy workload. No one is going to build a Hadoop cluster on the current Moonshot designs – at least not one that is more than a science project.
The day will come, however, when HP has the right engines to run heavier workloads. As Jim Ganthier, general manager of Industry Standard Servers and Software at HP, explained to El Reg, HP thinks it can add server cartridges, switching modules, or storage cartridges to the Moonshot boxes at an accelerated pace compared to the 18 to 24 month cadence of its ProLiant rack, BladeSystem blade, and SL6500 scalable systems machines. We're talking new cartridges on a 4 to 8 month cadence, or roughly three times faster than what we are used to these days in x86 Server Land.
"You can come out with something at the speed of need," as Ganthier put it.
Part of that speedup that Ganthier is talking about is an illusion that comes from having more than one or two processor suppliers, as is the case with HP's servers these days. When you broaden the compute engines to include various ARM processors as well as digital signal processors, field programmable gate arrays, GPU coprocessors, and hybrid CPU-GPU chips, it is no wonder that the pace of innovation has to pick up.
Let's take a look under the Moonshot hood
It is not clear why HP needed the extra bit of space that pushed it into an oddball server chassis size and therefore a non-standard rack size. The more likely story is that HP figured out it could get away with 47U racks and worked backwards to a server cartridge and chassis spec that delivered the maximum density of wimpy compute nodes.
Top view of the Moonshot 1500 chassis
The first-generation "Redstone" Moonshot machines from November 2011 were based on the 4U SL6500 chassis, which had four server trays. Using the 32-bit quad-core Calxeda ECX-1000 processors, each of the 18 server cards in a single tray could host four of the chips, each with four SATA ports and one memory stick. That gave you 288 server nodes in a 4U space, including the distributed Layer 2 switch to link the nodes together.
However, that did not include any storage. If you wanted local storage on the nodes, you had to buy disk cards that slotted into the PCI-Express slots making up the Redstone backplane, displacing server cards and halving the node count. So call it 144 nodes in 4U, or 36 servers per rack unit.
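The Redstone density figures above reduce to straightforward arithmetic; the counts come from the article, while the variable names and the assumption that disk cards displace server cards one-for-one are ours.

```python
# Sketch of the Redstone density arithmetic; figures from the article,
# the halving assumption (disk cards displacing server cards) is ours.

trays_per_chassis = 4    # SL6500 chassis holds four server trays
cards_per_tray = 18      # 18 server cards per tray
socs_per_card = 4        # four Calxeda ECX-1000 SoCs per card

nodes = trays_per_chassis * cards_per_tray * socs_per_card
print(nodes)             # 288 diskless nodes in 4U

nodes_with_disks = nodes // 2   # half the slots go to disk cards
print(nodes_with_disks)         # 144 nodes in 4U
print(nodes_with_disks // 4)    # 36 servers per rack unit
```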
With the Moonshot 1500 chassis, the backplane slides into the bottom of the chassis from the front and snaps into the dual 1,200 watt power supplies in the back of the chassis. The five dual-rotor, hot-plug fans that cool the server nodes are in the back of the chassis. The chassis includes a chassis management module, which has a subset of the Integrated Lights-Out (ILO) server management controller used in ProLiant and BladeSystem machines.