EMC gets fat and flashy with Greenplum appliances
Take that, Teradata, Exadata, Netezza
For one brief shining moment, when it bought Data General a zillion years ago, EMC was a server maker, and last year's acquisition of Greenplum makes it a server vendor (of sorts) once again. More like a data analytics systems integrator, but let's not split hairs.
The original Data Computing Appliance, or DCA, was based on Sun Fire x64-based servers from Sun Microsystems. But Luke Lonergan, chief technology officer at EMC's Greenplum unit, tells El Reg that it is giving customers a choice of OEMed two-socket servers from Dell, Hewlett-Packard, and Huawei as the basis of the mainstream DCA boxes the company is now selling. Those server options are also available for two new variants of the DCA machine, equipped with either fat disks for extra capacity or solid state disks for faster data access. These are the High Capacity DCA and the High Performance DCA, using the EMC nomenclature.
You might be thinking that Greenplum could just use Vblock configurations based on the "California" Unified Computing System, given that EMC, Cisco, and VMware are all buddy-buddy in the Virtual Computing Environment Company partnership formerly known as Acadia. The Vblocks take Cisco's converged switching and servers, EMC's storage, and VMware's server virtualization and wrap them together in three different configurations that are supposed to show up preassembled and ready to load server instances on.
While this is great for server virtualization, a Vblock is not necessarily the best platform on which to run a clustered database and data analytics workloads. And so Greenplum is for the moment sticking to local disk or flash storage on server nodes in the DCA clusters and using 10 Gigabit Ethernet links between server nodes so they can munch each other's data and, if you believe all the talk, turn it into information.
EMC's Greenplum data analytics appliance
The current DCA server nodes are kosher two-socket 2U rack-mounted boxes sporting Intel's six-core "Westmere-EP" Xeon 5600 processors. The base DCA box has 600GB SATA disks and is suitable for a wide range of workloads. The DCA has two master servers controlling the cluster and 16 segment servers for storing databases, with a total of 192 cores for chewing through data.
These segment server nodes have an aggregate of 768GB of main memory and one disk drive per core. The DCA has 36TB of uncompressed usable capacity for a data warehouse and 144TB with compression turned on; it can scan data at 24GB/sec and has a data load rate of 10TB/hour. The DCA can scale up to six racks in a single system, and you can buy in quarter-rack increments, according to Lonergan.
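For scale, the per-node figures implied by those rack totals work out as follows. This is a back-of-the-envelope sketch; the per-node breakdown is our arithmetic from the quoted aggregates, not an EMC spec sheet:

```python
# Sanity-checking the vanilla DCA figures quoted above.
segment_servers = 16
cores_per_server = 2 * 6                          # two six-core Westmere-EP sockets
total_cores = segment_servers * cores_per_server  # 192 cores, as quoted
memory_per_server_gb = 768 // segment_servers     # 48GB per node (derived, not quoted)
disks = total_cores                               # one 600GB disk per core
raw_capacity_tb = disks * 0.6                     # ~115TB raw vs 36TB usable
compression_ratio = 144 / 36                      # implied compression, usable terms

print(total_cores, memory_per_server_gb, disks, raw_capacity_tb, compression_ratio)
```

The gap between ~115TB raw and 36TB usable presumably covers RAID, mirroring, and working space, though EMC doesn't break that down.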
With the High Capacity DCA, Greenplum is swapping out the 600GB drives and replacing them with 2TB SATA disks spinning at 7200 RPM. Some DCA customers were looking to chew on larger data sets, and with the same set of server nodes Greenplum can now offer 124TB of usable capacity (496TB compressed). Nothing is free, of course, and there is a performance hit with the larger capacity.
"This is mostly a bandwidth game," explains Lonergan. "The outer platter of a SATA drive is spinning more than enough to saturate a SATA controller."
When you do the numbers, the High Capacity DCA has a scan rate of 16GB/sec, a 33 per cent hit compared to the regular DCA, and the data load rate is cut roughly in half, to 4.8TB/hour. You can scale this High Capacity DCA up to six racks, just like the plain DCA.
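Those trade-offs can be checked against the vanilla DCA's numbers. A quick sketch, using our assumption that the drive count stays the same when only the drives are swapped:

```python
# Comparing the High Capacity DCA against the vanilla DCA figures.
drives = 192                 # same one-drive-per-core layout, now 2TB each (assumed count)
raw_tb = drives * 2          # 384TB raw vs 124TB usable; overhead unexplained by EMC
scan_hit = 1 - 16 / 24       # scan rate: 24GB/sec down to 16GB/sec -> one-third hit
load_hit = 1 - 4.8 / 10      # load rate: 10TB/hour down to 4.8TB/hour -> ~half
compressed_tb = 124 * 4      # same implied 4:1 compression as the vanilla DCA

print(raw_tb, round(scan_hit, 2), round(load_hit, 2), compressed_tb)
```

Note the load rate actually drops 52 per cent, a shade worse than "half", while the scan hit is exactly one-third.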
If you really want speed, then Greenplum has the new High Performance DCA, which puts 24 solid state disks into a server chassis holding four half-width, two-socket Xeon 5600 servers. Six SSDs are allocated to each node, which has only one six-core Xeon 5600 plugged in, preserving the one-drive-per-core ratio of the original DCA. The SSDs link into the server nodes over 6Gb/sec SAS channels.
The High Performance DCA has two master servers, and a total of 14 additional rack servers yielding 56 segment servers. All told, they have 1,344GB of aggregate main memory, 336 cores, and 336 SSDs. The usable capacity of this flashy DCA is actually 44TB (176TB compressed), which is more than the plain vanilla DCA. The High Performance DCA has a database scan rate of 72GB/sec - three times the regular DCA - and a data load rate of 20TB/hour - twice the vanilla cluster. This version of the machine only comes in a single rack; you can't scale it any further.
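The flashy DCA's totals fall straight out of that chassis layout. Another sketch derived from the figures above, not from an EMC datasheet:

```python
# Deriving the High Performance DCA totals from the chassis layout described above.
rack_servers = 14
nodes_per_chassis = 4                              # four half-width servers per chassis
segment_servers = rack_servers * nodes_per_chassis # 56 segment servers
cores = segment_servers * 6                        # one six-core Xeon 5600 per node
ssds = segment_servers * 6                         # six SSDs per node, one per core
memory_per_node_gb = 1344 // segment_servers       # 24GB per node (derived, not quoted)

print(segment_servers, cores, ssds, memory_per_node_gb)
```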
Lonergan tells El Reg that EMC doesn't expect the flashy DCA to be a big seller in terms of numbers of racks, mainly because SSDs are still twice as expensive as disks. He did not provide pricing, but said the target for the High Performance DCA was to yield four times the performance for twice the money. "We did much better than that," he says. (Scan rate was only 3X and load rate only 2X the vanilla DCA, as noted above, but neither of these is application performance, which is what Lonergan was referring to.)
In addition to the new iron, EMC is rolling out Greenplum Database 4.1, which the company says has better integration with Hadoop clusters as well as an extended range of analytical functions.
SAS Institute was also part of EMC's Greenplum announcements. SAS is cooking up a parallel cluster version of its analytics tools, and has run some tests putting the SAS tools on top of a DCA appliance. The SAS code is currently limited to the main memory of a single server, according to Lonergan, which explains in part why SAS was so popular on Sun Microsystems Unix boxes for so many years. But big analytical jobs take a long time. In one benchmark that SAS and EMC have run, it took 27 hours to crunch some data using the SAS tools on a big (and unspecified) SMP box. On a 32-node vanilla DCA, the same analysis took 50 seconds.
SAS says the Greenplum version of its data analytics suite will be available in the fourth quarter of this year. Looks like EMC will be knocking on the doors of Sun/Oracle shops that have SAS tools running. ®