Super Micro bends metal for Super Hadooper data munchers

Server sales carry on, but face headwinds

King of the whiteboxers Super Micro, which is also one of the dominant suppliers of raw system components to other whiteboxers, has launched a line of clusters preconfigured to run the Hadoop big data muncher at the same time that it has reported its financial results for the first quarter of fiscal 2013 ended in September.

Hadoop is a greenfield installation at most companies, so it is a perfect opportunity for Super Micro to get its foot in the door and demonstrate the price/performance advantages it has over the tier one server makers. Hadoop clusters come in different sizes, of course, and a lot of companies start out small and, more importantly, have different ratios of compute to storage for their Hadoop workloads.

And so Super Micro is offering both a 14U entry cluster and a 42U full-sized cluster as basic building blocks, plus a bunch of different two-socket Xeon E5 server nodes with anywhere from four to twelve drives per node to store data in the Hadoop Distributed File System (HDFS). The turnkey Hadoop clusters are certified to run Cloudera's CDH.

In general, with core speeds more or less constant for the past several years, companies try to give each core in a system its own disk drive. Like other systems on the market today, the Super Micro Hadoop clusters will probably not be configured with top-bin Xeon E5 parts, which would cram 16 cores into a two-socket server.

You might go with four-core and six-core Xeon E5s instead, especially considering that Hadoop is generally I/O and disk bound, not CPU bound. And, as you might expect, Super Micro's techies know this and are configuring the NameNode control freak for the Hadoop cluster, as well as the data nodes where processing is done and data is stored, with four-core 2.4GHz Xeon E5-2609 processors from Intel.

Super Micro's preconfigured Hadoop clusters

The SuperServer 815 NameNode server has two of these E5-2609 processors and 48GB of main memory. This is a 1U server node with four 3.5-inch SAS or SATA drives, and given that the NameNode server is what keeps track of what data is stored where in HDFS, Super Micro is recommending customers use fast 15K RPM SAS drives with 146GB of capacity. This same machine can be configured with 24GB of memory and either 1TB or 2TB drives and used as a data node if that floats your boat, but it will probably not give the right core-to-disk ratio.
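That metadata-in-memory role is why the NameNode box wants fast drives and plenty of RAM rather than raw capacity. As a back-of-envelope sketch of why, here's the commonly cited Hadoop rule of thumb of roughly 150 bytes of NameNode heap per file or block object (the figures are illustrative assumptions, not Super Micro or Cloudera specs):

```python
# Back-of-envelope NameNode heap sizing. The ~150 bytes per
# file/block object is a commonly cited Hadoop rule of thumb;
# all numbers here are illustrative, not vendor figures.
BYTES_PER_OBJECT = 150          # approx heap per file or block record
BLOCK_SIZE = 64 * 1024**2       # default HDFS block size of the era (64MB)

def namenode_heap_gb(total_data_tb):
    """Rough NameNode heap estimate for a given raw dataset size,
    assuming (for this sketch) one file per ~1GB of data."""
    raw_bytes = total_data_tb * 1024**4
    blocks = raw_bytes / BLOCK_SIZE
    files = raw_bytes / 1024**3
    return (blocks + files) * BYTES_PER_OBJECT / 1024**3

for tb in (10, 100, 500):
    print(f"{tb:>4} TB of data -> ~{namenode_heap_gb(tb):.2f} GB of NameNode heap")
```

Even at hundreds of terabytes the metadata stays modest, which is why the 815's 48GB of memory and small 15K drives are about serving lookups quickly, not storing bulk data.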

The SuperServer 825, which comes in a 2U chassis and has eight 3.5-inch drives, is probably a better data node. It would be better still, in terms of density, to have a two-socket machine that used the slowest eight-core Xeon E5s (and therefore the cheapest ones) and had sixteen 3.5-inch drives in a 2U chassis. But that would require mounting eight drives on the front and eight in the back or internally, and this is not an option. The SuperServer 826 data node has a dozen 3.5-inch SATA drives across the front of the 2U chassis feeding into its two-socket motherboard.

Super Micro is offering FatTwin tray servers in the preconfigured Hadoop stacks. One FatTwin setup has six 3.5-inch drives mounted on the front of each of the four server nodes in the 4U FatTwin chassis plus two drives per node that mount in the back, for a total of eight drives per node.

With two four-core E5-2609s per node, this is the right balance for Hadoop, and it gives you twice the density of the SuperServer 825 setup above. You can see now why companies are doing tray servers rather than standard rack servers for cloudy apps, and why Super Micro didn't bother inventing the sixteen-drive 2U rack server mentioned above.

And if you want to use six-core Xeons or just have more and bigger cooling fans in each node, then there is a FatTwin configuration that is based on a server node that has four 3.5-inch drives in the front that are hot swappable plus up to six static 3.5-inch drives mounted on each server.

You could just plunk four drives on each node, plus the four in the front, to get the eight-core-to-eight-drive ratio right. Intel doesn't make a five-core Xeon processor, but you could no doubt buy six-core processors and turn off one of the cores dynamically and get slightly better density for the Hadoop cluster without sacrificing balance.
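The core-to-drive balancing act running through all of these configurations boils down to simple arithmetic. A quick sketch (node configurations as described above; the one-core-per-drive "ideal" is the rule of thumb the article cites, not a hard spec, and the 826's six-core chips are an assumption to hit twelve drives):

```python
# Cores-per-drive balance for the node types discussed.
# Tuples are (cores per socket, sockets, drives per node).
def ratio(cores_per_socket, sockets, drives):
    cores = cores_per_socket * sockets
    return cores, drives, cores / drives

configs = {
    "SuperServer 815 as data node (2x 4-core)":      (4, 2, 4),
    "SuperServer 825 (2x 4-core)":                   (4, 2, 8),
    "SuperServer 826 (2x 6-core, assumed)":          (6, 2, 12),
    "FatTwin node, 8 drives (2x 4-core)":            (4, 2, 8),
    "FatTwin node, 10 drives (6-core, 1 core off)":  (5, 2, 10),
}

for name, (cps, sockets, drives) in configs.items():
    cores, drv, r = ratio(cps, sockets, drives)
    note = "balanced" if r == 1.0 else f"{r:.2f} cores/drive"
    print(f"{name:46s} {cores:2d} cores, {drv:2d} drives -> {note}")
```

Everything except the 815-as-data-node lands at or near the one-to-one ratio, which is exactly why Super Micro steers that box toward NameNode duty.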

The prebuilt Hadoop clusters from Super Micro can be configured with the company's own Gigabit or 10 Gigabit Ethernet switches. The consensus on the street is that Gigabit Ethernet is good enough for most clusters, but if you are going to be using any real-time extensions or NoSQL database add-ons to Hadoop, you might find that bandwidth is your new bottleneck, so choose carefully.
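To see why Gigabit Ethernet can become that bottleneck, consider the transfer-time arithmetic at nominal line rate (an optimistic sketch: real throughput is lower once you account for TCP and framing overhead):

```python
# Time to move a dataset across the fabric at nominal line rate.
# Real-world throughput is lower, so treat these as lower bounds.
def transfer_hours(data_tb, link_gbps):
    bits = data_tb * 8 * 1000**4       # TB -> bits (decimal units)
    return bits / (link_gbps * 1e9) / 3600

for data_tb in (1, 10):
    gige = transfer_hours(data_tb, 1)
    tengig = transfer_hours(data_tb, 10)
    print(f"{data_tb:>2} TB: GigE ~{gige:.1f} h, 10GigE ~{tengig:.1f} h")
```

A terabyte takes better than two hours to cross a single Gigabit link at best, so a chatty NoSQL layer on top of Hadoop will feel the difference a 10 Gigabit fabric makes.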

Pricing for the new clusters was not available.

Burned by memory and disk inventories

Super Micro also announced its financial results for the quarter ended in September, which is the first quarter of its fiscal 2013 year. Sales were up 9.2 per cent year-on-year to $270.7m, but you have to remember that the year-ago quarter was weak for Super Micro because of the delayed launches of Intel's Xeon E5 and Advanced Micro Devices' Opteron 6200 processors. That revenue was at the low end of the company's guidance.

More telling, perhaps, is that sales at Super Micro were down 1.9 per cent sequentially from the fourth quarter of fiscal 2012 ended in June, which is not what you would have expected as the Xeon E5 ramp was getting going. Intel and AMD have both seen their server processor businesses under pressure, and so has IBM, which reported its System x server sales were off. Dell and Hewlett-Packard have yet to report the numbers for their latest quarters.

During the September quarter, research and development, sales, and marketing costs all rose significantly for Super Micro, and that cut its profits by nearly a factor of ten, down to $899,000 from $8.5m in the year-ago period.

In a conference call with Wall Street analysts, Charles Liang, founder and CEO of Super Micro, said that systems represented 39.5 per cent of total business, at $107m, and that Xeon E5-based machines accounted for 40 per cent of company revenues across both the systems and components businesses at the company, up from 22 per cent in Q4 fiscal 2012 ended in June.

Howard Hideshima, CFO at the company, said that Super Micro shipped 55,000 servers, down 3.5 per cent from the year-ago period. But average selling prices for completed machines came in at $2,000, up 17.6 per cent from a year ago, and therefore system sales by Super Micro are actually up 9.2 per cent year-on-year.

That said, the sequential decline from 62,000 machines peddled in the quarter ended in June is a bit disconcerting. ASPs have held stable at $2,000 in the past three quarters.
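Those figures hang together, give or take the rounding in the quoted ASP. A quick arithmetic check using only the numbers reported above:

```python
# Sanity-check the reported server figures (all from the article).
units = 55_000                    # servers shipped in the quarter
asp = 2_000                       # rounded average selling price, US$
implied = units * asp             # implied systems revenue
print(f"Implied systems revenue: ${implied / 1e6:.0f}m")

# Reported systems revenue: 39.5 per cent of $270.7m total sales
reported = 0.395 * 270.7e6
print(f"Reported systems revenue: ${reported / 1e6:.1f}m")
```

The $3m or so of daylight between the two is down to the ASP being a round number; the unrounded figure would sit a touch under $2,000.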

Super Micro raked in $164m from subsystem and component sales (motherboards, daughter cards, machine enclosures, and so on) that other companies use to build machinery. That is a 17 per cent jump over the year-ago quarter and a 7 per cent increase sequentially. The company sold 1.04 million subsystems and components, up 6.5 per cent from Q1 fiscal 2012.

In Q1 fiscal 2013, hyperscale data centers drove 8.8 per cent of total revenues, or about $23.8m, and Hideshima said that some big data center customers had delayed purchases. In Q4 fiscal 2012, hyperscale data center customers who buy directly from Super Micro accounted for 15.2 per cent of revenues, or $42m, and in the quarter ended in March it was $38m.

Super Micro's guidance for the second quarter of fiscal 2013 ending in December is much the same as last quarter, with sales expected to come in at between $270m and $295m, and non-GAAP earnings should fall between 12 and 16 cents per share. ®
