
Review: Supermicro FatTwin

Trevor likes his servers hot and dense

My testlab has a new arrival: a Supermicro FatTwin™ F617R2-F73. As always when something lands in my lab, I will valiantly kick the crap out of it on behalf of El Reg's discerning readership. There are already a few different systems in my testlab - let's see how this thing stacks up.

I'd like to kick off this review by pointing out that I have a massive prejudice against blade servers and all similar multi-system chassis. I usually represent smaller businesses for whom even a single chassis is a massive capital investment. They generally don't qualify for top-notch enterprise support, so if the chassis goes, it's at least a business day before a replacement is in my hands. That could be fatal to a small business at the height of silly season. Supermicro has a lot to prove before it meets the only criterion that matters: would I bet my business on this device?

Light goes on, light goes off

Outside of full-on blade solutions, the FatTwin is Supermicro's intermediate-density server offering; fewer nodes per U than their blade offering, but more than their Superserver offerings.

In a FatTwin compute configuration, each U of space houses two servers, for a total of up to eight nodes in 4U. There are over 20 different FatTwin models (with more on the way). Some are "storage" FatTwin configurations that pack only four nodes into 4U, but they carry a lot more 3.5" disks. There are also GPU and Hadoop models.

Each server has its own separate enclosure that slides into the larger chassis. There is no shared backplane as you might find in a blade chassis; only the power plane is shared. There are four PSUs in this system, all 80 Plus Platinum (94 per cent efficient), and believe me when I say that for a rig of this specification, this baby sips power.

[Images: FatTwin PSUs; the FatTwin hard at work]

You can yard out individual nodes while the rest of the chassis is powered up and plug them back in without the unit missing a beat. I have run four nodes into the red line on a single PSU with this chassis and played musical PSUs for hours. The power plane on this thing is slick.

In fact, the power plane on this chassis is so slick – and so simple – that it has been awarded a temporary bypass to my innate prejudice against multi-system chassis. There are no active components to get nuked by a stray cosmic ray or fried by a spike of dirty power. I tortured that power plane for hours and couldn't break it; I'd bet my business on that design.

For all the praise I have to heap on the system, the chassis design isn't all roses. The top plate to the individual server enclosures is tricky to get back on. There are these little metal flanges that bend far too easily. This is only really a problem for the minority of users who – like me – are using these systems for a testlab and are thus prying the lid off to swap out parts every couple of hours.

Despite the small number of customers this particular defect would realistically affect, once I'd pointed out the problem, Supermicro immediately set about making design changes to resolve the issue. Count me impressed; who does that anymore?

Playing with the entrails

So the chassis earns my vote; now it's time to see if the node hardware stacks up. Supermicro shipped me four nodes to put in my chassis. Each contains an X9DRFF-7 motherboard, 2x Intel Xeon E5-2680 CPUs, 2x Intel 480GB 520 Series SSDs and 128GB of RAM. The specific variant that I have is a front-I/O model; handy, since all my switches are front-of-rack mounted.

Were I so inclined, I could cram six 2.5" drives and 256GB of RAM into each node. A slightly different variant can get up to 512GB into a single node. The onboard IPMI is fantastic and doesn't show the Java-version pickiness that plagued earlier Supermicro systems; in fact, I haven't hooked up a monitor to the onboard node video at all.
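For the curious, here is a minimal sketch of the sort of thing that out-of-band IPMI access lets you do without ever plugging in a monitor: poll a node's chassis state and temperature sensors over the network using the standard ipmitool utility wrapped in a little Python. The BMC address and the ADMIN/ADMIN credentials below are placeholders, not details from my lab.

```
#!/usr/bin/env python3
"""Minimal sketch: poll a FatTwin node's BMC over the network with ipmitool.

Assumptions (not from this review): the BMC is reachable at 10.0.0.21, the
default ADMIN/ADMIN credentials are still set, and ipmitool is installed.
"""
import subprocess

BMC_HOST = "10.0.0.21"   # hypothetical BMC address
BMC_USER = "ADMIN"       # placeholder credentials
BMC_PASS = "ADMIN"

def ipmi(*args):
    """Run an ipmitool command against the remote BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Chassis power state, then just the temperature lines from the sensor dump.
    print(ipmi("chassis", "status"))
    for line in ipmi("sensor").splitlines():
        if "Temp" in line:
            print(line)
```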

Given that the CPUs are 130W TDP each and mounted fanless inside the chassis, I was more than a little nervous about how these systems would hold up under load.

It turns out that I can take all four nodes into the red at 32°C ambient and the unit doesn't even blink. Benchmarks don't show the system throttling down the CPUs at all. I could probably take it higher than that, but I am having trouble getting the room hotter than 32°C in the middle of a Canadian winter. Supermicro has models rated to 47°C.
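If you want to reproduce that sort of check yourself, here's a rough, hypothetical sketch of one way to watch for throttling during a burn-in on Linux: sample the per-core clocks from /proc/cpuinfo and flag any core sagging well below the E5-2680's 2.7GHz nominal clock. It only means anything while the load is actually running, since idle cores clock down on their own, and it isn't the benchmark suite I used; it just illustrates the idea.

```
#!/usr/bin/env python3
# Hypothetical throttling check (Linux only): sample per-core clocks from
# /proc/cpuinfo while the burn-in load is running and flag any core that
# drops well below the E5-2680's 2.7GHz nominal clock. Readings taken
# without load mean nothing, because idle cores clock down anyway.
import re
import time

BASE_MHZ = 2700.0  # E5-2680 nominal clock

def core_mhz():
    """Return the current clock, in MHz, of every core the kernel reports."""
    with open("/proc/cpuinfo") as f:
        return [float(m.group(1))
                for m in re.finditer(r"cpu MHz\s*:\s*([\d.]+)", f.read())]

while True:
    clocks = core_mhz()
    slowest, fastest = min(clocks), max(clocks)
    flag = "  <-- possible throttling" if slowest < BASE_MHZ * 0.9 else ""
    print(f"{len(clocks)} cores: {slowest:.0f}-{fastest:.0f} MHz{flag}")
    time.sleep(5)
```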

The unit shifts a lot of air; more than I would have expected, considering how tightly everything seems designed. The fans on the back of the unit are hot swappable and I can pull half of them out with the ambient temperature at 25°C without thermal issues. Supermicro appears to have put a lot of thought into airflow here.

Performance

The majority of my current server estate is a mix of ASUS KFSN5-D/2x Opteron 2378/64GB (Persephone 3) and ASUS KFN5D-SLI/2x Opteron 2216/16GB (Persephone 2) systems inside Chenbro SR-107 chassis. I also have a pair of Intel i5 3470/ASUS P8Q77-M CSM/32GB (Eris 3) compute nodes.

[Image: The inside of a FatTwin]

Compared to my existing estate, the performance of the FatTwin is astounding. You don't truly appreciate what 20MB of L3 cache can do for a CPU until you watch a 2.7GHz (3.5GHz Turbo) Sandy Bridge walk all over a 3.2GHz (3.6GHz Turbo) Ivy Bridge in single-threaded applications. Rendering the same batch of 500 images (~62.5GB worth) in single-core mode takes 2 hours on a Persephone 2, 45 minutes on a Persephone 3, 15 minutes on an Eris 3 and just under 9 minutes on the FatTwin.
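If you prefer those numbers as relative speedups, the arithmetic is trivial; here it is spelled out, with times rounded to whole minutes and the FatTwin's "just under 9 minutes" rounded up to nine:

```
# Render times quoted above (same 500-image batch, single-core mode), in minutes.
render_minutes = {
    "Persephone 2 (Opteron 2216)": 120,
    "Persephone 3 (Opteron 2378)": 45,
    "Eris 3 (Core i5-3470)": 15,
    "FatTwin (Xeon E5-2680)": 9,   # "just under 9 minutes", rounded up
}

baseline = render_minutes["Persephone 2 (Opteron 2216)"]
for system, minutes in render_minutes.items():
    print(f"{system}: {minutes} min ({baseline / minutes:.1f}x the Persephone 2)")
```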

One of my colleagues – the single most reserved, unexcitable human being you will ever meet – borrowed a node for a couple of weeks of testing. Ten minutes after he got the IPMI address he dragged me back into the office in a tizzy: before I was allowed to leave, I simply had to watch as Windows Server 2012 installed on this system in under five minutes.

Conclusion

I have a certain special attachment to my beat-up old Persephone 3 servers. They've served me well for years. I just bought those two Eris 3 nodes a few months ago and was planning to get a good six years out of them. Yet the amount of compute you can pack into a FatTwin, combined with the piddling amount of power it draws, changes the math. According to the website, my model FatTwin's PSU is rated at 1,620W; according to my Kill-a-Watt, the four nodes in mine draw under 30 per cent of that when flattened.

One of my customers has three racks' worth of Persephone 2 and 3 servers. I could collapse all three racks into just the four nodes in my test lab and still have room on that cluster to spare. What's more, with the rapidly increasing cost of power here in Alberta, the FatTwin would pay for itself in less than a year. When stood up against the FatTwin, those older systems simply aren't worth the cost of the electricity used to power them.
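To show my working, here's a back-of-the-envelope version of that power maths. The 1,620W rating and the "under 30 per cent" figure come from above; the per-server draw of the old Opteron boxes, the server count and the electricity rate are purely illustrative guesses, not measured numbers.

```
# Back-of-the-envelope power maths. The PSU rating and load fraction come from
# this review; everything marked "guess" is an illustrative assumption.
PSU_RATING_W = 1620                 # from Supermicro's spec sheet
FATTWIN_LOAD_FRACTION = 0.30        # "under 30 per cent of that when flattened"
fattwin_draw_w = PSU_RATING_W * FATTWIN_LOAD_FRACTION   # roughly 486W for four nodes

OLD_SERVER_DRAW_W = 350             # guess: a dual-Opteron 1U box under load
OLD_SERVER_COUNT = 60               # guess: three racks' worth of them
POWER_COST_PER_KWH = 0.15           # guess: all-in Alberta rate, CAD

old_draw_w = OLD_SERVER_DRAW_W * OLD_SERVER_COUNT
saved_kwh_per_year = (old_draw_w - fattwin_draw_w) / 1000 * 24 * 365

print(f"FatTwin draw (4 nodes): {fattwin_draw_w:,.0f} W")
print(f"Old estate draw:        {old_draw_w:,.0f} W (assumed)")
print(f"Annual power saving:    ~${saved_kwh_per_year * POWER_COST_PER_KWH:,.0f}")
```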

The "power-only backplane for a multi-system chassis" design of the FatTwin is the best compromise between blade-like density and the ability to sleep at night (without worrying about single points of failure) that I've seen yet.

I have more tests yet to run, but first impressions are definitely good. If you are in the market for new servers, Supermicro's FatTwin should be a serious consideration. If for whatever reason you are locked into a preferred vendor solution with someone else, start waving the FatTwin PDFs around and get those vendors to cough up whatever they can that's comparable. ®

Super Micro's 8-node FatTwin server


Price: Highly variable; from $6,500 barebones up to well over $40,000 fully kitted out (RRP)
