In-depth: Supermicro's youngest Twin is a real silent ice maiden

Don't look now, folks, Trevor's in luuurve

Absolute power is...efficient?

The power supplies got a big update in Supermicro's 2015 refresh. Expect to see 80 Plus Platinum and 80 Plus Titanium supplies throughout all but the lowest end of the various lineups.

Without identical parts across the different nodes to cross-compare, reviewing the PSUs was a little tricky, but overall I believe Supermicro has surpassed the efficiency requirements of each rating it claims.

The two Twins I was sent for testing claim 1280W 80 Plus Platinum supplies. This means they should be 89 per cent efficient at 100 per cent load, 92 per cent efficient at 50 per cent load and 90 per cent efficient at 20 per cent load; there is no efficiency requirement at 10 per cent load (which there is for the Titanium rating).

The demo nodes came with 256GB of RAM, two Intel Xeon E5-2680 v3 CPUs and four 15K SAS disks per node. Fully loaded with an Nvidia GRID K2 card per node (borrowed from my Caesium cluster) and some additional disks, I should have been able to pull 500W per node: 225W for the GRID card, 120W per CPU and 35W for the RAM and disks.

The raw numbers say that at 50 per cent load I should see 92 per cent efficiency, or have to pull 543W from the wall to feed 500W into the system. Similarly, at 100 per cent load I should see 89 per cent efficiency or pull 562W from the wall to feed 500W into the system.
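For anyone who wants to check my maths, here's a quick back-of-the-envelope sketch in Python. The wattages are the rated component draws quoted above, not measured figures, so treat it as an estimate rather than gospel:

# Estimated wall draw for one node, using the rated component
# wattages and the 80 Plus Platinum efficiency figures quoted above.
grid_k2 = 225                # Nvidia GRID K2 (rated)
cpus = 2 * 120               # two Xeon E5-2680 v3s (rated)
ram_and_disks = 35
dc_load = grid_k2 + cpus + ram_and_disks   # 500W into the system

for load_point, efficiency in (("50 per cent", 0.92), ("100 per cent", 0.89)):
    wall = dc_load / efficiency
    print(f"At {load_point} load ({efficiency:.0%} efficient): {wall:.0f}W from the wall")
# Prints ~543W and ~562W, matching the numbers above.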

Loading up one node and flattening it pulled only 493W from the wall, suggesting that Nvidia and Intel's power utilization numbers are a little conservative. With both nodes in and run to the red line, I pulled 998W from the wall, or 202.43 per cent of the single-node value. One node idle pulled only about 150W from the wall, the best I've ever seen for something with this sort of loadout. Supermicro has some crazily efficient power supplies to show off, and they should be proud.
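Working backwards from the wall (again, my own arithmetic rather than anything from Supermicro's spec sheets), the implied DC load at full tilt lands well under the 500W budget, which is what leads me to call the vendor power figures conservative:

# Implied DC load from the measured 493W wall draw, assuming the PSU
# sits somewhere in its 89-92 per cent Platinum efficiency band.
wall_draw = 493
for efficiency in (0.89, 0.92):
    print(f"At {efficiency:.0%} efficiency, the node drew ~{wall_draw * efficiency:.0f}W DC")
# Prints ~439W and ~454W -- comfortably under the 500W rated budget.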

The additional bonus to the 2015 Twins is that they are quiet: at idle, the quietest vendor-stock servers I've ever heard. I've built quieter, but that usually requires a 4U chassis with 200mm Vantec fans and a design for silence from start to finish. Fully loaded, these servers sound like a fleet of fighter jets taking off in a hurricane during the final, climactic battle of Armageddon, just as the universe itself is being torn asunder. (Or about as loud as a Tintri T-850.) But they back right down to being significantly quieter than my switches, and that was a welcome surprise.

Performance

Gauging the performance of the server design separately from the bits that go in it is a tough job. Supermicro shipped these servers to me with Intel Xeon E5-2680 v3 (Grantley) CPUs and 256GB of RAM. That's 12 cores (24 threads) of 2.5GHz goodness per socket, two sockets per node. That's 96 threads per Twin, or 192 threads and 1TB of RAM across four nodes crammed into 4U. Or, as my wife refers to it: "Are you ever going to stop mad scientist cackling? It got creepy about two hours ago".

I threw every benchmark I could at these things, and ultimately my results were in line with the Grantley review at AnandTech. I could spew numbers at you, but this is about the servers and what they can do, not benchmarking the chips. You can flatten the chips and the GRID cards and nothing in the system backs off. The cooling is enough to handle full load even as I raised the ambient temperature to 35 degrees Celsius and let them sit there for the better part of a day. Based on this, I'm confident you will get the full capability of your chips out of these systems in any standard data center you care to put them in.

While I don't feel there's much value in rehashing the specific speeds and feeds of the CPUs, I do want to talk a little about the overall system capability. Intel's Grantley CPUs are quite a step up from previous generations. Combined with DDR4, I can do things with the 2015 Twins that were flat-out not possible on previous generations. Neither the LSI 3108 SAS3 in the 2028TP-DC1R nor the LSI 3008 SAS3 in the 2028TP-DC0FR gave me any trouble. They passed the drives through just fine, and I didn't run into any of the queue depth issues that can plague hyperconverged deployments.

The lack of 10GbE ports on the motherboard is a problem, but I expect Supermicro will have variants with real network cards. I'd ideally like to see the equivalent of a 2028TP-DC1R with a minimum of dual 10GbE ports.

The Grantley CPUs are so powerful, and you can cram in so much RAM per node (up to 1TB), that the pair of weedy little 1GbE NICs is a bit of a joke. The 2028TP-DC0FR does come with those handy-dandy 56Gbit InfiniBand ports, but you give up the x16 PCIe slot the 2028TP-DC1R uses for the GRID card to get them. Both models have half-height x8 and x16 slots, but good luck actually getting two cards in there. The retail Intel NICs I have only allowed me to add one dual-port card per node.
