1kW of sexy GPU love

Forget rolling your own – pre-canned is back in style

When someone provides me the funding I request for a project without fighting me tooth and nail over every bent copper, I believe that I should return the courtesy by going above and beyond to earn the trust that has been placed in me.

One result of this is that the bigger the budget of the project, the pickier I become about the hardware. I have gone over dozens of systems in the past two weeks to ensure that the hardware selection for my upcoming render farm project is the best it can be.

In the past I have often turned to whiteboxing my servers and my desktops. There is always a fair amount of heckling from the peanut gallery over the concept, but it has worked well for me.

My local distributor Supercom will take my component list and build out the system for me. They will sell it to me under one of their “house brands” and warranty it for three years – more if you buy the warranty extension. They have been good to me as regards RMAs. Other than Supercom’s limited selection of component manufacturers, I have no legitimate reason to complain.

One of my primary reasons for whiteboxing in the past – apart from price – has been power efficiency. Until relatively recently, it has been a difficult battle to get quality power supplies in Tier 1 equipment. Even if you were buying the highest priced model, you could wind up with some real lemons.

Given that a lot of these prejudices against using Tier 1s were from the early and mid 2000s, I have taken the opportunity of having an actual prototyping budget to take a new look at what’s available.

HPC server stress testing

The prototyping phase of this project saw me test the pre-canned HPC servers from all the Tier 1 vendors. I also whiteboxed and benched six different models of my own. My mate and I even got as far as putting together a proof-of-concept rack of ten 1U dual-GPU nodes with liquid cooling before deciding the maintenance requirements were too high. My mate (a graphic artist by trade) bought the prototype equipment and now has a spectacular liquid-cooled render farm/BOINC cluster in his basement for his own 3D rendering work.

After all my testing, prototyping and benchmarking I settled on the Supermicro 1026GT-TF-FM205. I am deeply impressed by this hardware.

It comes with a 1400W 80 Plus Gold PSU. (I cannot describe how happy seeing “80 Plus Gold” makes me.) It will support two Westmere Xeons and 192GB of ECC Reg DDR3 in 12 DIMMs. It has six SATA hotswap bays running off the onboard ICH10R SATA controller. It comes with a (very basic) onboard Matrox video card, dual Intel Gig-E NICs and, more importantly, an IPMI v2.0 KVM-over-LAN interface with a dedicated Realtek NIC.

Best of all, it comes with two Nvidia Fermi-based Tesla M2050 GPUs. Supermicro has here a dual-CPU 1U server with two scoops of sexy GPU, a nice amount of RAM and a great PSU, all with thermals certified to run in a 1U box. I think I’m in love.
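To see why that 80 Plus Gold badge matters, here is a quick back-of-the-envelope sketch of how a roughly 1000W wall draw sits against the PSU's 1400W rating. The 0.88 efficiency figure is an illustrative assumption based on the Gold certification band (roughly 87-90 per cent depending on load point), not something I measured:

```python
# Rough PSU headroom check: estimate the DC-side load implied by a
# ~1000W draw at the wall, and compare it against the 1400W rating.
# GOLD_EFFICIENCY is an assumed figure for illustration, not measured.

PSU_RATING_W = 1400      # nameplate DC output of the 1026GT's supply
WALL_DRAW_W = 1000       # observed full-load draw at the wall
GOLD_EFFICIENCY = 0.88   # assumed 80 Plus Gold efficiency at this load

dc_load_w = WALL_DRAW_W * GOLD_EFFICIENCY
headroom_pct = 100 * (1 - dc_load_w / PSU_RATING_W)

print(f"Estimated DC-side load: {dc_load_w:.0f} W")
print(f"Unused PSU capacity: {headroom_pct:.0f}% of the 1400 W rating")
```

Under those assumptions the supply is loafing along with well over a third of its capacity in reserve, which is exactly where you want a PSU living out its days in a hot 1U chassis.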

I bought a handful and tormented them for days. I varied the temperature and the humidity while running all systems flat out. I found that if I can keep a reasonable airflow through to the systems, I can run the datacenter at 25 degrees Celsius, even in dry-as-a-bone Alberta air. Once more, I am impressed; I know a few servers that would be quite upset with me if I fed them air that warm while under heavy load, and those servers aren’t running GPU space heaters.

Under the full load the rendering software can bring to bear, these systems seem to pull around 1000W from the wall. Allowing for top-of-rack networking gear, the bottom-of-rack power conditioners as well as various other accessories, I am aiming to put 25 of these units to a rack. That means dissipating a little over 25kW worth of heat per rack.
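The rack-level arithmetic above can be sketched out as follows. The 500W ancillary allowance for networking and power-conditioning gear, and the 20°F (roughly 11°C) inlet-to-outlet air temperature rise, are illustrative assumptions of mine; the airflow figure uses the standard sensible-heat rule of thumb (CFM = BTU/hr ÷ (1.08 × ΔT°F)), not anything from my own bench testing:

```python
# Back-of-the-envelope rack maths for the 25-node layout.
# ANCILLARY_W and DELTA_T_F are illustrative assumptions.

NODES_PER_RACK = 25
NODE_DRAW_W = 1000                # measured full-load wall draw per node
ANCILLARY_W = 500                 # assumed switch/power-conditioner load

rack_load_w = NODES_PER_RACK * NODE_DRAW_W + ANCILLARY_W

# Essentially every watt drawn becomes heat the cooling must remove.
# Sensible-heat rule of thumb: CFM = BTU/hr / (1.08 * deltaT_F).
DELTA_T_F = 20.0                  # assumed air temperature rise
btu_per_hr = rack_load_w * 3.412  # watts -> BTU/hr
cfm_required = btu_per_hr / (1.08 * DELTA_T_F)

print(f"Rack load (heat to remove): {rack_load_w / 1000:.1f} kW")
print(f"Airflow needed: {cfm_required:,.0f} CFM per rack")
```

That lands at a little over 25kW per rack, consistent with the figure above, and implies moving on the order of 4,000 CFM of air through each rack: a number that makes the cooling design problem below feel very real.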

For only $6000 Canadian each, these are truly fantastic servers. Supermicro has made my life easy by providing exactly what I was looking for at a price I am completely willing to pay. My next challenge – designing cooling systems to handle over 25kW per rack “on the cheap” – is the daunting one. ®
