
Build a BONKERS test lab: Everything you need before you deploy

Trevor Pott reveals his server room's crash-test dummies

Part one Every systems administrator needs a test lab, and over the course of the next month I am going to share with you the details of my latest.

In part one of The Register's Build a Bonkers Test Lab, we look at getting the best bang for your buck and doing it all on the cheap. Here is a look at my “Eris 3” test lab nodes; I have these deployed extensively, both as physical workstations for end users and as the core for my test lab.

Designing your test lab can be a very individualised experience. Every company has different goals, different products that they use, different things they need.

Thanks to virtualisation we are more capable than ever of meeting the goals required by test labs in a standardised way. As such, most modern test labs will have to be able to test both physical and virtual workloads.

The compute nodes

An ideal test lab is one that I do not have to physically be in front of to use. But in my test lab I will probably be doing things such as reloading operating systems on a regular basis. Even though I run most things virtualised today, I will need to change hypervisors from time to time and/or test workstation workloads that cannot be virtualised. Ideally, then, I'll need a full lights-out management system such as an IP KVM.


Intel to the rescue! Today's VPro systems offer me exactly what I need at a price I can afford. VPro-capable systems also have all the elements necessary for hardware virtualisation. Be careful about your motherboard selection (look for things with “Q” series chipsets) and your processor choice. Not all of Intel's CPUs support both VPro and the full suite of hardware virtualisation options.
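Once a node is built, it is worth confirming that the lights-out management actually answers before you depend on it. Here is a minimal Python sanity check, assuming AMT is listening on its usual ports (16992 for HTTP, 16993 for HTTPS); the address is a placeholder for your own management IP.

import socket

# Standard ports Intel AMT (VPro) exposes its web interface on.
AMT_PORTS = {16992: "HTTP", 16993: "HTTPS"}

def amt_reachable(host, timeout=2.0):
    """Return {port: True/False} for the standard AMT ports on one node."""
    results = {}
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except (OSError, socket.timeout):
            results[port] = False
    return results

if __name__ == "__main__":
    node = "192.168.1.50"  # hypothetical management address of an Eris 3 node
    for port, up in amt_reachable(node).items():
        print(f"{node}:{port} ({AMT_PORTS[port]}) -> {'reachable' if up else 'no answer'}")

If both ports time out, odds are AMT simply has not been provisioned yet in the board's MEBx setup.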

I feel the Intel Core i5-3470 meets the necessary requirements for this project - and at near “sweet spot” processor pricing. It has four cores, does VPro, and supports up to 32GB of RAM, the maximum currently supported by VMware's free hypervisor.

Next up for the lab is the Asus P8Q77-M/CSM, a small, fairly well-designed micro-ATX motherboard that supports the full 32GB of RAM. I tend to prefer Supermicro boards when I can get them, but they don't have a micro-ATX Q77 board. Q77 matters for my test lab: Q77 boards come with two SATA 6Gbit/s ports, whereas the B75-based boards come with only one.

I chose four sticks of fairly generic Corsair RAM. I stick to DDR3-1333 because DDR3-1600 has given me a lot of trouble in Ivy Bridge systems these past few months. I chose Corsair only because my preferred item – a Kingston KVR1333D3N9HK4/32G kit – was not in stock.

This all went into a generic Inwin BL631 micro-ATX case; it's small, which is good, as space considerations quickly become an issue in my lab. You can of course cookie-sheet the motherboards or even build your own blade system. (Look here for Mini-ITX.)

The cost of an Eris 3 compute node from my local retailer is:

This brings us to $560 per node for four cores of compute on 32GB of RAM with full lights-out management. Not a bad start, but we need to add a few things yet.

The Network

The motherboard I chose does not have a separate management port for the VPro network, nor does VMware's ESXi 5.1 like the onboard card all that much. As such, I leave the onboard NIC dedicated to VPro management and add network cards to support my hypervisors.

For my additional NICs I chose an Intel dual-port gigabit network card. There are two reasons I chose Intel for my test lab. The first is that I have a pile of them lying around; Intel provided me a stack of NICs for testing and I intend to test them! The other reason is far more pragmatic: Intel's network cards are the only ones I trust to “just work” in any environment. Whether the OS is Windows, Linux or VMware, the last Intel NIC to give me grief is now almost a decade old. The drivers just work. Operating systems see the cards. Considering how often I'll be rebuilding a test lab, reliability trumps all.

The dual-port Intel card gives me a management NIC and a NIC for my VMs to use. Considering that I am doing this on the cheap – and that I want to stick with “known good” components throughout – my network switch is going to say D-Link on the front.

The DGS-1024D and DGS-1210-48 switches have served me well for years. I have a lot of these lying around and no real incentive to look elsewhere.
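Before totting up the bill, it is worth checking that ESXi actually ended up with the NIC layout described above – onboard port left alone for VPro, Intel ports carrying the management and VM traffic. Here is a minimal sketch using pyVmomi (VMware's Python SDK), assuming the module is installed and you have the host's root credentials; the address and password below are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ESXI_HOST = "192.168.1.60"           # hypothetical ESXi management address
USER, PASSWORD = "root", "changeme"  # placeholders

# Lab boxes tend to run with self-signed certificates, so skip verification.
ctx = ssl._create_unverified_context()
si = SmartConnect(host=ESXI_HOST, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(f"Host: {host.name}")
        # Physical NICs the hypervisor can see (vmnic0, vmnic1, ...).
        for pnic in host.config.network.pnic:
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else "link down"
            print(f"  {pnic.device}  driver={pnic.driver}  speed={speed}")
        # Standard vSwitches and the uplinks bound to each one.
        for vsw in host.config.network.vswitch:
            uplinks = [key.split("-")[-1] for key in (vsw.pnic or [])]
            print(f"  vSwitch {vsw.name}  uplinks={uplinks}")
    view.Destroy()
finally:
    Disconnect(si)

If the onboard NIC turns up as an uplink on one of the vSwitches, the hypervisor has claimed it and your VPro management traffic will be sharing the port.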

The NICs add $175 to our compute nodes, bringing each node up to $735. The D-Link switches will run you $200 for 24 ports or a little over $550 for 48.
