
Build a BONKERS test lab: Everything you need before you deploy

Trevor Pott reveals his server room's crash-test dummies

By Trevor Pott

Posted in Data Centre, 31st December 2012 06:03 GMT

Part one Every systems administrator needs a test lab, and over the course of the next month I am going to share with you the details of my latest.

In part one of The Register's Build a Bonkers Test Lab, we look at getting the best bang for your buck and doing it all on the cheap. Here is a look at my “Eris 3” test lab nodes; I have these deployed extensively, both as physical workstations for end users and as the core for my test lab.

Designing a test lab is a very individual exercise: every company has different goals, uses different products and needs different things.

Thanks to virtualisation we are more capable than ever of meeting those needs in a standardised way, but most modern test labs will still have to handle both physical and virtual workloads.

The compute nodes

An ideal test lab is one I do not have to be physically in front of to use, yet lab work means reloading operating systems on a regular basis. Even though I run most things virtualised today, I will need to change hypervisors from time to time and test workstation workloads that cannot be virtualised. That means I need a full lights-out management system, such as an IP KVM.

A motherboard in a box. Ah! The sweet smell of fresh electronics

Intel to the rescue! Today's vPro systems offer me exactly what I need at a price I can afford, and vPro-capable systems also have all the elements necessary for hardware virtualisation. Be careful about your motherboard selection (look for “Q” series chipsets) and your processor choice: not all of Intel's CPUs support both vPro and the full suite of hardware virtualisation options.
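If you want a quick way to confirm that AMT – the out-of-band management piece of vPro – is actually provisioned and answering on each box, a few lines of Python will do it. This is only a rough sketch: it assumes AMT's standard web-management ports (16992 for HTTP, 16993 for TLS) and uses made-up node addresses you would swap for your own.

    #!/usr/bin/env python3
    """Quick check: is Intel AMT answering on each lab node?

    Assumes AMT's standard web-management ports (16992 HTTP, 16993 TLS)
    and a hypothetical list of node addresses - adjust for your own lab.
    """
    import socket

    NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical lab addresses
    AMT_PORTS = (16992, 16993)

    def port_open(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for node in NODES:
        status = {port: port_open(node, port) for port in AMT_PORTS}
        print(node, status)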

I feel the Intel Core i5-3470 meets the requirements for this project, at near “sweet spot” processor pricing. It has four cores, does vPro and supports up to 32GB of RAM, the maximum currently supported by VMware's free hypervisor.

Next up for the lab is the Asus P8Q77-M/CSM, a small, fairly well-designed micro-ATX motherboard that supports the full 32GB of RAM. I tend to prefer Supermicro boards when I can get them, but they don't make a micro-ATX Q77 board. Q77 matters for my test lab: Q77 boards come with two SATA 6Gb/s ports, whereas the B75-based boards come with only one.

I chose four sticks of fairly generic Corsair RAM. I stick to DDR3-1333 because DDR3-1600 has given me a lot of trouble in Ivy Bridge systems these past few months. I chose Corsair only because my preferred item – a Kingston KVR1333D3N9HK4/32G kit – was not in stock.

This all went into a generic In Win BL631 micro-ATX case; it's small, which is good, as space quickly becomes an issue in my lab. You can of course cookie-sheet the motherboards, or even build your own blade system around mini-ITX boards.

An Eris 3 compute node from my local retailer works out to $560: four cores of compute, 32GB of RAM and full lights-out management. Not a bad start, but we need to add a few things yet.

The Network

The motherboard I chose does not have a separate management port for vPro, nor does VMware's ESXi 5.1 like the onboard NIC all that much. As such, I leave the onboard NIC dedicated to vPro management and add network cards to support my hypervisors.

For my additional NICs I chose an Intel dual-port gigabit card. There are two reasons I pick Intel for my test lab. The first is that I have a pile of them lying around; Intel provided me a stack of NICs for testing and I intend to test them. The other reason is far more pragmatic: Intel's network cards are the only ones I trust to “just work” in any environment. Whether the OS is Windows, Linux or VMware, the last Intel NIC to give me grief is now almost a decade old. The drivers just work and operating systems see the cards. Considering how often I'll be rebuilding a test lab, reliability trumps all.

The dual-port Intel card gives me a management NIC and a NIC for my VMs to use. Considering that I am doing this on the cheap – and that I want to stick with “known good” components throughout – my network switch is going to say D-Link on the front.
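To keep the cabling straight, I like to write the per-node port plan down somewhere machine-readable. Here's a minimal sketch, assuming the three-port layout described above – onboard NIC for vPro, the Intel dual-port card split between hypervisor management and VM traffic – and a hypothetical seven-node lab hanging off a 24-port switch:

    #!/usr/bin/env python3
    """Sketch of the per-node network plan for the Eris 3 lab.

    Assumes the layout described in the article: onboard NIC dedicated to
    vPro/AMT, and an Intel dual-port gigabit card split between hypervisor
    management and VM traffic. Node count and switch size are illustrative.
    """

    NIC_ROLES = {
        "onboard": "vPro / AMT out-of-band management",
        "intel_port_0": "hypervisor management",
        "intel_port_1": "VM traffic",
    }

    NODE_COUNT = 7       # compute nodes in this hypothetical lab
    SWITCH_PORTS = 24    # 24-port gigabit switch

    ports_used = NODE_COUNT * len(NIC_ROLES)
    print(f"Ports used by compute nodes: {ports_used}")
    print(f"Ports left for uplinks/storage: {SWITCH_PORTS - ports_used}")

    for nic, role in NIC_ROLES.items():
        print(f"  {nic:14s} -> {role}")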

The DGS-1024D and DGS-1210-48 switches have served me well for years. I have a lot of these lying around and no real incentive to look elsewhere.

The NICs add $175 to our compute nodes, bringing the cost of each node up to $735 apiece. The D-Link switches will run you $200 for 24 ports or a little over $550 for 48.

Now for a crucial decision: What kind of storage to use?

When it comes to storage you have to make some choices. Are you going to put storage in each of your nodes, or are you going to use a centralised storage system? Each approach has its advantages.

Costs are certainly lower with direct-attached storage in a test-bed environment; there's no need to worry about anything more than RAID 1 in your virtualisation nodes. Unfortunately, moving virtual machines from node to node in this configuration is slow and frustrating. That is irritating enough in a production environment, where I move virtual machines around only on an irregular basis. In the lab, it would drive me certifiably insane.

Despite the extra cost, I've decided that the time has come for my low-end test lab to have centralised storage. VMware can't load VMs off SMB, so SMB is out. NFS is a pain in the ASCII on Windows, so vaya con dios, NFS. That leaves iSCSI and Fibre Channel, and I can't think of a single good reason to even consider Fibre Channel.

So I'm hunting for a good iSCSI target. If I were to build my own system, Windows makes a solid storage server: there is a downloadable iSCSI target for Server 2008 R2, and Server 2012 comes with one baked into the OS. There is a great target for Linux, too, if you have the time.
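Whichever target you settle on, the first sanity check is the same: make sure it is answering on the standard iSCSI port, TCP 3260, from wherever your compute nodes live. A rough sketch, with placeholder addresses standing in for your storage box:

    #!/usr/bin/env python3
    """Sanity check: is the iSCSI target answering on TCP 3260?

    The target addresses here are hypothetical placeholders; swap in your
    own Synology, Drobo or roll-your-own storage box.
    """
    import socket

    ISCSI_PORT = 3260  # standard iSCSI target port
    TARGETS = ["10.0.0.50", "10.0.0.51"]  # hypothetical storage boxes

    for host in TARGETS:
        try:
            with socket.create_connection((host, ISCSI_PORT), timeout=3):
                print(f"{host}: listening on {ISCSI_PORT}")
        except OSError as err:
            print(f"{host}: no answer on {ISCSI_PORT} ({err})")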

For the build-your-own route, I'm going to base a roll-your-own storage system on my Eris 3 compute nodes; I have a few of these in production already and they work fine. Swapping the case for a Chenbro RM31408 is a little pricey, but that chassis has served me well.

I'll also need to add a SATA controller and two mini-SAS cables. This makes my storage node a hair under $1500, or just under $2250 if I use Windows Server 2012.

That's not a bad deal, but I am generally lazy and don't like the idea of patching my storage server's operating system all that regularly. Linux iSCSI targets are frustrating to set up and maintain, and Windows RAID 5 is agonisingly slow. So let's look at appliances.

My Synology DS411j is old, slow and has lately developed an alarming habit of rebooting once a month or so. Despite this, it has served me well as an iSCSI target for some time, and I can only presume its newer, faster brethren would do the same, though I sadly cannot say so from experience. The Synology DS1812+ goes for about $1000 on Amazon.

I've had a Drobo B1200i in my lab for testing this past month and I'm sold on it for reliability and ease of use. I can't find it shipping without drives, but with six 2TB drives it costs at least $11,000 - out of reach for an “on the cheap” testbed. The B1200i does have a little brother, the B800i, which is available for a far more reasonable $2500 or so from Amazon.

As this is a test bed – not a production system – skipping the added expense of enterprise drives is okay; we're doing everything with software RAID (or southbridge RAID) anyway. The Seagate 3TB 7200.14 is a truly exceptional consumer disk. I recently did a round of benchmarks for an SSD review and was surprised by how strong this drive's performance was. These disks are $140 each.

Putting it all together

Could this be my biggest fan?

If we go with direct-attached storage, we can build solid compute nodes with four cores, 32GB of RAM, a dedicated vPro NIC, two 1GbE ports for the hypervisor and 3TB of RAID 1 storage for $1015 apiece. I have verified that this build can and does work with both VMware ESXi 5.1 and Hyper-V 2012.

For $7305 you can get 7 DAS-configured Eris 3 compute nodes and a 24-port switch (with three ports left open to trunk to the rest of your network). That's 21TB of virtual datastore, 28 cores and 224GB of RAM. If you use Microsoft's Hyper-V Server, you can run all of that as one big cluster for free. That is a test lab that could put several production setups to shame.

If you forgo DAS for iSCSI, you can get 7 compute nodes, the switch and your 8 drives for $6465. With the Synology DS1812+ NAS the total comes to $7465; with a homemade Linux iSCSI server, $7965; with a homemade Windows iSCSI server, $8715; and with the Drobo B800i, $8965.

As you can see, the Synology option is only about $160 more than the direct-attached storage option, and even the most expensive option explored here – the Drobo B800i – only adds $1660 to the cluster. Each of the storage options has its pros and cons. Regardless of the choice you make, it's clear that we can build a truly amazing, fully lights-out-managed test lab on the cheap.
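If you want to play with the numbers yourself – different node counts, different storage – the arithmetic behind those totals is simple enough to script. The sketch below uses the prices quoted in this article; your local retailer's figures will differ.

    #!/usr/bin/env python3
    """Rough cost model for the Eris 3 lab, using the prices quoted above.

    Tweak NODE_COUNT or the price constants to model your own build; these
    figures are this article's local-retailer pricing, not a quote.
    """

    NODE_COUNT = 7
    NODE_BASE = 735      # Eris 3 node including the dual-port Intel NIC
    DRIVE = 140          # Seagate 3TB 7200.14
    DRIVE_TB = 3
    SWITCH_24 = 200      # 24-port D-Link switch

    # DAS option: two mirrored 3TB drives in every node
    das_node = NODE_BASE + 2 * DRIVE
    das_total = NODE_COUNT * das_node + SWITCH_24
    print(f"DAS node: ${das_node}, DAS cluster: ${das_total}")
    print(f"Usable DAS datastore across the cluster: {NODE_COUNT * DRIVE_TB}TB (RAID 1 pairs)")

    # iSCSI option: diskless compute nodes, 8 drives in the storage box
    iscsi_base = NODE_COUNT * NODE_BASE + SWITCH_24 + 8 * DRIVE
    storage_options = {                      # approximate prices from the article
        "Synology DS1812+": 1000,
        "homemade Linux iSCSI server": 1500,
        "homemade Windows iSCSI server": 2250,
        "Drobo B800i": 2500,
    }
    for name, price in storage_options.items():
        print(f"iSCSI with {name}: ${iscsi_base + price}")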

In future Build A Bonkers Test Lab segments, I'll explore some of these options in more detail. I'll also take a look at building a more expensive (and more fun!) future-proofed test lab, including 10 gigabit Ethernet and the storage (read: flash) arrays necessary to feed it. I'll examine whether or not you'll see real benefit from converged network adaptors, run tests against various 10GbE switches, look at backup options and take that Drobo B1200i for a joyride. ®