
Build a BONKERS test lab: Everything you need before you deploy

Trevor Pott reveals his server room's crash-test dummies

Now for a crucial decision: What kind of storage to use?

When it comes to storage you have to make some choices. Are you going to put storage on each of your devices, or are you going to use a centralised storage system? Each has its advantages.

Direct attached storage certainly keeps costs lower in a test bed environment; there's no need to worry about anything more than RAID 1 in your virtualisation nodes. Unfortunately, moving virtual machines from node to node in this configuration is slow and frustrating. That frustrates me even in a production environment, where I only move virtual machines around on an irregular basis. In the lab, it would drive me certifiably insane.

Despite the extra cost, I've decided that the time has come for my low-end test lab to have centralised storage. VMware can't load VMs off SMB, so SMB is out. NFS is a pain in the ASCII on Windows, so vaya con dios, NFS. That leaves me with iSCSI and Fibre Channel, and I can't think of a single good reason to even consider Fibre Channel.

So I'm hunting for a good iSCSI target. If I want to build my own system, Windows makes a solid storage server: there is a downloadable iSCSI target for Server 2008 R2, and Server 2012 comes with one baked into the OS. There is a great target for Linux too, if you have the time.
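For what it's worth, the Server 2012 target can be driven entirely from PowerShell once the role is on. Here's a minimal sketch using the iSCSITarget module's cmdlets; the paths, size and initiator IQN are placeholders, so sanity-check the parameter names with Get-Help on your own build:

# Install the iSCSI Target Server role service
Install-WindowsFeature FS-iSCSITarget-Server

# Create a VHD-backed virtual disk to act as the LUN
# (on RTM Server 2012 the size parameter may be -Size rather than -SizeBytes)
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\lab-lun0.vhd" -SizeBytes 1TB

# Create a target and restrict it to a single lab node's IQN (placeholder IQN)
New-IscsiServerTarget -TargetName "lab-datastore" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:node1.lab.local"

# Map the virtual disk to the target so the initiator can see it
Add-IscsiVirtualDiskTargetMapping -TargetName "lab-datastore" `
    -Path "D:\iSCSIVirtualDisks\lab-lun0.vhd"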

If I go the build-your-own route, I'll base the storage system on my Eris 3 compute nodes; I have a few of these in production already and they work fine. Swapping the case out for a Chenbro RM31408 is a little pricey, but this chassis has served me well.

I'll also need to add a SATA controller and two mini-SAS cables. That puts my storage node at a hair under $1500, or just under $2250 if I use Windows Server 2012.

That's not a bad deal, but I am generally lazy and don't like the idea of patching my storage server's operating system all that regularly. Linux iSCSI targets are frustrating to set up and maintain, and Windows RAID 5 is agonisingly slow. So let's look at appliances.

My Synology DS411J is old, slow and has developed an alarming new habit of rebooting once a month or so. Despite this, it has served me well as an iSCSI target for some time. I can only presume that its newer, faster brethren would do the same, though I sadly cannot say so from experience. The Synology DS1812+ goes for about $1000 on Amazon.

I've had a Drobo B1200i in my lab for testing this past month and I'm sold on it for reliability and ease of use. I can't find it shipping without drives, but with 6x 2TB drives it costs at least $11,000, which puts it out of reach for an “on the cheap” test bed. The B1200i does have a little brother, the B800i, which is available for a far more reasonable $2500-ish on Amazon.

As this is a test bed – not a production system – skipping the added expense of enterprise drives is okay; we're doing everything with software RAID (or southbridge RAID) anyway. The Seagate 3TB 7200.14 is a truly exceptional consumer disk. I recently did a round of benchmarks for an SSD review and was surprised at how strong this drive's performance was. These disks are $140 each.

Putting it all together


If we go with direct attached storage, we can build out solid compute nodes with 4 cores, 32GB of RAM, a dedicated vPro NIC, two 1GbE ports for the hypervisor and 3TB of RAID 1 storage for $1015 apiece. I have verified that this build can and does work with both VMware ESXi 5.1 and Hyper-V 2012.
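On the Hyper-V side, the only storage plumbing a DAS node needs is being told to keep its VMs on the mirrored volume. Something like this, assuming the RAID 1 array shows up as D: (the drive letter and folders are placeholders):

# Point the node's default VM storage at the local RAID 1 volume
Set-VMHost -VirtualHardDiskPath "D:\VHDs" -VirtualMachinePath "D:\VMs"

# Confirm where new VMs will land
Get-VMHost | Select-Object VirtualHardDiskPath, VirtualMachinePath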

For $7305 you can get 7 DAS-configured Eris 3 compute nodes and a 24-port switch (with three ports left open to trunk to the rest of your network). That's 21TB of virtual datastore, 28 cores and 224GB of RAM. If you use Microsoft's Hyper-V Server, you can use all of that as one big cluster for free. That is a test lab that could put several production setups to shame.
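Worth spelling out: "one big cluster" with DAS means shared-nothing live migration does the heavy lifting whenever you want to move a VM. It works, it's just slow over a single GbE link, which is exactly the frustration mentioned earlier. A rough per-node sketch, with placeholder VM and host names (and the usual Kerberos or CredSSP delegation sorted for remote management):

# Enable live migration on the host and allow any network to carry it
Enable-VMMigration
Set-VMHost -UseAnyNetworkForMigration $true

# Shared-nothing move: ships the VM and its storage to another DAS node
Move-VM -Name "testvm01" -DestinationHost "node2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\testvm01"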

If you forgo DAS for iSCSI, you can get 7 compute nodes, the switch and your 8 drives for $6465. From there:

With the Synology DS1812+ NAS: $7465
With a homemade Linux iSCSI server: $7965
With a homemade Windows iSCSI server: $8715
With the Drobo B800i: $8965

As you can see, the Synology option is only about $160 more than the direct-attached storage option. Even the most expensive option explored here – the Drobo B800i – only adds $1660 to the cluster. Each of the storage options has its pros and cons. Regardless of the choice you make, it's clear that we can build a truly amazing, fully lights-out-managed test lab on the cheap.
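Whichever target wins, pointing the Hyper-V nodes at it is a one-off job per box. A rough sketch, with a placeholder address standing in for wherever your NAS or storage server ends up, and assuming the portal exposes a single target:

# Fire up the software initiator and have it start with the box
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Register the target portal (placeholder address) and log in persistently
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true

The VMware nodes get the same treatment through the software iSCSI adapter in the vSphere client.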

In future Build A Bonkers Test Lab segments, I'll explore some of the options available in more detail. I'll also take a look at building a more expensive (and more fun!) future-proofed test lab. This will include 10 gigabit Ethernet and the storage (read: flash) arrays necessary to feed it. I'll explore whether or not you'll see real benefit from converged network adaptors, run tests against various 10GbE switches, explore backup options and take that Drobo B1200i for a joyride. ®
