Microsoft's cloud leaves manual transmission behind

Get to grips with autodeploy

Nothing is simple

While the deploy-to-the-metal part seems straightforward enough, getting your image ready is an entirely different story. There are two routes you can choose, and despite what the entire rest of the internet will tell you, each of them has its place.

The goal is to create a virtual hard disk (VHD) that you can deploy out to the various servers. Remember, Windows can boot from VHD now so hard drive images in a Microsoft world are all VHDs.
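
If you have never played with native VHD boot, there is no magic to it. The sketch below is a minimal, hedged example rather than anything SCVMM-specific; the paths, sizes and names are placeholders.

    # Create a dynamic VHDX to hold the host image (Hyper-V PowerShell module).
    New-VHD -Path 'D:\Images\hyperv-host.vhdx' -SizeBytes 40GB -Dynamic

    # Once an image has been applied inside the VHD, a native-boot entry is just
    # a copied boot entry pointed at the file. Substitute the GUID bcdedit returns.
    bcdedit /copy '{current}' /d 'Hyper-V host (VHD boot)'
    # bcdedit /set {GUID} device   vhd=[D:]\Images\hyperv-host.vhdx
    # bcdedit /set {GUID} osdevice vhd=[D:]\Images\hyperv-host.vhdx

SCVMM handles all of this for you during a bare-metal deployment; the point is only that booting a physical box from a file is a solved problem.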

The quick-and-dirty way to do this is to install Windows on a server identical to the class of server you want to deploy to, get it exactly the way you want, sysprep it, use SCVMM to grab the image and then use that to create your hosts.
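
For the record, the generalise step is nothing exotic. Something along these lines, run elevated on the reference box itself, is all that "sysprep it" means; the switches shown are the common ones, not gospel.

    # Generalise the reference install so the captured image can be stamped out
    # onto other hardware; run from an elevated prompt on the reference server.
    & "$env:SystemRoot\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown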

This is very 1990s Ghostcast Server, with the little twist that SCVMM can get inside the image during deployment and do things such as rename the target host, join it to the domain and configure networking.

I call it the "pets 2.0" approach because it is the optimal use of time for SMEs that deploy servers only once every few years and don't mind taking a few hours out to patch a new host on the rare occasion where a hardware failure requires something to be reimaged.

Pets 2.0 will, of course, open up the systems administrator to relentless mocking by the software-defined-cattle-ranchers-as-a-service DevOps types of tomorrow. They live in a world at scale where this is absolute foolishness for any number of reasons. Their approach is clean installs and answer files, the stuff of post-traumatic stress disorder flashbacks for my fellow victims of Windows Deployment Services (WDS).

To be cattle-compliant in our automated Microsoft future you need to download the free Microsoft Deployment Toolkit, which is explicitly designed for creating reference images.

In addition to the operating system you can add applications that will install after the operating system is deployed. Naturally these have to be packaged properly (as an MSI) by the vendor, otherwise it is post-install scripting for you.
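
By "packaged properly" I mean an installer that can be driven completely silently. A well-behaved MSI looks something like this; the package name and log path are placeholders.

    # Silent, no-reboot install of a vendor MSI with verbose logging.
    msiexec /i 'C:\Packages\VendorAgent.msi' /qn /norestart /l*v 'C:\Logs\VendorAgent.log'

Anything that cannot be driven this way ends up in the post-install scripting bucket.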

At least with a Hyper-V system you shouldn't need to install too many applications that are not capable of automated installs. Sadly, Corefig doesn't have an MSI installer.

The Microsoft Deployment Toolkit used to create the cattle-compliant VHD also allows you to define roles and features to be installed after the operating system goes on. It can install the applications you have added at various points in the process: before Windows Update, after Windows Update, and it can even trigger a second round of Windows Update after that.
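
Under the hood a roles-and-features step boils down to the same thing you could do by hand. The one-liner below is a rough, hedged equivalent with example feature names; in practice MDT wires this into the right point in the task sequence for you.

    # Roughly what an MDT roles-and-features step amounts to; the feature names
    # here are examples and would normally live in the task sequence instead.
    Install-WindowsFeature -Name Hyper-V, Multipath-IO -IncludeManagementTools -Restart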

After that has all been dealt with you can run custom scripts (which is how we would get Corefig on) and enable BitLocker. Your other concern in getting this VHD ready is drivers. SCVMM will try to use the Intelligent Platform Management Interface (IPMI) to do some basic asset management but accuracy and compliance with the spec is entirely up to the vendors (which is to say generally terrible).
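
The BitLocker piece can also be scripted. A minimal sketch, assuming a present and ready TPM and the BitLocker PowerShell module on Server 2012 or later:

    # Turn on BitLocker for the OS volume using the TPM as the protector.
    # Assumes the TPM is present, owned and ready; add recovery protectors to taste.
    Enable-BitLocker -MountPoint 'C:' -EncryptionMethod Aes256 -TpmProtector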

SCVMM's solution to the driver problem is to add folders full of drivers to the SCVMM Library. This can be done simply by copying them into the right directory.
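
In practice that is a file copy to the library share followed by a library refresh. The share path below is a placeholder and the cmdlet names are the VMM 2012 R2 ones; treat it as a sketch rather than gospel.

    # Copy a vendor driver bundle into the VMM library share, then rescan the share.
    Copy-Item -Path 'D:\Drivers\ProLiant' -Destination '\\vmm01\MSSCVMMLibrary\Drivers' -Recurse
    $share = Get-SCLibraryShare | Where-Object { $_.Name -eq 'MSSCVMMLibrary' }
    Read-SCLibraryShare -LibraryShare $share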

Here you can use tags and filtering to ensure that deployed operating systems only try pulling relevant drivers. This is where vendor support comes in really handy: driver lists can be extensive and a proper enterprise vendor will provide pre-tagged SCVMM-compatible "driver balls" for this purpose.
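
Tagging can be done from the VMM console or from PowerShell. Again a sketch, with made-up tag names, using the driver package cmdlets as I understand them in VMM 2012 R2:

    # Tag imported driver packages so a physical computer profile can filter on them.
    # Cmdlet and property names may differ between VMM releases.
    Get-SCDriverPackage | Where-Object { $_.SharePath -like '*ProLiant*' } |
        ForEach-Object { Set-SCDriverPackage -DriverPackage $_ -Tag @('ProLiant', 'WinPE') }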

These drivers are where things got really fun for me. WinPE is not Windows Server. You apparently need to tag your drivers for both environments to ensure not only that the drivers get injected into the resultant Hyper-V instance but also that they get injected into the WinPE installer you are using to spawn hosts.
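
If you would rather not rely on tags alone, the belt-and-braces route is to inject the NIC and storage drivers into the boot image offline with the DISM cmdlets. The paths below are placeholders; VMM can also rebuild its own WinPE image once the drivers are tagged correctly.

    # Manually inject drivers into a WinPE boot image offline (DISM PowerShell module).
    Mount-WindowsImage -ImagePath 'D:\Boot\boot.wim' -Index 1 -Path 'D:\Mount'
    Add-WindowsDriver -Path 'D:\Mount' -Driver 'D:\Drivers\ProLiant' -Recurse
    Dismount-WindowsImage -Path 'D:\Mount' -Save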

To make this all work you need servers equipped with IPMI, the Data Center Manageability Interface (DCMI) or the Systems Management Architecture for Server Hardware (SMASH) as targets. Any modern lights-out management (LOM) controller should work, but nobody seems to use the same name.
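
For what it is worth, the BMC conversation during bare-metal discovery is exposed in VMM's PowerShell module too. The snippet below is a sketch with placeholder addresses and account names, and the parameter names are as I recall them from VMM 2012 R2.

    # Point VMM at a baseboard management controller for bare-metal discovery.
    # Address, account name and protocol are examples; check your VMM release.
    $bmcCreds = Get-SCRunAsAccount -Name 'BMC Admin'
    Find-SCComputer -BMCAddress '192.168.10.50' -BMCRunAsAccount $bmcCreds -BMCProtocol 'IPMI'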

Take precautions

Naturally, to make any of this work you need both a PXE server and Dynamic Host Configuration Protocol (DHCP). SCVMM is pretty good at getting that all going but there are some important gotchas it doesn't take care of for you.

The first thing to note is that running two PXE servers (and their attendant DHCP servers) on the same network segment is often a recipe for disaster. As a general rule this is why separate management and production networks are important: you can have a host PXE server set up on the management NICs and a virtual machine PXE server on the production NICs.
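
If you cannot physically separate the networks, at least pin each DHCP service to the right interface. On a Windows DHCP server that is a one-liner per NIC; the interface names here are examples.

    # Bind the host-deployment DHCP service to the management NIC only.
    Set-DhcpServerv4Binding -InterfaceAlias 'Management' -BindingState $true
    Set-DhcpServerv4Binding -InterfaceAlias 'Production' -BindingState $false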

In general, don't deploy Hyper-V VHDs smaller than 10GB. Hyper-V needs at least that to live happily in the wild, even if the wild is your cattle-server factory farm of ultimate doom.
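
A quick way to catch undersized images before they go anywhere, assuming the Hyper-V module is available wherever the VHDs live (the path is a placeholder):

    # Flag any host images whose virtual size is below the 10GB floor.
    Get-VHD -Path (Get-ChildItem 'D:\Images\*.vhdx').FullName |
        Where-Object { $_.Size -lt 10GB }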

This journey into the heart of server farming has allowed me to understand why treating infrastructure as cattle is so appealing. Certainly at any sort of scale it seems the only logical way to do things.

I remain ambivalent about the idea that this is a one-size-fits-all panacea for our infrastructure ills. I worry that people who live way up in the clouds and deal with this every day will lose sight of the fact that for smaller organisations the servers-as-cattle concept is not more efficient; it can be far more work than it is worth.

I know that the vision of the future is that hoi polloi such as me and my clients will be consuming other people's server cattle through the cloud. I have a few problems with that idea that nobody seems willing or able to address.

Still, we should all know how this stuff works even if we don't plan to use it just yet. Ease of use will increase and what is today of questionable benefit downmarket will tomorrow be routine.

The days of pet servers are numbered. The stampede of server cattle is just beginning. ®
