Original URL: https://www.theregister.com/2013/07/05/cloud_autodeploy/

Microsoft's cloud leaves manual transmission behind

Get to grips with autodeploy

By Trevor Pott and Iain Thomson

Posted in Systems, 5th July 2013 12:38 GMT

When you write technology blogs for a living you end up sitting through a lot of WebExes, watching a lot of training videos and going to a lot of conferences.

A growing trend that emerges from all these presentations is the importance of autodeployment, something that has far bigger implications than a mere installation method for our operating systems.

There is a comparison I cannot get out of my head: the difference between pets and cattle. The reference was eventually traced back to Microsoft man Bill Baker and it perfectly describes how today's top technology vendors are approaching IT.

What drove this home for me was a VMware video I was perusing in the hopes of figuring out how some widget or other worked.

At one point the presenter asked the audience: "I trust everyone here is using autodeployment?" All 500 people murmured their assent. The presenter continued: "Good. I can't imagine why anyone would use anything else."

This sums up the view of every major technology company, working group, standards body and what-have-you in our industry. We are living in an age of software-defined everything.

IPv6 is based on the concept that you have a series of bulletproof services on your network to make everything work in a dynamic fashion. We build entire private clouds with autodeployed widgetry and profiles of various sorts. Our servers, switches, storage and everything else are defined by dynamic configurations and are utterly interdependent.

The Puppet guys seem to be absolutely in tune with the future here: our infrastructure is code.

Shoot the servers

One of the slides in that pets versus cattle article discusses the differences between the old and the new.

The old way is to treat our infrastructure as pets: you name your equipment and when it gets sick you nurse it back to health. The new way is to treat your infrastructure as cattle: you number your equipment and when it gets sick you shoot it. I am not entirely comfortable with this.

Servers are pets in the SME world I inhabit simply because we don't have the resources to shoot one when it becomes inconvenient. For my clients every software licence matters and every dollar spent must be accounted for.

At a high level the pets versus cattle debate makes perfect business sense. But the concept has become so pervasive that most of these vendors take the same approach to end-users, with SMEs biting the fateful bullet.

The rebellious part of me that gets all uppity and writes articles about trustworthy computing or questions the legality of cloud computing wants to stand up to this trend and make loud noises until I convince the world to change.

The pragmatic part of me says that I must learn the bits underpinning what appears to be the unstoppable future of computing if I want to be able to pay the mortgage and buy food for my fish.

Modern machinery

I have used forms of autodeployment for ages. I think it would be safe to say that most sysadmins reading this article have as well. The concept is generally synonymous with Preboot eXecution Environment (PXE) servers.

I have used it for everything from keeping my Wyse clients updated to deploying CentOS to more than 5,000 nodes in a render farm to my trusty old Ghostcast servers from the beforetime. I have even been a victim of Windows Deployment Services (WDS) once or twice.

In an effort to learn modern server farming techniques I have taken a poke at Microsoft's System Center Virtual Machine Manager (SCVMM) and the autodeployment features it has for building a Hyper-V factory farm of interchangeable server cattle. Assuming you have the infrastructure set up properly, farming servers with SCVMM can be pretty easy.

In SCVMM go Fabric --> Add resources --> Hyper-V clusters --> Physical computers to be provisioned as a Hyper-V host. Add a run-as account (a domain admin in most environments, but you can do delegation-based security if you care to). Select a protocol --> [next], enter the IP address of the target cattleserver --> [next], and if you get to the next screen you have successfully taken control of the target server.

Now that you "own" the target server, select a host group and host profile (something you had to have preconfigured and which contains the operating system image you want to deploy) and hit [next]. You will get a final screen where you can customise host name, logical networks, IP addresses and so on.

After that, hey presto, you have branded your servercattle with Hyper-V. There is a far more detailed walkthrough on that process here.
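If you would rather script that than click through it, the same flow lives in SCVMM's VirtualMachineManager PowerShell module. The sketch below is how I would approach it rather than gospel: the cmdlet and parameter names are as I remember them from SCVMM 2012, and the run-as account, host group and host profile names are placeholders for things you must already have created in the console. Check Get-Help before trusting any of it.

    # Rough sketch only: SCVMM 2012 bare-metal provisioning from PowerShell.
    # "BMC Admin", "Hyper-V Hosts", "Win2012-HyperV" and the addresses are
    # placeholders for objects already set up in the SCVMM console.
    Import-Module VirtualMachineManager

    $RunAs       = Get-SCRunAsAccount -Name "BMC Admin"
    $HostGroup   = Get-SCVMHostGroup -Name "Hyper-V Hosts"
    $HostProfile = Get-SCVMHostProfile -Name "Win2012-HyperV"

    # Discover the target over its LOM/BMC -- the "take control" step in the wizard
    Find-SCComputer -BMCAddress "10.0.0.50" -BMCRunAsAccount $RunAs -BMCProtocol "IPMI"

    # Push the host profile at the discovered box and bring it in as a managed Hyper-V host
    New-SCVMHost -ComputerName "cattle-01" -VMHostProfile $HostProfile -VMHostGroup $HostGroup `
        -BMCAddress "10.0.0.50" -BMCRunAsAccount $RunAs -BMCProtocol "IPMI" -RunAsynchronously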

Of course this can go horribly sideways if the SMBiosGUID presented during IPMI discovery by the Lights Out Management (LOM) card does not match that of the target server itself.

This is common in older HP servers, but known to happen in some Dells as well. Fortunately, neither my Supermicro FatTwin nor the miniserver has given me grief, but that doesn't rule out issues elsewhere in Supermicro's range.

Mikael Nystrom has a truly excellent presentation on SCVMM's autodeployment with a lot of focus on the SMBiosGUID issue. It is lengthy (over an hour) but contains the PowerShell I needed to get the old HP servers to behave.
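The quick sanity check, before you sit through the whole hour, is to compare the GUID Windows sees with the one the BMC hands to SCVMM during discovery. The WMI query is bog-standard Windows; the Find-SCComputer parameters are, again, as I recall them from SCVMM 2012, so verify before copying.

    # The SMBIOS GUID as Windows (and WinPE) will see it on the target box
    (Get-WmiObject -Class Win32_ComputerSystemProduct).UUID

    # The GUID the BMC reports during SCVMM's deep discovery -- if the two
    # disagree, you have the HP-style problem Nystrom's PowerShell fixes
    Find-SCComputer -BMCAddress "10.0.0.50" -BMCProtocol "IPMI" -DeepDiscovery `
        -BMCRunAsAccount (Get-SCRunAsAccount -Name "BMC Admin")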

Nothing is simple

While the deploying-to-the-metal part seems straightforward enough, getting your image ready is an entirely different story. There are two routes you can choose, and despite what the entire rest of the internet will tell you, each of them has its place.

The goal is to create a virtual hard disk (VHD) that you can deploy out to the various servers. Remember, Windows can boot from VHD now so hard drive images in a Microsoft world are all VHDs.
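If boot-from-VHD is new to you, here is roughly what "a hard drive image is just a VHD" means when done by hand with Server 2012's Hyper-V and DISM tooling. Paths, sizes and the image index are placeholders, and MDT or SCVMM will normally do all of this for you; this is illustration, not a recipe.

    # Build a bootable VHD from an install.wim by hand -- illustration only
    New-VHD -Path C:\Images\HyperVHost.vhd -SizeBytes 40GB -Dynamic
    Mount-VHD -Path C:\Images\HyperVHost.vhd
    $Disk = Get-VHD -Path C:\Images\HyperVHost.vhd
    Initialize-Disk -Number $Disk.DiskNumber -PartitionStyle MBR
    $Part = New-Partition -DiskNumber $Disk.DiskNumber -UseMaximumSize -IsActive -AssignDriveLetter
    Format-Volume -Partition $Part -FileSystem NTFS -Confirm:$false

    # Apply the Windows image to the mounted VHD and make it bootable
    $Drive = "$($Part.DriveLetter):"
    dism /Apply-Image /ImageFile:D:\sources\install.wim /Index:1 "/ApplyDir:$Drive\"
    bcdboot "$Drive\Windows"

    Dismount-VHD -Path C:\Images\HyperVHost.vhd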

The quick-and-dirty way to do this is to install Windows on a server identical to the class of server you want to deploy to, get it exactly the way you want, sysprep it, use SCVMM to grab the image and then use that to create your hosts.

This is very 1990s Ghostcast server, with the little twist that SCVMM can get inside the image during deployment and do things such as rename the target host, join it to the domain and configure networking.
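The sysprep itself is the one genuinely manual step in this route; a minimal version looks like this, run on the reference server once it is exactly the way you want it (an unattend file is optional and yours to supply).

    # Generalise the reference install so SCVMM can stamp out copies of it.
    # /generalize strips machine-specific state, /oobe resets setup, and
    # /shutdown leaves the box off, ready to have its disk captured.
    & "$env:SystemRoot\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown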

I call it the "pets 2.0" approach because it is the optimal use of time for SMEs that deploy servers only once every few years and don't mind taking a few hours out to patch a new host on the rare occasion where a hardware failure requires something to be reimaged.

Pets 2.0 will, of course, open up the systems administrator to relentless mocking by the software-defined-cattle-ranchers-as-a-service DevOpers of tomorrow. They live in a world at scale where this is absolute foolishness for any number of reasons. Their approach is clean installs and answer files, the stuff of post-traumatic stress disorder flashbacks for my fellow victims of WDS.


To be cattle compliant in our automated Microsoft future you need to download the free Microsoft Deployment Toolkit, which is explicitly designed for creating reference images.

In addition to the operating system you can add applications that will install after the operating system is deployed. Naturally these have to be packaged properly (MSI) by the vendor, otherwise it is post-install scripting for you.

At least with a Hyper-V system you shouldn't need to install too many applications that are not capable of automated installs. Sadly, Corefig doesn't have an MSI installer.

The Microsoft Deployment Toolkit used to create the cattle-compliant VHD also lets you define roles and features to be installed post-install. It can install the applications you have added at various points in the process: pre-Windows Update, post-Windows Update, and it can even trigger a second round of Windows Updates after that.

After that has all been dealt with you can run custom scripts (which is how we would get Corefig on) and enable BitLocker.
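For the record, the role-and-feature and BitLocker steps boil down to a couple of lines if you ever end up doing them in the custom-script stage instead of through MDT. The feature name and the TPM protector below are examples rather than recommendations.

    # Roles and features post-install, by hand rather than via MDT
    # (Hyper-V needs a reboot before it is usable)
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools

    # BitLocker for the OS volume; assumes a TPM is present and ready
    Enable-BitLocker -MountPoint "C:" -EncryptionMethod Aes256 -TpmProtector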

Your other concern in getting this VHD ready is drivers. SCVMM will try to use the Intelligent Platform Management Interface (IPMI) to do some basic asset management, but accuracy and compliance with the spec are entirely up to the vendors (which is to say generally terrible). SCVMM's solution to the driver problem is to add folders full of drivers to the SCVMM library; simply copying them into the right directory does the job.

Here you can use tags and filtering to ensure that deployed operating systems only try pulling relevant drivers. This is where vendor support comes in really handy: driver lists can be extensive and a proper enterprise vendor will provide pre-tagged SCVMM-compatible "driver balls" for this purpose.

These drivers are where things got really fun for me. WinPE is not Windows Server. You apparently need to tag your drivers for both environments to ensure that they are injected not only into the resultant Hyper-V instance but also into the WinPE installer that you are using to spawn hosts.
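The two halves of that job look roughly like this: tag the driver packages as they go into the library so the host profile's filter can find them, then inject the same drivers into the WinPE boot image with DISM. Add-SCDriverPackage's parameters and the Publish-SCWindowsPE step are from my memory of SCVMM 2012, so treat this as a sketch and check Get-Help first; the share path, driver folder and tag are examples.

    # Import and tag driver INFs from a library share (paths and tags are examples)
    Get-ChildItem "\\scvmm01\MSSCVMMLibrary\Drivers\FatTwin" -Recurse -Filter *.inf |
        ForEach-Object { Add-SCDriverPackage -SharePath $_.FullName -Tag "FatTwin" }

    # Inject the same drivers into the WinPE image SCVMM boots targets with
    dism /Mount-Wim /WimFile:C:\WinPE\boot.wim /Index:1 /MountDir:C:\WinPE\mount
    dism /Image:C:\WinPE\mount /Add-Driver /Driver:C:\Drivers\FatTwin /Recurse
    dism /Unmount-Wim /MountDir:C:\WinPE\mount /Commit

    # Then hand the updated boot image back to SCVMM's PXE servers
    Publish-SCWindowsPE -Path C:\WinPE\boot.wim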

To make this all work you need IPMI, Data Center Manageability Interface (DCMI) or Systems Management Architecture for Server Hardware (SMASH)-equipped servers as targets. Any modern LOM should work, but nobody seems to use the same name.

Take precautions

Naturally to make any of this work you need both a PXE server and Dynamic Host Configuration Protocol (DHCP). SCVMM is pretty good at getting that all going but there are some important gotchas not yet taken care of.

The first thing to note is that running two PXE servers (and their attendant DHCP servers) on the same network segment is often a recipe for disaster. As a general rule this is why separate management and production networks are important: you can have a host PXE server set up on the management NICs and a virtual machine PXE server on the production NICs.
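Hooking an existing WDS box up as SCVMM's PXE server, and giving the management segment its own scope, is short work in PowerShell. The server name, run-as account and address range below are illustrative, and the Add-SCPXEServer and Server 2012 DhcpServer cmdlets are as I remember them; verify against your build.

    # Register a WDS server on the management network as SCVMM's PXE server
    Add-SCPXEServer -ComputerName "wds-mgmt01.example.local" `
        -RunAsAccount (Get-SCRunAsAccount -Name "PXE Admin")

    # A DHCP scope for the management segment only -- keep PXE off the production VLANs
    Add-DhcpServerv4Scope -Name "Mgmt-PXE" -StartRange 10.0.0.100 `
        -EndRange 10.0.0.200 -SubnetMask 255.255.255.0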

In general, don't deploy Hyper-V VHDs smaller than 10GB. Hyper-V needs at least that to live happily in the wild, even if the wild is your cattle-server factory farm of ultimate doom.

This journey into the heart of server farming has allowed me to understand why treating infrastructure as cattle is so appealing. Certainly at any sort of scale it seems the only logical way to do things.

I remain ambivalent about the idea that this is a one-size-fits-all panacea for our infrastructure ills. I worry that people who live way up in the clouds and deal with this every day will lose sight of the fact that for smaller organisations the servers-as-cattle concept is not more efficient; it can be far more work than it is worth.

I know that the vision of the future is that hoi polloi such as me and my clients will be consuming other people's server cattle through the cloud. I have a few problems with that idea that nobody seems willing or able to address.

Still, we should all know how this stuff works even if we don't plan to use it just yet. Ease of use will increase and what is today of questionable benefit downmarket will tomorrow be routine.

The days of pet servers are numbered. The stampede of server cattle is just beginning. ®