Orchestration and the server environment
Just pie in the sky?
Workshop Few words in the IT industry’s vocabulary are more grandiose than ‘orchestration’, evoking images of symphonic movements, rows of groomed musicians and wild-haired, baton-pointing conductors. Just how the term came to be used for the allocation of server resources must leave IT managers more than a little flummoxed, however.
It’s not that things are static in the server room – far from it, as new and changing requirements keep stress levels hovering just a fraction above what might be considered healthy. But orchestration and all it suggests – the provisioning, movement and re-allocation of resources as and when necessary, presumably requiring no more than a flick of the coat tails and a poke of the baton – remains somewhere in the distance for most organisations.
The term itself may have been in use for decades, but it really only came into vogue when x86 servers first started to be considered appropriate replacements for what had previously been more proprietary hardware. It seems almost a lifetime ago when you had to choose a make and model of server from one of the big hardware vendors of the time – IBM, Sun, HP and so on (go back a bit further and we have DEC, Tandem and all of the rest). But then ‘industry standard servers’ arrived on the scene, first in rack-mounted, then in blade form.
It was probably this latter wave – coinciding as it did with the dot-bomb and subsequent drive towards consolidation – that triggered ideas around orchestration. Start-ups such as ThinkDynamics (quickly snapped up by IBM) proudly boasted how they could configure a server in just a few minutes, using a pre-defined template (and no doubt, a few scripts running behind the scenes). It all sounded great – particularly when put against the familiar challenge of server allocation taking days, if not weeks.
Orchestration promised – and indeed continues to promise – so many great things, not just in terms of rendering IT operations more efficient, and service delivery more effective, but also enabling greater visibility of the server environment. Building on top of this was – and is – the idea of chargeback: if the IT manager is sufficiently au fait with who is using what, this offers the opportunity at least to tell different parts of the business how much their IT is costing, even if no money actually changes hands.
But here we are, about to start a new decade, and this brave new world of server provisioning, on-the-fly resource allocation and chargeback of IT costs remains somewhere in the dim and distant future for the majority of IT departments.
We want to know why you think this is – do you put it down to the fact that it was only a mirage in the first place, a feature of marchitecture rather than architecture? Or perhaps we’re just not quite there yet, and some technological pieces (perhaps beginning with the letter ‘v’) still need to be in place before it can happen. Maybe the problems aren’t in technology at all, but lie more in the politics of your own organisation, silo-based mentalities, systems ownership culture and resistance to change.
Whatever the case, we’d be interested in your views.
Citrix provisioning server
I've used Citrix Provisioning Server (part of the XenDesktop suite) to boot servers from a single central image. This gives great agility when provisioning and re-provisioning servers to particular roles.
It allows the hardware to be seen more as a resource, rather than as a dedicated "web server" or "file server". It takes IT departments a while to get their heads around the concept that a piece of tin can change its role as fast as it takes to reboot, but once they realise the agility this gives them - especially with rapid upgrades, easy rollback and the like - they generally come on side quickly.
This provisioning technology of course works best when you've virtualised your servers, as it removes the hardware dependency in the images, but it can be applied to "legacy tin" to achieve similar benefits.
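The idea above - hardware as a pool, with a server's role defined only by the image it boots - can be sketched in a few lines. This is a toy illustration, not the Citrix Provisioning Server API; all names here are invented.

```python
# Toy sketch: a server's "role" is nothing more than which central image
# it is pointed at. Repurposing or rolling back is just a reassignment
# followed by a reboot. All names are hypothetical.

class ImagePool:
    def __init__(self):
        self.assignments = {}  # server name -> image name

    def assign(self, server, image):
        """Point a server at a central image; the role changes on next reboot."""
        self.assignments[server] = image

    def role_of(self, server):
        return self.assignments.get(server, "unassigned")

pool = ImagePool()
pool.assign("blade-01", "web-server-image")
pool.assign("blade-01", "file-server-image")  # repurpose: just reassign and reboot
print(pool.role_of("blade-01"))  # -> file-server-image
```

The point of the sketch is that the expensive part (building and maintaining the images) is hidden behind a trivially cheap reassignment.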
Orchestration Exists and Works
Orchestration, or automation, in a server environment exists and is more prevalent than you might think. Typically an organization requires a mature individual who understands the benefits and value of orchestration. Secondly, the organization requires a shift in thinking away from the manual processes that are so embedded in day-to-day activities. The second of the two is often harder to evangelize and spread throughout an IT shop. At first, orchestration technology may seem to make things take longer, but as an organization matures in this space, and orchestration becomes second nature to the IT staff, the benefits become readily apparent.
Orchestration and virtualisation
The concept of “network orchestration” is one of rapidly moving around workloads, or deploying a system from a template in minutes. If you are running any good virtualisation stack and have paid for management tools with all the blue crystals, then you know this is both easy and doable today if you so wish. (You can do it on metal with ghost and its compatriots, but it's slightly messier.)
The first issue that comes to mind is cost. You have to pay not only for your virtualisation management tools, but for all the blue crystals necessary to really make them shine. You have to reinforce your backend infrastructure. Those VM templates have to live somewhere, and that takes storage. Your network has to be fast enough to handle the demands placed on it. You might even have to upgrade from your current setup to something involving the acronym "SAN" before you can play in this magical fairy world of high and rapid availability where you can "orchestrate" workloads on your network. (This is something that is only now becoming a realistic option for smaller shops.)
The second issue, and perhaps a bigger one, is that the concept of "orchestrating" a network of servers implies that the sum total of systems administration is really nothing more than provisioning. Provisioning of servers has indeed become easy, but you still have to do the legwork of honest-to-goodness R&D. Someone has to make those templates. You have to patch-test, version-test, regression-test, check, re-check and do it all over again. Somewhere, someone has to be constantly testing the network to see how long deploying a template will take, or how long migrating a workload from one node to another will take. Not all workloads are feasible to hot-move, and scenarios must be drawn up to handle this. Somewhere, disaster planning and documentation all have to be done - all of which is part of the performance, but occurs "behind the scenes."
The business sees provisioning. They request a server, and moments later they have one working. In this sense server provisioning has almost become as easy as desktop provisioning. The operating environments are just spawns of some master copy somewhere and you move on.
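The split described above - a fast, visible deploy step resting on slow, invisible template maintenance - can be made concrete with a small sketch. This is illustrative only; the template fields and function names are invented, not any vendor's API.

```python
# Hypothetical sketch: the business-facing part of "orchestration" is a
# near-instant clone of a template. The template itself embodies all the
# behind-the-scenes R&D (patching, regression testing, versioning).

import copy
import time

# The slow, invisible work lives here: someone built, patched and
# regression-tested this template long before anyone asked for a server.
templates = {
    "web": {"os": "linux", "patch_level": "2009-11", "tested": True},
}

def provision(template_name):
    """'Deploy' a server by cloning a pre-built template -- the fast, visible part."""
    template = templates[template_name]
    if not template["tested"]:
        raise RuntimeError("template has not been regression-tested yet")
    server = copy.deepcopy(template)
    server["provisioned_at"] = time.time()
    return server

srv = provision("web")  # moments later, the business has a working server
```

The design point is that `provision()` takes seconds precisely because everything expensive was moved into the template beforehand.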
It is dangerous to use a word like "orchestrating" because it's far too easy for others in the business to forget the amount of practice an orchestra requires.
How To Build A Single Point Of Failure
I first ran across the term "orchestration" being applied to IT in the mid-1980s, when IBM was trying to sell its concept of requester/server systems. The notion was that you had token ring networks of OS/2 PCs going through a TPF "orchestration layer" to a mix of IMS databases.
The design broke down in much the same way that all later attempts (e.g. "web services") have. The orchestrator has to hold all context for a transaction, but cannot maintain simultaneous locks across the various database systems. You can choose between creating a rich minefield of deadly embraces, or resigning yourself to a large number of unresolved transactions.
What generally happens is fake orchestration, where specialized code is built at the orchestration layer to handle known issues with cross-server transactions, but each database system operates its function in isolation. If you are willing to abandon concurrency and reliability, you can give the appearance of orchestration.
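The failure mode described above can be shown in a toy example: an "orchestrator" that writes to two independent stores with no shared lock and no two-phase commit. If the second write fails, the first has already committed, leaving exactly the kind of unresolved transaction mentioned. All names are invented for illustration.

```python
# Toy illustration of fake orchestration: each store commits in isolation,
# so a failure partway through leaves the overall transaction unresolved.

class Store:
    def __init__(self, name, fail=False):
        self.name = name
        self.fail = fail      # simulate a store that rejects the commit
        self.data = {}

    def commit(self, tx_id, value):
        if self.fail:
            raise IOError(f"{self.name}: commit failed")
        self.data[tx_id] = value

def orchestrate(tx_id, stores):
    """Commit to each store in turn -- no global lock, so no atomicity."""
    for store in stores:
        store.commit(tx_id, "debit")

ledger = Store("ledger")
inventory = Store("inventory", fail=True)
try:
    orchestrate("tx-42", [ledger, inventory])
except IOError:
    pass  # the orchestrator cannot undo what ledger already committed

# ledger has committed, inventory has not: the transaction is unresolved.
print("tx-42" in ledger.data, "tx-42" in inventory.data)  # -> True False
```

A real fix needs either distributed locking across the stores (inviting the deadly embraces above) or a proper two-phase commit protocol, which is precisely what the "orchestration layer" in these designs never provided.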