Performance anxiety: A different take on 'hybrid infrastructure'
Who're you calling remote?
One commonly thinks, when the word "hybrid" is used, of an infrastructure that combines on-premises (or at least private data centre) and public cloud. But "hybrid" also works in the other direction: across the heterogeneous systems within a particular location.
It is rare for an organisation to base itself entirely on one operating system platform, or on a single vendor for its key applications. So we mix Linux/*nix with Windows, and we mix Microsoft applications with the likes of Salesforce or Oracle. But how do we operate different types of hybridity in an integrated infrastructure, with central directory services and integration that eliminates data duplication?
The Register writes about hybrid infrastructures rather a lot, and we generally mean the same thing: a setup in which part of the kit sits in an on-premises or private data centre location and the rest lives in the public cloud. But what if we think about it differently? The Oxford dictionary defines the adjective hybrid as: “Of mixed character; composed of different elements” – who says a hybrid infrastructure has to include bits of processing that are offloaded to a remote installation that you neither own nor support?
Many moons ago I consulted for a company whose IT manager was a Windows guy through-and-through. And the majority of the technology in the organisation was Windows-based … with the exception of the core Oracle database, which sat on Sun Enterprise kit. The reason was simple: the Oracle DBA wouldn’t touch the Windows version of Oracle with a bargepole as her experience was that it just didn’t perform as well as the Solaris version. As she was an Oracle genius, the argument was compelling.
Back then one would have considered the company’s setup as “heterogeneous”: we had a Solaris world that stood apart from the Windows world, and even within the Solaris database server the operating system and application layers authenticated users separately through their own proprietary mechanisms.
Today, there’s no need to suffer from such a disconnected approach: interoperability between platforms is so much tighter, and so there’s no excuse for not doing it. A hybrid is a single thing that comprises elements of multiple different things, remember – the point is that you should be able to manage it as a single thing.
Of course you shouldn’t set out to have a hybrid setup just for the fun of it: different technologies mean more training and less commonality, and that brings support complexity and added expense. But there are some perfectly sensible reasons for choosing to include more than one vendor or technology in your world.
One is the performance concept I alluded to earlier: in some cases an application fits one platform better than another. Another is security: there is a lot to be said for having your two-level DMZ protected by a different brand of firewall on the inside than protects it on the outside, as it makes you less susceptible to any one vendor’s security bugs. And although less common these days, perhaps one or more of your apps will only work on a particular platform and so you’re stuck with having to support it.
Whatever your reason for going hybrid in this way, the most essential component is the integration between the elements. In the 1990s, and perhaps even the 2000s, you could be forgiven for not having strong interconnections between the systems.
Although you could, say, use Samba to share your Windows file shares with your Unix/Linux world it wasn’t pretty and could be a chore to set up and support (as was Active Directory support for non-Windows systems in the early days). Today, the likes of LDAP make interoperation way easier than it used to be, and there’s no excuse for not doing it if the technology exists to enable you to do so.
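As an illustration of how straightforward that directory integration has become, here’s a minimal sketch of an SSSD configuration pointing a Linux host at a central LDAP directory. The domain, server and search base are placeholders, not a real deployment:

```ini
# /etc/sssd/sssd.conf -- hypothetical sketch; all names below are placeholders
[sssd]
services = nss, pam
domains = example.com

[domain/example.com]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.example.com
ldap_search_base = dc=example,dc=com
cache_credentials = true
```

With something like this in place (plus the matching PAM/NSS hooks that sssd provides), the Unix boxes authenticate against the same directory as everything else, rather than maintaining their own private password files.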
This doesn’t mean that integration capabilities are universal and that you can do everything under one roof. Yes, you probably can achieve a single user authentication mechanism for your different platforms, and you can certainly time-sync your various systems from a central NTP source, and there’s a decent chance that you can have a single-vendor anti-malware platform (if you’re happy to put all your malware eggs in one protective basket) and a common backup system.
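The time-sync case in particular is usually a one-line change per host. A minimal chrony example, with the internal NTP server name assumed for illustration:

```ini
# /etc/chrony.conf -- minimal sketch; ntp.internal.example.com is a placeholder
server ntp.internal.example.com iburst
driftfile /var/lib/chrony/drift
# Allow a large initial step at boot, then slew gradually
makestep 1.0 3
```

The Windows side can be pointed at the same source with `w32tm /config /manualpeerlist:ntp.internal.example.com /syncfromflags:manual /update`, so every platform agrees on what time it is.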
But a fiver says you can’t do the same with, say, your operating system and application patching regime. Your Windows world will talk in its entirety to a Windows Server Update Services (WSUS) server, giving you a single point of management for Windows updates, but you can’t point (say) a Linux world at it.
In such cases you’ll need to look at the options available. If Red Hat Enterprise Linux is a supported offering then Red Hat Satellite provides you with WSUS-like capabilities; if you head down the route of one of the non-commercial Linuxes, there’s the option of doing it yourself using, say, an internal yum repository.
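A DIY internal repository is little more than a directory of packages with metadata generated over it (via the `createrepo_c` tool on the mirror host), plus a repo definition on each client. A hedged sketch of the client side, with hostname and repo name invented for the example:

```ini
# /etc/yum.repos.d/internal.repo -- hypothetical example; the URL is a placeholder
# (the mirror host runs: createrepo_c /srv/repos/internal)
[internal]
name=Internal package mirror
baseurl=http://repo.internal.example.com/internal
enabled=1
gpgcheck=1
```

Clients then pull updates from your mirror on your schedule, which is about as close to the WSUS model as the free Linuxes get without a commercial management layer.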
The point of all this is to note that a hybrid infrastructure – in the new-found sense we’re considering it – isn’t necessarily one single, fully integrated set of stuff. What it should be is as small a variety of systems as you can sensibly implement and work with. Even if you can fudge things to make components sort-of work with each other, you probably don’t want to, because it’ll be a nightmare to support and will most likely be as flaky as a flaky thing. Better to have a slightly larger collection of rock-solid systems than a single, rickety house of cards that’s just looking for an excuse to fall apart.
Enterprise architecture: Get your foundations right
And you must have an enterprise architecture. A single architecture can perfectly easily encompass a number of platforms, because each of them will exist for a reason that is in tune with the overall architecture. And that architecture will be built around policies and design decisions that mandate, say, the underpinning of any newly introduced system by support systems that can back them up, update them, authenticate user logins in a co-ordinated manner, and so on. And it’ll also include rules that ensure newly deployed systems are incorporated correctly not only into the network but also the management layer that governs user creations and deletions, capacity reviews, monitoring, alerting, backups and so on. Components botched in with sticky tape and string are merely an invitation for system crashes and, particularly, security concerns.
So regardless of whether you’re going hybrid in the sense of combining cloud-based and private systems, or whether you’re doing it in the multi-platform sense we’ve been talking about here, the one thing that matters is a properly designed, manageable and workable architecture.
Time to design yourself one if you don’t have one already.