How HCI simplifies the data center
Getting IT out of everyone's way
Sysadmin blog Organizations look at the cloud as an option because it has been impossible to get their own data centers to operate with the same efficiency. Flip that around: if data centers behaved the same way as the cloud, how would that change things?
Companies are already on board with server consolidation and virtualization – but what about the next steps in improving IT infrastructure and application services: more uniformity, automation, HCI, and utility-style ease of use?
For instance, hyperconverged infrastructure (HCI) abstracts away many of the complexities that traditional data centers face. By converging and integrating compute, storage and networking with server virtualization, HCI leaves fewer “moving parts” to manage. Combine that with modern management software and the result is much easier planning and provisioning. The outcome is a radically simpler environment that, in terms of ease of management, should be little or no different from the public cloud.
Operations teams responsible for traditional IT environments tend to look at the public cloud differently than others do. This should come as no surprise; a mechanic is likely to look at cars a little differently than those with no interest in how cars work. The existence of public clouds has, however, changed expectations of what on-premises IT should be delivering, setting up a battle over how these expectations should be met.
When an infrastructure nerd looks at a public cloud they see tens or hundreds of thousands of individual hosts and millions of workloads all acting in concert. Public cloud vendors can and do buy their equipment, software and physical plant at such vast scale that even the largest of large enterprises are humbled. Nerds, like beancounters, gaze upon the efficiencies of scale in awe.
Users see none of this. Users see a retail outlet for IT where they can enter a credit card and get whatever they want within moments. From a simple VM with a default operating system install to a complex Software as a Service offering, the whole point of the cloud is self-service. Users gaze upon the efficiency of a lack of red tape and are equally in awe.
Unlike beancounters or users, the nerds should have the skills to look deeper. Beyond the marketing hype and the headline figures lies the dark truth about clouds: they aren't anywhere near as efficient as they seem. Despite this, the self-service offered is supremely efficient for end users, leading to a dilemma for those charged with doing it all, but always with an eye to budget.
The inefficiency of clouds
What we're not supposed to talk about with clouds is that, at the core of it all, they are still designed to cater to people. People are inefficient. They are paranoid or uninformed, over- or under-enthusiastic, or simply not paying attention. People make mistakes.
In a traditional IT environment one architect lords over the whole thing. At worst, there is a smallish group that must meld minds to make decisions. Data center administrators responsible for the data center as a whole can do things like oversubscribe resources. This keeps utilization of various bits of infrastructure high, but hopefully not so high as to impair performance.
Data center nerds have access to generations of monitoring and predictive analytics packages to help them understand just exactly how much they can overprovision things, when they have to buy new gear and more. They don't always get it right, and sometimes adding more means disruptive forklift upgrades, but with fewer people making mistakes about resources, traditional on-premises administrators can fly pretty close to the sun without getting burned too often.
When you operate a cloud you give up much of that control. In the on-premises and service-provider cloud worlds overprovisioning is often not even built into the software. If a customer creates a virtual data center with 32 Virtual CPUs (vCPUs) then 32 vCPUs are dedicated to that customer. At best you can get away with calling a thread a vCPU instead of dedicating a physical core, but you're highly unlikely to be able to assign 32 vCPUs on a system where only 16 threads exist.
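The difference between the two models can be sketched as a simple placement check. This is a toy illustration with made-up numbers, not any real scheduler's logic: a dedicated-vCPU cloud effectively runs at a 1:1 ratio, while a traditional admin can pick a higher oversubscription ratio.

```python
# Toy sketch (illustrative numbers, not a real scheduler): dedicated
# allocation vs oversubscription on a host with a fixed thread count.

def can_place(requested_vcpus: int, host_threads: int, ratio: float) -> bool:
    """Return True if the request fits under the given oversubscription ratio."""
    return requested_vcpus <= host_threads * ratio

host_threads = 16

# Dedicated-vCPU cloud (ratio 1.0): a 32-vCPU request won't fit on 16 threads.
print(can_place(32, host_threads, 1.0))   # False

# Traditional on-prem shop running 4:1 oversubscription: same request fits.
print(can_place(32, host_threads, 4.0))   # True
```

The on-prem admin gets away with this because most workloads rarely use all the CPU they were given at the same time; the cloud operator, having promised dedicated capacity, cannot.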
Paying for what you provision
To the customer, this is great. It means they get what they pay for, and it's always available. To the efficiency-obsessed nerd, however, this can be the cause of much sadness. Resource utilization can often go down after deploying a cloud versus having the same number of workloads lorded over by a traditional IT team.
The reason? Customers don't know what they need. They provision what they think they need, not what they'll actually use. If you're Amazon, this is great news. If you're deploying a cloud internally, it could be a huge problem.
Most times, Amazon gets paid based on what you provision, not what you use (for certain kinds of instances, they also charge based on usage). If you want to provision 400 vCPUs and 30 TB of RAM and then not actually do anything with it, Amazon is happy. You're paying them to keep servers lit, but you're not pulling peak electricity nor pushing the cooling systems as hard as you would be if you actually used those resources. Win-win for Amazon.
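The economics above can be put into a back-of-the-envelope calculation. All rates here are invented for illustration: the point is simply that revenue tracks what is provisioned while the provider's variable cost (power, cooling) roughly tracks what is actually used.

```python
# Toy sketch (hypothetical rates): provisioned-based billing vs usage-based cost.

def monthly_bill(vcpus: int, rate_per_vcpu: float) -> float:
    """Customer is billed on provisioned vCPUs, regardless of usage."""
    return vcpus * rate_per_vcpu

def provider_cost(vcpus: int, utilisation: float, cost_per_busy_vcpu: float) -> float:
    """Provider's variable cost scales (roughly) with actual utilisation."""
    return vcpus * utilisation * cost_per_busy_vcpu

bill = monthly_bill(400, 20.0)         # customer pays for all 400 vCPUs
cost = provider_cost(400, 0.05, 15.0)  # ...but only drives 5% of them
print(bill, cost, bill - cost)         # 8000.0 300.0 7700.0 — idle capacity is margin
```

For a public provider, the gap between those two numbers is profit. For an internal IT team, the same gap is data center floor space, switch ports and cooling spent on capacity nobody touches.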
For internal IT teams this is less than ideal. Square footage costs money. So do switch ports and all the other bits and bobs. It's real money to connect, power, cool, insure and otherwise care for servers. After years of showing easy wins with server consolidation via x86 virtualization, cloud computing doesn't promise equally easy wins for the bookkeepers.
People are always the problem
What IT teams need to learn is that it's perfectly okay not to drive the needles into the red. We've spent 20 years grinding down the cost of workloads with virtualization, driving hardware vendors to bankruptcy, layering on management tools and aggressively pursuing automation. Right now, today, the bottleneck that needs to be optimized is the nerds themselves.
Every minute that passes between a request for a workload to be stood up and that workload being provisioned to the customer is inefficiency. In a traditional IT world provision requests have to go from specialist to specialist, passing through multiple silos, change management requests, testing and so forth before the goods are finally delivered. In a cloudy world a button is pushed, scripts act on the request, everything is logged and analysis dealt with in aggregate at regular reporting intervals.
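The cloudy path can be sketched in a few lines. Everything here is invented for illustration (the function name, the request fields, the in-memory log); the shape is what matters: one entry point, automatic validation, and an audit trail for later aggregate analysis, with no human in the loop.

```python
# Toy sketch of self-service provisioning: a request hits one function,
# scripts act on it, and everything is logged. All names are invented.

import time

AUDIT_LOG = []  # in a real system this would be a log pipeline, not a list

def provision(request: dict) -> dict:
    """Validate, 'provision' and log a workload request with no human in the loop."""
    assert request["vcpus"] > 0 and request["ram_gb"] > 0
    workload = {"id": len(AUDIT_LOG) + 1, **request, "status": "running"}
    AUDIT_LOG.append({"ts": time.time(), "event": "provisioned", "workload": workload})
    return workload

vm = provision({"owner": "alice", "vcpus": 2, "ram_gb": 8})
print(vm["status"])         # running — seconds, not a chain of ticket queues
print(len(AUDIT_LOG))       # 1 — every action lands in the audit trail
```

Compare that with the traditional path, where the same request crosses several specialists and a change-management queue before anything runs.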
Ultimately, the inefficiency problem of having multiple people assign 2 vCPUs when one will do, or 8GB of RAM when they only need 2GB is rather trivial. In a generation or two, cloud software will be a lot more application aware and be able to automatically resize resources as needed. Primitive versions of this are already out there.
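The kind of automatic resizing described above amounts to comparing an allocation against observed peak usage and shrinking toward it with some headroom. This is a toy sketch with an invented threshold, not any vendor's rightsizing algorithm:

```python
# Toy sketch of "rightsizing": shrink an over-provisioned allocation toward
# observed peak usage plus headroom. The 25% headroom figure is invented.

def rightsize(allocated_gb: int, peak_used_gb: float, headroom: float = 0.25) -> int:
    """Suggest a new allocation: observed peak plus headroom, never above current."""
    suggested = int(peak_used_gb * (1 + headroom) + 0.999)  # round up to whole GB
    return min(allocated_gb, max(1, suggested))

# The user who asked for 8GB of RAM but only ever uses 2GB:
print(rightsize(8, 2.0))   # 3 — peak plus 25% headroom, rounded up
```

Run continuously against real telemetry, logic like this is what would quietly claw back the capacity that humans over-request.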
Getting IT out of everyone's way, however, is unlikely to be that simple.
And thus the stage is set. Those who aren't nerds want whatever makes their lives easier: efficiency in their daily lives and in doing the jobs they've been tasked with. To meet these needs IT is going to have to stop obsessing over each individual widget in the data center and start working higher up the stack: delivering the templates, recipes, integrations and self-service options that end users demand.
In order to free up nerds to focus on application-level issues, infrastructure is going to have to be simplified, automated and made functionally invisible. This means embracing clouds in the enterprise. If that brings some hardware utilization inefficiency along with it, it's a price that will just have to be paid. ®