Hyperconverged infrastructure. It's all about the services
If HCI sounds simple, that’s because it is
Sysadmin blog Hyperconverged Infrastructure (HCI) isn't a product, it's a feature. The future lies in turnkey cloud solutions. This means that there are certain IT services HCI vendors need to bring to the table to remain relevant.
At its most basic, HCI is virtualization + storage. You take hard drives, put them into servers and put a hypervisor on top. Something, either integrated into the hypervisor or operating in a VM with pass-through resources, connects up the different nodes in the virtualization cluster to one another and provides a shared storage resource by lashing all the disks together. Ta-da: HCI.
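The arithmetic behind "lashing all the disks together" is mundane, too. Here is a toy illustration (invented numbers and names, and ignoring erasure coding, tiering and metadata overhead): pool every node's local disks into one datastore and pay a replication tax on usable capacity.

```python
# Toy sketch of the HCI storage idea: aggregate node-local disks into one
# shared pool; with a replication factor, usable capacity shrinks accordingly.
# Illustrative only -- real HCI layers do erasure coding, tiering, etc.

def usable_capacity_gb(nodes, replication_factor=2):
    """nodes: dict of node name -> list of disk sizes in GB."""
    raw = sum(sum(disks) for disks in nodes.values())
    return raw // replication_factor

# Three nodes with two 960 GB drives each: 5760 GB raw.
cluster = {"node1": [960, 960], "node2": [960, 960], "node3": [960, 960]}
```

At a replication factor of 2, that 5760 GB of raw disk yields 2880 GB usable; bump the factor to 3 for better resilience and you are down to 1920 GB.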
If HCI sounds simple, that’s because it is. Oh, this was black magic a decade ago, but everyone and their dog does it now.
Today's new hotness is all about self-service portals and cloudy interfaces, but even that isn't particularly sexy. There are countless vendors out there - from startups to tech titans - offering a self-service UI that can boss hypervisors around and behave in a multi-tenant fashion.
Any HCI vendor that hasn't at least started to integrate a cloud portal into its product is beyond hope. If all you've got to sell is HCI, then your company is already dead.
The storage array of the now
Three data center services that today's HCI solutions should be delivering, and that tomorrow's on-premises cloud solutions simply must deliver, are SMB file sharing, NFS and iSCSI. Considering that one of the purposes of HCI was to finally rid us of the scourge of our expensive SAN and NAS overlords, this might seem a little strange. The truth is, we might never escape the need to provide these services.
Windows is going to keep on storing profiles and folder redirections on SMB until the bitter end. Unless and until the huddled masses wise up to the part where Microsoft can't be trusted, we'll just have to keep offering that up for Windows to consume.
Similarly, NFS is just how things get done in the rest of the IT world. Linux, BSD, OSX, you name it. When a resource needs to be shared between nodes, it's NFS to the rescue.
iSCSI needs to exist because bare metal never dies, nor does it even really fade away. There will always be some esoteric workload somewhere that just needs to run on bare metal, or is for whatever reason allergic to a hypervisor. Storage has to be provisioned to that workload, and what's the point in buying HCI to run your virtual workloads if you also need to buy SANs and NASes to handle file storage and corner case workloads?
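A minimal sketch of the idea, with every class, method and export name invented for illustration (none of this corresponds to a real HCI product's API): the same pooled storage gets carved up and handed out over whichever protocol the consumer speaks.

```python
# Hypothetical sketch: one pooled datastore, three protocol front-ends.

class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.exports = []  # (protocol, name, size_gb)

    def export(self, protocol, name, size_gb):
        """Carve a share or LUN out of the pool over the requested protocol."""
        if protocol not in ("smb", "nfs", "iscsi"):
            raise ValueError("unsupported protocol: " + protocol)
        used = sum(size for _, _, size in self.exports)
        if used + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.exports.append((protocol, name, size_gb))

pool = StoragePool(capacity_gb=2880)
pool.export("smb", "profiles$", 500)                     # Windows profiles
pool.export("nfs", "/export/shared", 500)                # Linux/BSD/macOS data
pool.export("iscsi", "iqn.2017-01.example:lun0", 1000)   # bare-metal LUN
```

The point is not the toy bookkeeping; it's that one pool serves all three consumers, so the SAN and the NAS have nothing left to do.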
HCI is a scale out storage array that you can run workloads on. So those HCI vendors not offering traditional storage services need to grow up and get these features out there or get run over by those vendors aware of their place in things.
Virtualization doesn't just mean x86 hypervisors. Containers may only be virtualizing the userspace, but containers are virtualization too. HCI is about marrying scale out storage with the ability to manage and run workloads, so why restrict those workloads only to x86 hypervisors? Containers have a lot to offer and they're going to play a big part in reclaiming efficiency in the next generation of turnkey clouds. Best to get a head start now and incorporate support while it’s still something of a novelty.
Another area of post-hypervisor virtualization is bare-metal virtualization. This isn't a reiteration of containers, or even of provisioning block storage to bare metal servers. Instead, HCI can be integrated into operating systems installed on bare metal to enable clustering for cluster-aware applications.
What's important here is that HCI solutions tend to be able to send snapshots from one cluster to another, or to a receiver located at a service provider or in the public cloud. If all storage within a data center can be folded into the HCI tent then all of it can be snapshotted regularly to a backup cluster, sent offsite for DR purposes and more.
No more trying to get incompatible arrays to work together or relying on primitive operating system-level backup agents. As we migrate into enterprise cloud solutions this sort of thing needs to be automatically configured based on policy for every workload provisioned, regardless of what the underlying workload management system happens to be.
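That's the payoff of unifying storage: protection becomes a per-workload policy rather than a per-array project. A minimal sketch of policy-driven protection, with every name invented for illustration:

```python
# Hypothetical sketch: once every datastore is inside the HCI tent, one policy
# engine can snapshot everything on schedule and ship copies to a DR target.

class ProtectionPolicy:
    def __init__(self, interval_hours, replicate_offsite):
        self.interval_hours = interval_hours
        self.replicate_offsite = replicate_offsite

class Cluster:
    def __init__(self):
        self.local_snaps = []
        self.dr_snaps = []  # stand-in for "sent to a service provider / cloud"

    def protect(self, workload, policy):
        """Snapshot locally; replicate offsite if the policy demands it."""
        snap = (workload, policy.interval_hours)
        self.local_snaps.append(snap)
        if policy.replicate_offsite:
            self.dr_snaps.append(snap)

cluster = Cluster()
gold = ProtectionPolicy(interval_hours=1, replicate_offsite=True)
bronze = ProtectionPolicy(interval_hours=24, replicate_offsite=False)
cluster.protect("erp-db", gold)     # hourly, replicated offsite
cluster.protect("test-vm", bronze)  # daily, local only
```

Attach the policy when the workload is provisioned and nobody ever has to remember to configure a backup agent.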
With total control over an organization's data, HCI providers are going to need to step up their security game. Data tagging, data locality and data sovereignty should be top of mind.
Systems administrators and customers need to be able to flag workloads and their associated data according to specific criteria. These could include, for example, regulatory compliant storage located only in certain geographical regions or with specific compliant third-party partners.
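For instance, a tag-driven placement check could gate every provisioning call. This is a sketch only; the policy names, regions and function are all hypothetical:

```python
# Hypothetical sketch of tag-driven placement: a workload tagged with a
# compliance policy may only land on storage in that policy's approved regions.

POLICIES = {
    "gdpr-eu": {"allowed_regions": {"eu-west", "eu-central"}},
    "us-hipaa": {"allowed_regions": {"us-east"}},
}

def placement_allowed(workload_tags, pool_region):
    """True only if pool_region satisfies every policy tag on the workload."""
    for tag in workload_tags:
        policy = POLICIES.get(tag)
        if policy and pool_region not in policy["allowed_regions"]:
            return False
    return True
```

Run the same check on migration as on creation and non-compliant placement never happens, rather than being cleaned up after the fact.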
Automated widgets that peer into files, LUNs, databases and VMs to look for specific data patterns are important too. Flagging credit card and other Personally Identifiable Information (PII) automatically would be very useful. Where is this data? How much of it is on non-compliant storage? Can we move it all to compliant storage automatically, or better yet, prevent it from ever being put on non-compliant storage in the first place?
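Card numbers are the classic example of a pattern such a widget would hunt for. A minimal sketch: a regex to find candidate digit runs, then the Luhn checksum to weed out false positives. Real scanners handle far more formats and PII classes than this.

```python
# Minimal PII-scanning sketch: find 13-16 digit runs (spaces/dashes allowed)
# and keep only those that pass the Luhn checksum used by payment cards.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number):
    """Luhn checksum: double every second digit from the right, sum, mod 10."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text):
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Point that at every file, LUN and database the platform owns and the "where is our card data?" question answers itself.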
Network security, and especially automated incident response, should probably be included as well. HCI solutions will form the core of turnkey enterprise clouds and they will own all the networking traffic for those workloads. Combined with the ability to track data, HCI vendors looking for a competitive edge could not only watch for basic signs of network intrusion, but also log access for audits and even flag workload creation or migration that looks a little off.
Hyperconvergence is a feature, just like all the other features discussed here that will eventually need to be welded onto the side. The product is a turnkey cloud. Those not delivering are doomed. ®