What hyper-converged storage really means for you

Anticipate the pendulum and avoid the pitfalls

To paraphrase an old joke, ask three IT “experts” for a definition of hyper-convergence and you'll get four different answers, depending on which areas they work in and what their employers are currently trying to sell.

We are going to simplify things, though, by looking at it through the prism of storage. This turns out to be a remarkably useful viewpoint, because storage, and how you collate and manage it, is crucial to pretty much every vision of hyper-convergence.

That is because hyper-convergence is one of the first places where software-defined storage (SDS) is becoming a reality, and more importantly, where storage gets both automated and integrated with the compute and networking elements, all under a single management layer. One of the key underpinnings here is storage virtualisation, an expression of SDS that has been kicking around the open systems space for at least a couple of decades now, and in mainframes for a lot longer than that.

At its simplest, storage virtualisation abstracts all the available physical storage – whether inside servers or in separate storage subsystems – into generic virtualised blocks. These are added to a shared pool which can then be carved up into new logical volumes, all managed by a distributed storage controller. A logical volume can draw its blocks from multiple physical devices, and the controller supports multiple storage classes, and therefore tiers. It may also be able to replicate, mirror, snapshot and migrate data in the background, invisibly to the host file systems, never mind the applications.
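To make the abstraction concrete, here is a minimal Python sketch of that pooling idea. Everything in it (PhysicalDevice, StoragePool, create_volume) is invented for illustration, not taken from any vendor's product:

```python
# Toy block-level storage virtualisation: physical devices contribute
# generic blocks to a shared pool, and logical volumes draw blocks from
# that pool regardless of which device they actually live on.

class PhysicalDevice:
    def __init__(self, name, num_blocks, tier):
        self.name = name
        self.tier = tier                     # e.g. "ssd" or "hdd"
        self.free = [(name, i) for i in range(num_blocks)]

class StoragePool:
    """Stand-in for the distributed storage controller."""
    def __init__(self, devices):
        self.devices = devices

    def create_volume(self, num_blocks, tier):
        # Carve a logical volume out of free blocks on any device of the
        # requested tier; the caller never sees device boundaries.
        allocated = []
        for dev in self.devices:
            while dev.tier == tier and dev.free and len(allocated) < num_blocks:
                allocated.append(dev.free.pop())
        if len(allocated) < num_blocks:
            raise RuntimeError(f"pool exhausted for tier {tier!r}")
        return allocated   # (device, block) pairs backing the volume

pool = StoragePool([PhysicalDevice("server1-ssd", 100, "ssd"),
                    PhysicalDevice("server2-ssd", 100, "ssd"),
                    PhysicalDevice("san-hdd", 1000, "hdd")])
vm_volume = pool.create_volume(150, "ssd")   # spans both servers' SSDs
```

The point is that the caller asks for capacity and a tier, never a particular disk; it is that indirection that lets the controller replicate, snapshot or migrate blocks behind the scenes.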

“The storage virtualisation layer is what makes hyper-convergence data centre-friendly,” explains Everett Dolgner, director of storage and replication product management for WAN specialist Silver Peak. “[Hyper-convergence] puts all the storage back in the server, but without an abstraction layer that's a problem – there's a reason we took it out of the servers and invented SANs in the first place!”

He continues, “If you look at the likes of [hyper-convergence start-ups] Nutanix and Simplivity, they're storage companies at core, with clustered file systems and storage virtualisation, so they can virtualise the internal disk and share it back. But they are also adding a management layer that is easier to use. The other value in hyper-convergence is that it has forced the entire industry to innovate, so we're seeing the large storage vendors come out with similar solutions.”

Wrapping it in tin

Nutanix and Simplivity are also two of the companies that have taken the opportunity to package all of this as an appliance based on commodity off-the-shelf (COTS) x86 hardware. Others taking this route include server manufacturers reselling VMware's EVO:RAIL hyper-converged virtual infrastructure.

“In some circles, hyper-converged systems surface as an underground, counter-SAN movement, led by server-centric proponents, unwilling to relinquish control to centralized storage administrators,” says Augie Gonzalez, product marketing director at software-defined storage developer DataCore. “In others, such as remote and branch offices, hyper-convergence arises from the practical need to collapse compute, storage and networking into the smallest footprint, at the lowest cost, yet with enough redundancy to achieve high availability.”

HP's storage marketing veep Craig Nunes adds that hyper-converged appliances can be a useful way to cut costs and escape the complexity of having multiple overlapping acquisition and upgrade cycles within IT.

“When we talk to CIOs and IT VPs, so much of the conversation is about flash and service level workloads,” he says. “But if we flip it and ask 'Aren't there workloads you want to cost-optimise?' they say 'Of course, most of our workloads we want to cost-optimise!' So you have SDS and industry-standard hardware.”

He continues, “The other dynamic is how people want to deploy hardware. It gives you building blocks with a fixed ratio of compute to storage, and alongside that you can deploy independent compute and storage. For example, our CS200, based on StoreVirtual, is automated out of the box, with everything preinstalled. It's great for people who are trying to move quickly or have staff constraints, and don't want to deal with the complexity of configuration.”

Packaged COTS hardware also simplifies the deployment of what is after all a highly complex virtualised infrastructure, says Jan Ursi, senior director for channel sales and marketing at Nutanix EMEA.

“Today's converged solutions are presented as appliances, but that's not necessary,” he explains. “Our software can run on any of the hypervisors we support, whether that's VMware, KVM or Hyper-V, so we could be software-only one day. But to guarantee the user experience we decided for now to run it only on our own Supermicro-built servers or on Dell.

“The disks are no different from others – we use all the usual suppliers, but we test them for the IOPS we need, and we test that the controller is high-performance, and so on. The storage resources are then presented to the hypervisor as an NFS target, or as SMB if you're using Hyper-V, which doesn't understand NFS.

“Our software is a software-defined storage controller in its own right and runs in the hypervisor on every physical server. It has direct access to the local storage resources in the machine – it can cut through. If the local storage is not enough, it can talk to other Nutanix machines for more.”
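What Ursi is describing is essentially a locality-aware read path. A rough sketch of the idea, with hypothetical names rather than anything from Nutanix's actual code, might look like this:

```python
# Per-node controller: serve a block from this server's own disks when
# possible ("cut through"), and only go over the network to a peer node
# when the data isn't held locally.

class NodeController:
    def __init__(self, node_id, local_blocks, peers=None):
        self.node_id = node_id
        self.local_blocks = local_blocks   # block_id -> data on local disks
        self.peers = peers or []           # other controllers in the cluster

    def read(self, block_id):
        # Fast path: direct access to this machine's storage.
        if block_id in self.local_blocks:
            return self.local_blocks[block_id]
        # Slow path: ask the other Nutanix-style nodes for more.
        for peer in self.peers:
            if block_id in peer.local_blocks:
                return peer.local_blocks[block_id]
        raise KeyError(f"block {block_id} not found in cluster")

a = NodeController("node-a", {1: b"local data"})
b = NodeController("node-b", {2: b"remote data"}, peers=[a])
a.peers = [b]
print(a.read(1))   # served from node-a's own disks
print(a.read(2))   # fetched from peer node-b
```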

Storage automation

The key is that it must be easy to manage and it must integrate and automate all the common services, suggests Nunes. “With hyper-convergence, look at the management user experience and the storage services on offer,” he says.

So what does this mean for storage and server techs, as well as for the business? The essential part is that the storage management must be automated, because without automation you can't consolidate compute and storage and run them as a single system where you spin up a VM and the storage just happens. Of course, this automation is already happening in cloud services, and indeed you could visualise a hyper-converged appliance as a private cloud-in-a-box.
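As a schematic illustration of “the storage just happens”, reusing the hypothetical StoragePool from the earlier sketch, VM provisioning can request its own volume rather than waiting on a separate storage ticket:

```python
# Sketch only: the compute request carries the storage request with it,
# so there is no separate LUN-creation or zoning step for an admin to do.
# One block per GB here purely for brevity.

def provision_vm(name, vcpus, ram_gb, disk_gb, pool):
    volume = pool.create_volume(disk_gb, "ssd")   # storage "just happens"
    return {"name": name, "vcpus": vcpus, "ram_gb": ram_gb, "volume": volume}

vm = provision_vm("web01", vcpus=2, ram_gb=8, disk_gb=20, pool=pool)
print(f"{vm['name']} backed by {len(vm['volume'])} pooled blocks")
```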

“What makes hyper-convergence fundamentally different is that it makes the storage invisible to people running VMs on top,” Ursi says. “Traditional storage had to be hooked up to servers and managed separately, creating LUNs, zones, masks and so on. With hyper-convergence, the VMs use internal hooks via software that presents the storage as an automated service.

“It is definitely going back to consolidation again, making sure you integrate as many moving parts as possible,” he adds. “Things have become too isolated, for example storage, servers and networks all have their own technical specialists, but they are all essential parts of the same service.”

“Hyper-converged is a necessary foundation for the cloud,” agrees Doug Hazelman, product strategy veep at virtualisation specialist Veeam Software. “Without automated provisioning and tools for analysing usage, are you just converging for the sake of it?”
