Past, present and future: A year in hyperconvergence

Infrastructure, acquisitions and retrenchments

It has been an interesting year in the world of hyper-converged infrastructure (HCI), with acquisitions, retrenchments and new product releases.

  1. Systems giant HPE kicked off 2017 by acquiring sexy startup SimpliVity for $650m
  2. Cisco bought its HCI software supplier Springpath for $320m
  3. NetApp announced its SolidFire-based entry into the HCI market
  4. Both HyperGrid and Atlantis retrenched

Hyper-converged systems came into being because traditional IT was becoming more and more complex, with separate resource silos for procuring, operating, maintaining and managing servers, shared external storage, networking and system software. If these elements could be combined in single, virtualised and scale-out systems with single-pane-of-glass management, then on-premises IT life would become easier.

It was startups such as SimpliVity that encapsulated and drove this idea, and customers liked what they saw. HPE bought SimpliVity in January 2017, taking into its fold one of the leading players in this emerging field. Before that deal, though, HPE had bought LeftHand Networks and its virtual SAN software.

The groundwork has clearly been laid for both HPE's position and its products to mature, which should help clarify the very concept of HCI itself.

HPE is one of the four mainstream enterprise suppliers of HCI, the others being Dell EMC, Nutanix and Cisco.

Leading these is HPE. Its SimpliVity 380 product is x86-based and has a proprietary ASIC used for deduplication work. This offloads that processing from the x86 CPUs, freeing them to run more virtual machines or containers. More than 2,000 SimpliVity customers stand to gain from this feature.
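Whether done in an ASIC or in software, deduplication of this kind is typically content-addressed: data is split into blocks, each block is hashed, and only blocks whose content has not been seen before are actually stored. A minimal software sketch of the idea (purely illustrative, not HPE's implementation):

```python
import hashlib

class DedupStore:
    """Toy block-level deduplicating store, content-addressed by SHA-256."""
    BLOCK_SIZE = 4096  # 4 KiB blocks, a common dedup granularity

    def __init__(self):
        self.blocks = {}  # hash -> block bytes (unique blocks only)

    def write(self, data: bytes) -> list:
        """Store data; return the list of block hashes (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.BLOCK_SIZE):
            block = data[i:i + self.BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Only keep the block if its content has not been seen before
            self.blocks.setdefault(digest, block)
            recipe.append(digest)
        return recipe

    def read(self, recipe: list) -> bytes:
        """Reassemble the original data from its block hashes."""
        return b"".join(self.blocks[h] for h in recipe)

store = DedupStore()
payload = b"A" * 8192 + b"B" * 4096  # two identical blocks plus one unique
recipe = store.write(payload)
print(len(recipe), len(store.blocks))  # 3 logical blocks, only 2 stored
```

The win is the gap between logical blocks written and unique blocks stored; hashing every block is exactly the sort of CPU work an offload ASIC takes away from the x86 cores.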

HPE also has a pure COTS-based HCI system, the HC 250, historically based on Apollo 2000 HPC hardware.

How does that relate to the SimpliVity 380? According to Stuart Gilks, HPE technical architect, hyperconverged and hybrid cloud: "The HC250 remains a current product offering hyper-converged infrastructure in a compute-dense form factor (4 nodes in 2U) whereas the SimpliVity 380 is currently only available in the 2U chassis being an appliance based on the [ProLiant] DL380 (gen9 and gen10).

"[The] HC250 remains the likely preferred choice for customers extending an existing HC250 infrastructure where maximum compute-density of HCI nodes is required, and SimpliVity 380 (optionally with dense compute nodes) is not as appealing," he told us.

That reflects the HPC background of the Apollo 2000.

Private and public cloud

HCI systems operate on-premises, and that on-premises IT world is going hybrid and becoming cloud-like in its provisioning, operation, management and consumption. That means HCI systems have to operate in a cloud-style world: they cannot ignore cloud ways of provisioning and consuming IT.

To that point HPE is offering GreenLake Flex Capacity systems that it supplies to customers as on-premises equipment. HPE operates and manages it, scaling resources up or down, with customers paying for use with metered billing. HPE SimpliVity is included as a GreenLake Flex Capacity HW/SW platform.

SimpliVity will also be manageable through HPE's OneSphere, its private, hybrid and public cloud management and operations system for developers, line-of-business and IT operations staff.

Of course there are competitors with their own approaches to HCI, among them Nutanix and Dell EMC/VMware.

Nutanix has its KVM-based Acropolis hypervisor and also supplies a vSphere-based product. It sees HCI hardware merely as a platform on which to run cloud-style software for provisioning and running applications and their stacks - be they virtualised or containerised. Alas, here, Nutanix ends up relying on other people's hardware - Cisco, Dell EMC, IBM, Lenovo and others.

One of Dell’s biggest businesses is servers, and the Dell EMC combination sees HCI systems as vehicles for supplying more of those Dell servers - with VMware and Dell EMC software, such as ScaleIO. Dell also OEMs Nutanix as the Dell XC alongside its main VxRail and VxRack systems.

Also in the field is Cisco, with HyperFlex systems based on its UCS servers and the Springpath software it bought in August this year.

The generic HCI system is based entirely on commercial-off-the-shelf (COTS) hardware: basically x86 servers and a virtual SAN constructed from direct-attached SSDs and disk drives.

IBM has a POWER processor-based HCI offering using Nutanix software, but this is not a mainstream HCI product.

Datrium and NetApp are a pair of hopeful, though less well established, enterprise HCI suppliers. Here you take a chance, though, as the components from both these providers are less well integrated under the skin than systems from HPE.

The Future

So there has been a lot of momentum in HCI, but what comes next? Two potential paths are opening up for HCI - neither exclusive of the other.

The big picture, of course, is that the HCI market is relatively untapped and potential customers are as yet unaware of the benefits of hyper-converged systems. This is not necessarily a problem, though, as fundamentally customers want access to applications that help run their business.

Of necessity, these applications run on standards-based x86 hardware and open software platforms. Ideally, these should be as straightforward to operate as a public cloud, with instant spin-up and spin-down and with metered billing to help customers monitor and maintain costs and operate the new infrastructure as a chargeable IT service. Systems should have the option to be on-premises where necessary, in order to satisfy data protection rules such as the EU’s forthcoming General Data Protection Regulation (GDPR) in 2018 and other compliance rules. There should also be the ability to move workloads around for business and IT flexibility. This should be achieved using virtual machines or containers, moving workloads between on-premises and public cloud locations as part of a hybrid architecture.

The whole system should, ideally, be managed through one pane of glass as well, a software-defined data centre.

With that in mind, what are those future paths?

One path for future HCI development applies very much to the on-premises world: composable infrastructure, with compute, networking and storage resources (file, block, object) held as pools that are composed on demand into a right-sized platform for an application, operated for as long as needed, and then returned to the pool.

You don't have fixed size, scale-out nodes, instead carving out what you need from a shared resource pool. HPE's Synergy system exploits this idea.

The other development strand is that application workloads, in containers or VMs or both, are instantiated as needed in a cloud environment - be it the hybrid world of private and public cloud - atop a kind of invisible underlying infrastructure.

There is a third path, of course: HPE's Synergy system could at some point include public cloud instantiation of resources as well as its current on-premises instantiation, in which case the two trends would come together.

Supported by: HPE



Biting the hand that feeds IT © 1998–2018