
Software-defined traditional arrays could be left stranded by HCI

Virtual SAN get-out for some ... believe the hype

Comment A rising tide lifts all boats and the Nutanix IPO signals that all the hyper-converged infrastructure product boats are going to get a lift. Where does that leave software-defined storage (SDS) – stranded on a mudbank?

If your business is providing cheap storage arrays by having customers run your array-controlling software on commodity controllers and JBODs, then the rising adoption of hyper-converged infrastructure is not good news.

There have been four basic ways of doing software-defined storage, and that particular way – the first below – looks threatened:

  1. Pure file and/or block storage array software needing commodity controller hardware and JBODs, like Nexenta;
  2. Object storage software needing linked servers with direct-attached storage;
  3. Legacy storage arrays controlled by software <-- the incumbent array vendors' marketing obfuscation approach;
  4. Virtual SAN software needing linked servers with direct-attached storage (DAS).

A fifth way has now emerged: providing software-defined storage as part of a software-defined hyper-converged infrastructure product – think Maxta, for example.

What exactly are we talking about here?

Hyper-converged infrastructure (HCI) systems converge servers running hypervisors with storage and networking. They are simpler to buy and scale out than separately purchased servers, hypervisor software, storage and networking gear. Most early ones used virtual SAN software, such as HPE's LeftHand Networks-based StoreVirtual, and VMware took this tack with VSAN.
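To make the virtual SAN idea concrete, here is a minimal sketch of the mechanism: each hypervisor host contributes its direct-attached disks, and a software layer pools them into one shared block store with replicated placement. Everything here – the `Node` and `VirtualSAN` names, the two-replica default – is invented for illustration, not any vendor's API.

```python
# Toy sketch of the virtual SAN idea: hypervisor nodes contribute
# direct-attached capacity, and the software layer presents it as
# one shared pool with mirrored volume placement. All names are
# invented for illustration only.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    capacity_gb: int
    used_gb: int = 0


class VirtualSAN:
    def __init__(self, nodes, replicas=2):
        self.nodes = nodes
        self.replicas = replicas  # copies kept for node-failure tolerance

    def pool_capacity_gb(self):
        # Usable capacity is raw capacity divided by the replica count.
        return sum(n.capacity_gb for n in self.nodes) // self.replicas

    def place_volume(self, size_gb):
        # Put each replica on a different node, emptiest first, so
        # losing one server does not lose the volume.
        targets = sorted(self.nodes, key=lambda n: n.used_gb)[:self.replicas]
        if len(targets) < self.replicas:
            raise RuntimeError("not enough nodes for replica count")
        for n in targets:
            if n.used_gb + size_gb > n.capacity_gb:
                raise RuntimeError(f"{n.name} is full")
            n.used_gb += size_gb
        return [n.name for n in targets]


nodes = [Node("hv1", 1000), Node("hv2", 1000), Node("hv3", 1000)]
vsan = VirtualSAN(nodes)
print(vsan.pool_capacity_gb())  # 1500 GB usable at two replicas
print(vsan.place_volume(200))   # e.g. ['hv1', 'hv2']
```

The point is the scaling model: adding a server adds compute and storage in one step, which is exactly the convergence that HCI buyers are choosing over external arrays.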

For software-defined storage vendors like DataCore this represents an opportunity: DataCore already uses virtual SAN technology, and can step into hyper-converged infrastructure systems by allying with server vendors.

For legacy storage array incumbents, HCI represented a threat, to which they have responded by developing their own HCI products – think Dell-EMC and HPE, for example – or by intending to do so, like NetApp.

For object storage vendors, HCI is primarily about storing primary, performance-sensitive data, not the secondary data that is object storage's prime target. However, since object storage hardware is basically a set of linked servers with DAS and networking, the hardware underpinnings are there if anybody wants to try to build an object storage-based HCI product (see the sketch below).
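Since the argument turns on object storage hardware being just linked servers with DAS and networking, here is a toy sketch of that mechanism: object keys are hashed to pick a node, so capacity grows by adding servers. All names are invented for illustration; this is not any product's API.

```python
# Toy sketch of object storage over linked servers with DAS: an
# object's key is hashed to choose which node (and its local disks)
# stores it. All names are invented for illustration only.

import hashlib


class ObjectStore:
    def __init__(self, node_names):
        # Each node's dict stands in for its direct-attached disks.
        self.nodes = {name: {} for name in node_names}

    def _node_for(self, key):
        # Deterministic placement: hash the key, map it onto a node.
        digest = hashlib.sha256(key.encode()).digest()
        index = int.from_bytes(digest[:8], "big") % len(self.nodes)
        return sorted(self.nodes)[index]

    def put(self, key, blob):
        self.nodes[self._node_for(key)][key] = blob

    def get(self, key):
        return self.nodes[self._node_for(key)][key]


store = ObjectStore(["node1", "node2", "node3"])
store.put("backup/2016-10-01.tar", b"...")
print(store.get("backup/2016-10-01.tar"))
```

That placement-by-hashing, scale-by-adding-boxes model is why the same hardware could, in principle, be repurposed as an HCI substrate.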

HCI is now a distinct threat, though, to pure software-defined storage vendors whose wares run on controller (server) plus array hardware. If I'm an HCI customer, I'm specifically rejecting shared external arrays in favour of converging external storage and servers in an HCI scheme. So that rules out VMAX, FAS, 3PAR and similar arrays, Nexenta-powered ones, and so on.

HCI is primarily block storage; what about filers? They face their own threats – file data going to the public cloud, or being transformed into object storage as file system scalability runs out.

It seems it will be a good idea for software-defined external storage array vendors to gain a hyper-converged capability as soon as possible, and hitch a lift on the rising HCI tide. ®
