Flash memory will send hyperconvergence to hyper-speed

More data, faster, is all very well until it hits your network and servers. Which is why convergence matters

If you listen to all-flash array vendors' shiniest, happiest propaganda, you'll learn that moving from disk to solid-state storage will more or less immediately turn your organisation into a white-hot innovator. Flash puts you in with a chance of both becoming the Uber of [insert your industry here] and avoiding an Uber-reaming by some new competitor where not a single employee has ever owned a suit.

Just add flash, the story goes, and habitual suit-wearers and shareholders alike will sail away into the sunset on yachts bought courtesy of a soaring share price.

What they're not saying in public is that flash arrays can also make a royal mess of your current rigs, in two ways.

The first mess flash can create is on your network, which almost certainly wasn't built to handle the kind of input/output operations per second (IOPS) flash arrays can deliver. Networks may therefore choke/stutter/collapse once asked to handle more data than they've seen in years. Hardened IT pros know this. Folks who read about flash in an airline magazine, or a business magazine in an airline lounge, are going to need some education: adopting all-flash arrays requires rather more work than crimping an RJ45 plug.
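A quick back-of-the-envelope sum shows why. The IOPS and block-size figures below are illustrative assumptions, not vendor quotes, but the shape of the problem holds:

```python
def iops_to_gbps(iops: int, block_size_bytes: int) -> float:
    """Convert a storage workload's IOPS at a given block size into
    the raw network bandwidth (in Gbps) needed to carry it."""
    bytes_per_second = iops * block_size_bytes
    return bytes_per_second * 8 / 1e9  # bytes/s -> bits/s -> Gbps

# Illustrative figures: a mid-range all-flash array pushing
# 500,000 IOPS at 8 KiB blocks...
demand = iops_to_gbps(500_000, 8 * 1024)

# ...comes to roughly 32.8 Gbps, more than three times what a
# typical 10 Gbps data-centre link can carry, before protocol
# overheads make matters worse.
print(f"{demand:.1f} Gbps needed")
```

Even generously provisioned 10Gbps networks start to look thin once a single array can generate that sort of demand.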

The second mess is inside your servers. The Register's virtualisation desk hears that one side-effect of a new flash array is unexpectedly high CPU utilisation, because more data is reaching each server. That's perfectly fine … unless you've tuned your hosts to particular CPU utilisation rates to cope with your very own server virtualisation scenarios.

Both of these issues are fixable. Just buy more stuff, implement it, fight through the inevitable glitches, get it all working and … then you'll be at the start line for that innovation surge.

Or just buy hyper-converged systems. Purveyors of such systems are already starting to make noises about their rigs letting you put flash to work without all the mucky integration required in roll-your-own systems, or template-defined converged systems.

That argument's going to get louder and louder before long, because when the next storage media come off the production line they'll make flash look like a snail. Systems that keep traffic inside a box – or at least running north-south inside a rack rather than east-west between racks – are going to look pretty good once even the mention of letting data loose onto a network makes a switch's ports shake with apprehension.

The industry knows this. Which is why Cisco, for example, is offering 100Gbps Ethernet at 40Gbps prices. The Borg and its competitors know that demand for data centre bandwidth is already booming and bound to boom again.

It's also why senior industry bods like John Donovan, Lenovo's executive director for enterprise product management, tell El Reg they see hyper-convergence taking off in a big way once NVMe prices get more pleasant.

It's clear that NVMe and 3D XPoint will – pardon the D-word – disrupt the way business networks and server fleets are designed. It's also clear that figuring out how to build kit to cope with the changes these new technologies unleash will be needed before anyone can become the Uber of anything. ®

Biting the hand that feeds IT © 1998–2018