Hyperconvergence: Just where is the technology going?
Just look over your shoulder
When I started in business IT back in 1989, the machine room housed an IBM System/38 and an IBM PC-AT.
The latter was the Novell NetWare 2.0a server. The S/38 had its proprietary connections, and the PCs were connected by traditional Token Ring. In fact a couple of PCs had IBM 5250 adaptor cards and terminal emulators so they could talk to the S/38 too, and as I recall the favourite pastime of that fun combination was randomly ceasing to work properly.
It wasn't long before we'd bridged our Token Ring into Ethernet. And that's the thing: here we are 25 years later and we're still using a technology that's still called Ethernet, and still uses Ethernet frames for sending data. In fact it's only with the relatively recent introduction of 10GbE that the traditional CSMA/CD approach has gone by the wayside, and the principle of shared-media hubs has been discarded. Ironic really: some of the early networks were simple star shapes before more cost-effective bus and ring topologies came along, and here we are back in that regime.
All looks a bit familiar...
Some things don't change much over the decades, then. Unix-style operating systems are still with us, in both commercial and Open Source flavours. Windows is still going strong, though one has to question how much it resembles what we were using in the early 1990s. I can't quite decide whether MacOS belongs in this list of “still around”, given that the move from version 9 to version 10 was more than a little radical … but on balance I'll include it given that it still looks pretty much like a Mac should.
There's not a great deal, then, that's actually a brand spanking new concept in technology right now. At least not in production technology (quantum computing is a bit out of our scope right now). That's not to denigrate the amazing developments achieved by the likes of Intel and AMD in microprocessor design, or the network technologists that have taken speeds up by several orders of magnitude – but at the end of the day microprocessors are still little black spiders with silver legs that talk down data buses to memory and peripherals, and Ethernet still has the same shape frame flying down bits of electronic string. The main big-deal technology that's genuinely new is flash memory … but only in its current super-high-speed guise: flash has been around since the 1980s in its various forms.
Over the years there have also been some new developments, but they've fallen by the wayside thanks to the dominance of two-hundred-pound gorillas like Ethernet and the Intel processor range. So for example if you're old enough you'll remember that Apple's Macintosh range was based on 68000-series processors and then switched to PowerPC (initially with pretty slow 680x0 emulation for backward compatibility), but the eventual switch to Intel was pretty inevitable. And in the mid-1990s IBM tried to persuade us that Asynchronous Transfer Mode – an established concept in telecoms – was a fab desktop technology, primarily because they'd figured out that they could modify their 16Mbit/s Token Ring technology to run ATM at 25Mbit/s with funky quality-of-service guarantees that made things like desktop video possible. Sadly people realised that if you had 100Mbit/s Ethernet there was probably enough bandwidth headroom to do video without faffing about with QoS.
Much consolidation has happened over the years too, with many technologies falling by the wayside (at least in the corporate infrastructure). LocalTalk, EtherTalk, IPX, Token Ring, ARCnet, DECnet, the list goes on. Ironically one of the genuine innovations in recent technology was the cause of the deaths of such protocols: the realisation by the likes of Rapid City in the 1990s that if you dumped all this multi-protocol nonsense and concentrated on IP you could simplify routing and do it in hardware brought us a new generation of routing.
Anyway, enough of the history lesson: what lessons can we learn from these examples to extrapolate where things are going? Let's look at the core components of our infrastructure.
Within the computer, microprocessor technology will continue in the short term to squeeze more and more into the same form factor, and memory will continue to squeeze more onto a single DIMM. Again this isn't a put-down for the incredible work of the designers, but that's the future in the short and medium term. In the network we have either multiple Gigabit links bonded together, or 10GbE connections; although 100GbE is well and truly trotting toward us, I reckon we'll be bonding 10GbE connections for a few years yet if we want to go faster than ten gigabits per second. The big innovations right now are in storage, whose development is moving so quickly that it's hard to decide whether to upgrade to the current new, fast technology or to wait six months for something newer and faster. And as I wrote recently, the storage aspect of the infrastructure is where everything slows down and forms the weakest link, speed-wise.
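For the curious, that link-bonding trick is mundane to set up these days. Here's a minimal sketch of what aggregating two 10GbE ports into a single logical link looks like on a Linux box using netplan – the interface names and addresses are invented for illustration, and your switch would need matching LACP (802.3ad) configuration on its side:

```yaml
# Hypothetical netplan config: bond two 10GbE NICs with LACP.
# Interface names (ens1f0/ens1f1) and the address are examples only.
network:
  version: 2
  ethernets:
    ens1f0: {}
    ens1f1: {}
  bonds:
    bond0:
      interfaces: [ens1f0, ens1f1]
      parameters:
        mode: 802.3ad            # LACP link aggregation
        transmit-hash-policy: layer3+4  # spread flows across both links
      addresses: [10.0.0.10/24]
```

Worth noting: a single flow still tops out at one link's speed – the hash policy spreads *different* conversations across the members, which is why bonding is a stopgap rather than a true substitute for a faster wire.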