Hyperconvergence: Just where is the technology going?
Just look over your shoulder
But physics will get in the way
In the medium term, though (2020 is bandied about a lot, based on extrapolating Moore's law), it won't be possible to make the transistors on a microprocessor any smaller unless someone manages to make a transistor smaller than a single atom (that's more than a little unlikely, by the way). So we'll have to think again. Similarly, as you cram more memory onto a single DIMM, the bus between the memory and its neighbours becomes congested and slows things down, and there's only so much data you can throw between the various bits of kit.
So … can we look backward and go forward?
I'm having a bit of a sense of déjà vu, though. I remember back in the 1990s when vendors were competing to cram as many processors into their SMP (Symmetric MultiProcessing) servers as possible – preferably more than the competition had announced the previous week. One of the vendors was running at six CPUs and was pretty frank with me: with that generation of architecture, adding any more than that just wouldn't be worth it, as the overhead of distributing the work to the additional processors would almost entirely negate the benefit of having the extra CPUs.
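The arithmetic behind that vendor's frankness is easy to sketch. A minimal Amdahl-style model in Python, assuming a fixed coordination cost per extra CPU (the serial fraction and overhead figures here are illustrative, not the vendor's actual numbers):

```python
# Illustrative model: speedup from N CPUs when each extra CPU adds a
# fixed coordination overhead. All parameter values are made up for
# the sketch, not measurements from any real 1990s SMP box.

def speedup(n_cpus, serial_fraction=0.05, overhead_per_cpu=0.02):
    """Amdahl-style speedup with a per-CPU coordination cost."""
    parallel_time = (1 - serial_fraction) / n_cpus  # work that scales
    coordination = overhead_per_cpu * (n_cpus - 1)  # cost that doesn't
    return 1.0 / (serial_fraction + parallel_time + coordination)

for n in (1, 2, 4, 6, 8, 12):
    print(f"{n:2d} CPUs -> speedup {speedup(n):.2f}x")
```

With these numbers the speedup peaks at around six to eight CPUs and then actually falls as more are added – exactly the wall that generation of SMP architecture hit.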
And right around that time Sequent (remember them? IBM bought them in the end) were producing the NUMA-Q architecture … in which the memory was distributed around the processors instead of being a contended central resource.
I can't help thinking, then, that before long we're going to see this happen again. Before processors get to the stage where the makers can't economically push the barriers of atom-level construction, they'll start backing off on increasing the number of processing transistors and use the space for on-board memory instead. They'll be taking leaves out of Inmos's 1980s notebooks and designing processors specifically to interconnect with other processors in a massively parallel way; by then the network speeds available for interconnects will likely be measured in hundreds of gigabits per second, which is more than a little handy.
As we'll have to have more processors, we'll start thinking laterally about how we fit them in the data centre. Or more accurately we'll start imitating the likes of Google and Microsoft, who have realised that common server designs are inefficient and who are big enough to do something about it. So they've come up with new ways of making power supplies more space-effective: server PSUs are mostly empty space anyway, so they're starting to put the UPS on-board and do away with the need for power protection at data centre level. And they've realised that there are far more efficient (and less cabletastic) ways of controlling your servers than by connecting every one to a KVM server via VGA adaptors and miles of copper. We're sort-of doing this by buying blade-based servers, but there's a yawning chasm between the blade approach and this larger scale efficiency drive.
As for storage: that'll continue to play catch-up. The data centre will also get quieter in the long run, because one of those technologies that seems to have been with us forever without changing hugely radically – the spinning disk – will start dwindling noticeably in the next few years, and cooling requirements will shrink as a result. As disks will remain the slow component for a while yet, though, we'll look to setups that keep far more in on-board RAM than ever before.
And where…
As I can't resist the occasional literal answer to a metaphorical question: the technology will go into the cloud. Whether it's cloud in the traditional sense (i.e. an amorphous puddle of computing resource) or cloud in this new sense where it's actually dedicated physical server resource (so it's not really cloud at all, but providers like the word and the lucrative bandwagon), it's the service providers that will own the kit and rent it to you.
And this is a good thing, because it means you can stop faffing about with server wrangling and get on with the other old-school concept that's suddenly (and rightly) had a resurgence in the last couple of years: software development (I dislike the word “coding” – it suggests bashing in code without designing it properly first). Someone else's techies will give you loads of power, and you can concentrate on getting the software to make the most of it. ®