Original URL: https://www.theregister.com/2013/09/30/servers_hyperscaling/

Hyperscaling gives you power when you need it

How to grow and shrink like Alice

By Dave Cartwright

Posted in Networks, 30th September 2013 17:03 GMT

Hyperscale computing, or simply hyperscaling, is a concept that people have started talking about only relatively recently.

Let's kick off with Webopedia's definition: “Hyperscale computing refers to the infrastructure and provisioning needed in distributed computing environments for effectively scaling from several servers to thousands of servers. Hyperscale computing is often employed in environments like cloud computing and big data.”

Déjà vu again

Hang on a moment, though. Almost exactly 10 years ago I wrote a feature for another publication about the then new concept of grid computing.

I described grid as an extension of massively parallel processing, in which you use a set of computers that are connected arbitrarily and sometimes widely dispersed geographically, potentially with several different owners.

“Instead of having one vast parallel system, you have a number of smaller ones, possibly on different sites, which can be used in whatever combinations are deemed appropriate. So systems can be used independently or pooled into any size of parallel system depending on demand,” I wrote.

So isn't hyperscaling simply the current name for, or to be more generous, a new variation on, grid computing?

Many of the concepts are similar – specifically the idea of growing and shrinking the compute power available for a given problem at a given time as demand dictates.

Shadowy presence

Is it a genuinely new concept? Or has it simply been hiding in the shadows for a few years while concepts such as software defined networking (SDN) and software defined storage (SDS) have developed to the extent that it is possible to actually implement it on more than a local or metropolitan scale?

Nick Williams, EMEA senior product manager at Brocade, thinks the demand for hyperscaling is increasing “due to the explosion in general of compute, storage and networking consumption”.

“SDN is a key enabler for managing and orchestrating hyperscaling. I agree that advances in this area are key in being able to manage vast compute resources for specific functions as needed,” he says.

The idea of scaling computing power at will to satisfy demand, then giving it up when things calm down, is one of the big selling points of cloud computing (another term that came along quite recently).

Years before cloud existed I worked for a travel company whose demand in the first week of January was more than 30 times greater than in the week leading up to Christmas. The company dealt with this simply by owning a socking great Sun server that worked quite hard for a small percentage of its life and spent most of its time just ticking over.

Pass the parcel

These days, of course, we hear all the time of companies using cloud processing to scale up and down. A favourite example is the online ticket companies that expand their compute power for a day or two when Elton John tickets go on sale, then wind down until the Stones decide to do another tour.

There is a problem with this: although you can cut costs and maximise performance by dumping your in-house hardware and using a cloud service, the resourcing problem does not go away.

Instead it simply moves one step upstream: the cloud provider now has to find the power to keep up with all its clients' fluctuating requirements.

The law of averages helps out the service providers to a certain extent because, unless they concentrate their sales effort on a small number of vertical markets (by taking on loads of travel companies with similar demand patterns, for example), different clients are quite likely to have big demands at different times.

You can't rely on a completely even distribution of loading, however, so the cloud provider has either to over-provision or be a bit clever about how it services demand.
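A back-of-the-envelope simulation makes the point. The Python sketch below is purely illustrative – the tenant count, baselines and 30-times spikes are invented rather than drawn from any real provider – but it shows why aggregating many bursty customers flattens the peaks without eliminating them.

    import random

    random.seed(1)

    DAYS = 365
    TENANTS = 50

    # Each tenant ticks over at a baseline but spikes hard on a few random
    # days -- think ticket on-sales, or that first week of January.
    def tenant_demand():
        baseline = random.uniform(5, 15)            # arbitrary units of compute
        spike_days = set(random.sample(range(DAYS), 3))
        return [baseline * (30 if d in spike_days else 1) for d in range(DAYS)]

    tenants = [tenant_demand() for _ in range(TENANTS)]
    aggregate = [sum(t[d] for t in tenants) for d in range(DAYS)]

    def peak_to_mean(series):
        return max(series) / (sum(series) / len(series))

    print("single tenant peak/mean: %.1f" % peak_to_mean(tenants[0]))
    print("aggregate peak/mean:     %.1f" % peak_to_mean(aggregate))
    # The aggregate ratio comes out far lower than any individual tenant's,
    # which is why a provider needs much less headroom than the sum of its
    # customers' worst cases -- but it never reaches 1, hence the choice
    # between over-provisioning and being clever.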


The likes of Amazon and Microsoft are large enough to own the hardware within which thousands of customers expand while thousands of others contract, but surely there must be mileage in enabling smaller companies to step outside and borrow someone else's computing power to service their large but infrequent surges in demand.

“That is definitely where it is going, although there are still too many technology boundaries,” says Kurt Glazemakers, CTO at CloudFounders.

“Hardware used to be the biggest issue. Hardware (meaning compute and storage technology) needed to be the same to start using capacity and it was hard to share resources.

“Virtualisation has solved this, but the hypervisor (and the SDN and SDS features that come with it) has become the new technology boundary. It is very easy to start using Amazon, Microsoft or Google but it is much harder to move away.

“And with VMware now becoming a service provider, the same counts for the VMware hypervisor. You can’t easily move compute capacity when both locations have different hypervisors.”

Distance no object

Glazemakers's reference to SDN is crucial, because SDN is the main difference between grid computing and hyperscaling.

Even when grid was in its infancy, vendors did cunning things such as taking dissimilar hardware and making it look similar by whacking a Java Runtime Environment (JRE) on top of it. The software doing the work neither knew nor cared about the hardware because it just ran on the JRE.

Admittedly, you had to do a lot of tricky manual work to make the devices communicate with each other, particularly when they were on different sites (for example, at a bunch of collaborating universities sharing compute power).

But we have now reached the point where SDN and its peers are allowing us to bridge that gap.

Two organisations at opposite ends of the internet can implement virtual machines on their sites which think they are on the same subnet despite being multiple Layer 3 hops apart. The SDN layer is doing some funky work to make the network behave like a virtual Layer 2 switch instead of a routed Layer 3 WAN.
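The article is protocol-agnostic, but VXLAN is one common way of pulling off that trick: the overlay wraps each Ethernet frame in a UDP packet that can cross any number of routed hops between the two sites' tunnel endpoints. The Scapy sketch below is illustrative only – the addresses and the VNI are all invented – and simply shows the shape of the encapsulation.

    # Builds (but does not send) a VXLAN-encapsulated frame: the L2-over-L3
    # wrapping that lets two distant hypervisors behave like ports on the
    # same virtual switch.
    from scapy.all import Ether, IP, UDP
    from scapy.layers.vxlan import VXLAN

    # Inner frame: two VMs that believe they share a subnet (invented addresses).
    inner = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") /
             IP(src="10.0.0.1", dst="10.0.0.2"))

    # Outer packet: the two sites' tunnel endpoints, routable across however
    # many Layer 3 hops sit between them. 4789 is the standard VXLAN port.
    outer = (IP(src="198.51.100.10", dst="203.0.113.20") /
             UDP(sport=49152, dport=4789) /
             VXLAN(vni=5001) /
             inner)

    outer.show()    # prints the nested layers of the encapsulation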

So where each grid processing task in the old days tended to be specific to one project or chunk of work, the platform can now be more multi-purpose.

“With the software and virtualisation approach it becomes easier and quicker to scale up or down. Grid computing from the old days was typically designed to scale up or down for specific tasks,” says Glazemakers.

“With the current technology, there are hardly any limits on the potential use cases. It is the biggest difference.”

It's just an illusion

David Noguer Bau, head of service provider marketing EMEA at Juniper, seems to think along the same lines.

“Cloud and grid computing developed models to scale largely a number of processes (cloud) or split a process in parallel computing model (grid). But both lacked interaction with the network,” he says.

“SDN provides to cloud (via integration with the orchestrator) a way to evolve the network configuration as fast as virtual machines do.”

When I quoted Webopedia's definition of hyperscaling, there were a few words missing: an almost throwaway clause in the last sentence says: “[Hyperscale] is commonly associated with platforms like Apache Hadoop.”

Now Hadoop, a software framework for building distributed applications, has been around since 2005. Distributed computing is, after all, a long-established concept. Is hyperscaling anything new, then? In a word, no.
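For anyone who has never touched it, Hadoop expects work to be decomposed into map and reduce phases. A minimal word count in the Hadoop Streaming style looks something like the Python below (illustrative only; the framework pipes data through stdin and stdout and handles distribution, sorting and scheduling itself).

    # mapper.py -- Hadoop Streaming feeds input lines on stdin and expects
    # tab-separated key/value pairs on stdout.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

    # reducer.py -- the framework delivers mapper output sorted by key, so
    # a reducer can accumulate counts until the key changes.
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(value)
    if current is not None:
        print("%s\t%d" % (current, count))

The point of showing it is the shape: the job has to be poured into Hadoop's mould before the cluster will scale it, which is exactly the contrast with the SDN approach.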


The point is, though, that SDN provides us with new and far simpler ways of achieving it. To implement hyperscale using Hadoop you need to architect your software using its framework. With SDN you may not even need to do anything special at all with your software.

It gives us a world where we can take an application that is designed to run on multiple servers near each other and run it on a set of machines in different locations because SDN makes it think that it is on a local network.

So long as you are not constrained by the laws of physics (even the smartest SDN implementation can't give you a sub-millisecond round-trip time between London and Glasgow, though with protocol spoofing it can have a go some of the time), SDN will make distributed computing easier and easier.
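The physics is easy to check. Taking roughly 650 km of fibre between London and Glasgow (an assumed figure; real routes wander further than the straight line) and light travelling at about two-thirds of its vacuum speed in glass:

    # Rough lower bound on a London-Glasgow round trip over fibre.
    distance_km = 650.0        # assumed route length
    speed_km_per_s = 200000.0  # ~2/3 of the speed of light, typical for fibre

    one_way_s = distance_km / speed_km_per_s
    rtt_ms = 2 * one_way_s * 1000
    print("theoretical minimum RTT: %.1f ms" % rtt_ms)   # about 6.5 ms

That is six milliseconds or so before a single switch, hypervisor or server has added its own delay, which is why no overlay can pretend the two cities share a rack as far as latency is concerned.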

Anyone can do it

As distributed computing gets easier, so it becomes possible to scale your compute resources on demand outside your infrastructure and in someone else's – your cloud provider or a higher-tier service provider whose kit you dip into when you run out of power in your network.
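What that dipping-in might look like in practice is sketched below. Everything here is an assumption made for illustration – the boto3 library for AWS, the load-average threshold, the placeholder AMI and instance type – rather than anything the article prescribes.

    # Hedged sketch of "bursting" into a provider when local kit runs out of
    # steam: watch the local load and rent an upstream machine for the spike.
    import os
    import boto3

    LOAD_THRESHOLD = 8.0                    # arbitrary 1-minute load average
    BURST_AMI = "ami-0123456789abcdef0"     # placeholder image id
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    def maybe_burst():
        load_1min, _, _ = os.getloadavg()
        if load_1min > LOAD_THRESHOLD:
            # Local capacity exhausted: rent a machine upstream for the spike.
            ec2.run_instances(ImageId=BURST_AMI,
                              InstanceType="m5.large",
                              MinCount=1, MaxCount=1)

    maybe_burst()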

Although hyperscaling concepts have been around for a while, SDN is taking us a big step forward in being able to do hyperscaling more flexibly, faster and with considerably less expertise.

With many network hardware vendors supporting SDN concepts in their routers and switches, and the virtualisation vendors supporting the same standards in the layers they provide to the enterprise, hyperscaling is becoming open to all of us.

Whether it will be widely adopted or stay a niche concept remains to be seen, of course.

Williams sums it up. “I think SDN use will grow significantly within the data centre and service provider networks over the next one to three years, particularly in the area of orchestration through frameworks such as OpenStack with open APIs, and programmatic control through OpenFlow,” he says.

“Hyperscaling will increase – but to what level is hard to define. What is clear is that SDN will be a key enabler in managing large-scale combinations of compute resources.” ®