Channel surfers and the irresistible rise of Content Delivery Networks

When load balancing just won't cut it

Video: forget everything else

Your video, on the other hand, needs everything you can throw at it. The compute resource behind it will be far superior, and it needs to be to handle the increased overhead of streaming video.

The servers might be beefier and there will probably be a lot more of them, and the network traffic and load-balancing will have the taps fully-open for traffic of this type.

Think about delivering content like the different tiers in a storage array. If you’ve got three tiers in your storage – all flash, fast SAS and slow SATA – then best practice is to segregate what you store and where you store it based on performance priority.

You’ll want your 'priority one' customer-facing applications (and probably your databases, if space allows) on the flash portion of your array. This would be your video playback, in our YouTube analogy.

Then your second-string systems – that’s everyday applications that perhaps have a lower business priority or smaller user footprint, and supporting servers and services – will go onto your second tier of storage, the SAS layer.

And your bottom-of-the-pile storage uses, in speed vs priority terms – backups, templates, infrequent read/write storage like file servers – will go on the near-line (SATA) storage. It’s a little easier to visualise the distributed delivery of different portions of an intangible online application based on their performance requirements, when you put it in terms of something we can place our hands on every day.

If you’re a US business just delivering small chunks of text then you might not mind your European or global customers doing a short round-trip over the ocean, but if you’re looking for a good user experience for high-bandwidth media then you’ll want to be installing content delivery nodes regionally and globally.

If that’s the case, then your load-balancing and network design needs to be really smart, too.

Once upon a time load-balancing was a relatively simple affair, akin to a basic traffic light system. Traffic was routed back to different servers in a strict order. The likes of Windows Network Load Balancing – which allows you to converge multiple servers onto a central IP for traffic routing purposes – seemed positively advanced compared with simple DNS round robin, but modern load-balancing technologies are a marvel.
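The old round-robin approach can be sketched in a few lines – a minimal illustration, with made-up server names, of handing each new connection to the next server in a fixed order with no awareness of health or load:

```python
from itertools import cycle

# Hypothetical back-end pool; DNS round robin behaves much like this,
# rotating through addresses in strict order.
servers = ["web-01", "web-02", "web-03"]
rotation = cycle(servers)

def next_server():
    """Return the next server in the fixed rotation."""
    return next(rotation)
```

The weakness is obvious: a dead or overloaded server still gets its turn, which is exactly what the probing described below fixes.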

They don’t just route traffic in a rudimentary order: they probe your back-end servers to see which is best placed to receive the traffic. If they find a server is overloaded or having an issue, they’ll remove it from the rotation until the issue is resolved.
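That probe-and-remove behaviour might be sketched like this – an illustrative toy, not any vendor's API, where the probe function and server names are assumptions:

```python
from itertools import cycle

class ProbingBalancer:
    """Round robin that skips servers failing their health probe."""

    def __init__(self, servers, probe):
        self.servers = servers
        self.probe = probe  # callable: returns True if the server is healthy
        self._rotation = cycle(servers)

    def next_server(self):
        # Try each server at most once per call; unhealthy ones are
        # skipped until a later probe finds them recovered.
        for _ in range(len(self.servers)):
            candidate = next(self._rotation)
            if self.probe(candidate):
                return candidate
        raise RuntimeError("no healthy back-end servers")
```

A real appliance probes continuously in the background rather than per-request, but the effect is the same: sick servers drop out of the rotation automatically.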

The capabilities of modern load balancers – like an F5 BIG-IP or Citrix NetScaler – are really quite remarkable. They can make an intelligent decision about which server to pass connections back to based on a number of factors – CPU and memory load, disk access times, network interface usage, round-trip response time to their own probes, database query times – you can set virtually any criteria to determine the best performance of your CDN.
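In spirit, that multi-factor decision is a weighted scoring problem. Here's a minimal sketch – the metric names, weights and figures are invented for illustration, not F5 or Citrix configuration:

```python
def best_server(metrics, weights):
    """Pick the server with the lowest weighted score (lower is better)."""
    def score(stats):
        return sum(weights[name] * value for name, value in stats.items())
    return min(metrics, key=lambda server: score(metrics[server]))

# Hypothetical probe results: web-01 is busy, web-02 is idle but
# slightly further away in round-trip terms.
metrics = {
    "web-01": {"cpu": 0.90, "rtt_ms": 12},
    "web-02": {"cpu": 0.30, "rtt_ms": 15},
}
# Weights express how much each factor matters to you.
weights = {"cpu": 100, "rtt_ms": 1}
```

With these weights the idle server wins despite the longer round trip, which is the point: you decide which criteria dominate.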

They can also operate on the client’s information, though, which is where they really come in handy for CDNs.

A rudimentary inspection of the client’s inbound connection by the load balancer can tell you where they’re coming from, what sort of connection speed they have, and even what browser and plug-ins they might be using.
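Much of that comes straight off the request itself. A rough sketch of what such an inspection might pull out – the header names are standard HTTP, but the function and its shape are illustrative:

```python
def inspect_client(remote_addr, headers):
    """Extract routing-relevant facts from an inbound HTTP request.

    remote_addr feeds a geo-IP lookup in a real deployment; the
    User-Agent header hints at browser and platform.
    """
    return {
        "ip": remote_addr,
        "browser": headers.get("User-Agent", "unknown"),
        "languages": headers.get("Accept-Language", ""),
    }
```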

It will then invoke GSLB (that’s Global Server Load Balancing) to route the customer through to the right geographical content delivery nodes to ensure they receive the best user experience.
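The GSLB decision itself boils down to "map the client's region to the nearest node". A toy version, with made-up region codes and node hostnames – real GSLB answers via DNS, informed by geo-IP data and node health:

```python
# Hypothetical regional content delivery nodes.
NODES = {
    "eu": "cdn-eu.example.com",
    "us": "cdn-us.example.com",
    "apac": "cdn-apac.example.com",
}

def route(region, default="us"):
    """Return the delivery node for a client's region, falling back
    to a default when the region is unrecognised."""
    return NODES.get(region, NODES[default])
```

So a European viewer is pointed at the EU node rather than doing the transatlantic round trip the article warns about.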
