How to stop network traffic fighting like cat and dog

Picking the right route for the right packets

Sysadmin blog Bandwidth and latency are two separate but equally important network considerations. An ideal network will have high bandwidth and low latency. The real world is rarely so obliging.

For some applications, we don't care about latency. It doesn't really matter how long the packets in an FTP file transfer take to get from A to B; what we really care about is the aggregate bandwidth.

In other instances we have small to moderate amounts of data that need to move as close to realtime as possible. VoIP data is sensitive to latency issues, as are multiplayer video games and RDP sessions. There are only two solutions to having both kinds of protocol living on the same network: traffic-manage them or build out enough bandwidth to handle peak demand.
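The traffic-management option usually means carving the link into classes and giving the latency-sensitive ones priority. A minimal Linux sketch using tc's HTB qdisc is below — the interface name, rates and port numbers are assumptions for illustration, not a recommended production config:

```shell
# Root HTB qdisc on the uplink; unclassified traffic lands in class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20

# Parent class sized to the link; two children: realtime (1:10) and bulk (1:20).
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 3mbit ceil 10mbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 7mbit ceil 10mbit prio 1

# Steer RDP (TCP 3389) and SIP/VoIP signalling (5060) into the priority class.
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 3389 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 5060 0xffff flowid 1:10
```

The same idea scales up to what carriers do with DPI boxes; the difference is who decides which class your packets belong in.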

The world's internet service providers are facing this dilemma right now.

In general, ISPs feel that the growth in demand for bandwidth has outstripped their ability to provide capacity. As such they are increasingly turning to traffic management, either as a supplement to additional network build-outs or as a way to delay additional build-outs for as long as possible. Some carriers have found a good balance while others are handling it particularly badly.

Traffic management is generally bad for high-bandwidth services. If you regularly engage in shuffling around bulk quantities of data, then traffic-managed ISPs could be a problem for you. Businesses – especially smaller ones – are slowly gravitating towards online storage and backup services.

Services like Mozy, Dropbox or iDrive are just too handy. But they can exact an unforeseen toll if your ISP throttles your connection. Some ISPs may only slow the protocols involved in the file transfer. Others will throttle all traffic on your connection.

The other side of this coin is that a well-managed network is a godsend for people trying to get realtime work done remotely. RDP becomes a slide show at around 100msec of latency; at that point it is usable, but only just. Roughly the same is true for VoIP and for most multiplayer video games.

For time-sensitive protocols, latencies below 50msec are ideal. At 100msec the lag starts to become noticeable, and 300msec is the "quit in frustration" point. Here, ISPs that properly manage their networks deliver a quality of service that is noticeably better than those that don't.
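Those thresholds can be captured in a trivial classifier — a toy sketch, where the function name and bucket labels are mine, not any standard API:

```python
def classify_latency(rtt_msec: float) -> str:
    """Bucket a round-trip time using the thresholds discussed above."""
    if rtt_msec < 50:
        return "ideal"        # realtime protocols feel instantaneous
    elif rtt_msec < 300:
        return "noticeable"   # around 100msec users start to feel the lag
    else:
        return "unusable"     # 300msec: the "quit in frustration" point
```

Something like this sits at the heart of any monitoring dashboard that decides whether a link is fit for RDP or VoIP.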

The end result is a complicated mess. In some cases you might well end up having to have connections to different ISPs for different kinds of traffic. My home province of Alberta is an excellent example.

We have two ISPs; one cable, one DSL. The cable operator manages traffic on its network fiercely. It also enforces traffic caps. The DSL operator doesn't manage traffic at all, and it posts traffic caps but does not enforce them. The two networks also have absolutely terrible peering; data traveling from one network to the other takes a big latency hit and moves at low bandwidth.

So for bulk file transfers, it is good to have an account with the local DSL provider. It makes a great ISP to use for your corporate VPN ... provided, of course, your company also has a link on that ISP. The cable provider, however, is the one you want for your RDP traffic – again, assuming your company also has a link on that ISP.

We've had to build some interesting traffic direction systems to cope with this local ISP oddity, and I suspect that this sort of thing will become a lot more common around the world. Fights over peering are leading to a balkanisation of the internet. Differing traffic management policies and differences in last-mile technologies will end up with these different networks being attractive for separate but simultaneous critical usage cases.
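A traffic direction system of the sort described can start as nothing more than a rule table mapping traffic classes to uplinks. A minimal sketch in Python — the link names and port-to-class mapping are hypothetical, stand-ins for the two Alberta ISPs above:

```python
# Hypothetical uplinks: the latency-managed cable link for realtime
# traffic, the unmanaged cap-free DSL link for bulk transfers.
LINKS = {"realtime": "cable", "bulk": "dsl"}

# Toy mapping of destination ports to traffic classes.
REALTIME_PORTS = {3389, 5060}   # RDP, SIP/VoIP
BULK_PORTS = {21, 873}          # FTP, rsync

def pick_uplink(dst_port: int) -> str:
    """Choose an uplink for a new flow based on its destination port."""
    if dst_port in REALTIME_PORTS:
        return LINKS["realtime"]
    # Treat everything else, known bulk or unknown, as bulk traffic.
    return LINKS["bulk"]
```

In practice this logic ends up expressed as Linux policy routing (ip rule plus per-link routing tables) or as PBR on a router, but the decision table is the same.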

Government attempts to control the internet – particularly in the United States – will also have an effect on network selection. In some nations data transmitted wirelessly has a different legal status than data that remains wired end-to-end. Corporations and individuals simply may not want some traffic transiting across networks (or through nations) with unfriendly legal frameworks.

For cloud services and remote/virtual desktops to really take off, we are going to have to start building networking gear that is not only content aware but context aware: the right type of information transmitted to the right provider, arriving at the right subscriber only through approved intermediaries.
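What "approved intermediaries" might mean in code is a policy check over the path a flow would take. A deliberately simplified sketch — the data classes, country codes and approval sets below are invented for illustration:

```python
# Hypothetical policy: each data classification may only transit
# networks in an approved set of jurisdictions.
APPROVED = {
    "regulated": {"CA"},             # e.g. keep regulated data in-country
    "public":    {"CA", "US", "EU"},
}

def path_allowed(data_class: str, hop_countries: list[str]) -> bool:
    """True only if every transit hop sits in an approved jurisdiction."""
    allowed = APPROVED.get(data_class, set())
    return all(country in allowed for country in hop_countries)
```

Context-aware gear would evaluate something like this per flow, steering or refusing traffic whose path crosses an unfriendly legal framework.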

The simple days of "one internet link that does it all" are coming to a close just as we start to become ever more dependent upon remotely provisioned services. ®
