How to stop network traffic fighting like cat and dog

Picking the right route for the right packets


Sysadmin blog Bandwidth and latency are two separate but equally important network considerations. An ideal network will have high bandwidth and low latency. The real world is rarely so obliging.

For some applications, we don't care about latency. It doesn't really matter how long the packets in an FTP file transfer take to get from A to B; what we really care about is the aggregate bandwidth.

In other instances we have small to moderate amounts of data that need to move as close to realtime as possible. VoIP is sensitive to latency issues, as are multiplayer video games and RDP. There are only two solutions when both kinds of protocols share the same network: traffic-manage them, or build out enough bandwidth to handle peak demand.

The world's internet service providers are facing this dilemma right now.

In general, ISPs feel that the growth in demand for bandwidth has outstripped their ability to provide capacity. As such they are increasingly turning to traffic management, either as a supplement to additional network build-outs or as a way to delay additional build-outs for as long as possible. Some carriers have found a good balance while others are handling it particularly badly.

Traffic management is generally bad for high-bandwidth services. If you regularly shuffle around bulk quantities of data, traffic-managed ISPs could be a problem for you. Businesses – especially smaller ones – are slowly gravitating towards online storage and backup services.

Services like Mozy, Dropbox or iDrive are just too handy. But they can exact an unforeseen toll if your ISP throttles your connection. Some ISPs may only slow the protocols involved in the file transfer. Others will throttle all traffic on your connection.

The other side of this coin is that a well-managed network is a godsend for people trying to get realtime work done remotely. RDP becomes a slide show at around 100msec of latency; at this latency it is usable, but only just. Roughly the same is true for VoIP and for most multiplayer video games.

For time-sensitive protocols, latencies below 50msec are ideal. 100msec starts to become noticeable and 300msec is the "quit in frustration" point. Here, ISPs that properly manage their networks deliver a quality of service that is noticeably better than those that don't.
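Those thresholds can be wired into a quick monitoring helper. The sketch below, in shell, is my own illustration – the function name and band labels are invented; only the millisecond cut-offs come from the figures above.

```shell
#!/bin/sh
# Map a measured round-trip time (in whole milliseconds) to the
# user-experience bands described above. Hypothetical helper; the
# 50/100/300msec thresholds are the only figures taken from the text.
classify_latency() {
  rtt_ms="$1"
  if [ "$rtt_ms" -lt 50 ]; then
    echo "ideal"
  elif [ "$rtt_ms" -lt 100 ]; then
    echo "tolerable"
  elif [ "$rtt_ms" -lt 300 ]; then
    echo "noticeable"
  else
    echo "quit in frustration"
  fi
}

# Typical use: feed it the average RTT from ping (output format, and the
# awk field, vary by platform – adjust for your system):
# rtt=$(ping -c 5 example.com | awk -F'/' 'END {printf "%.0f", $5}')
# classify_latency "$rtt"
classify_latency 120   # → noticeable
```

Run from cron against each uplink and you have a crude but honest record of which provider is actually delivering realtime-grade latency.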

The end result is a complicated mess. In some cases you might well end up having to have connections to different ISPs for different kinds of traffic. My home province of Alberta is an excellent example.

We have two ISPs; one cable, one DSL. The cable operator manages traffic on its network fiercely. It also enforces traffic caps. The DSL operator doesn't manage traffic at all, nor does it enforce its posted traffic caps. The two networks also have absolutely terrible peering; data traveling from one network to the other takes a big latency hit, and moves at low bandwidth.

So for bulk file transfers, it is good to have an account with the local DSL provider. It makes a great ISP to use for your corporate VPN ... provided, of course, your company also has a link on that ISP. The cable provider, however, is the one you want for your RDP traffic. Again, assuming your company also has a link on that ISP.
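On a Linux gateway with both uplinks attached, this kind of split can be sketched with policy routing: bulk traffic defaults out the DSL link while latency-sensitive sessions are marked and steered out the cable link. The interface names, gateway addresses and port number below are illustrative assumptions, not details from our actual setup.

```shell
#!/bin/sh
# Policy-routing sketch for a dual-ISP Linux gateway (run as root).
# Assumed layout: eth0 = DSL uplink (bulk transfers),
#                 eth1 = cable uplink (latency-sensitive traffic).
# Gateway addresses are documentation-range placeholders.

# One routing table per uplink
ip route add default via 192.0.2.1    dev eth0 table 100   # DSL gateway
ip route add default via 198.51.100.1 dev eth1 table 200   # cable gateway

# Mark latency-sensitive traffic – here RDP on TCP 3389 – with fwmark 2
iptables -t mangle -A PREROUTING -p tcp --dport 3389 -j MARK --set-mark 2

# Marked packets route via cable; everything else falls through to DSL
ip rule add fwmark 2 lookup 200
ip rule add lookup 100 priority 32000
```

A production version would also need per-uplink source NAT and connection marking so reply packets leave the interface they arrived on, but the three-piece skeleton – tables, marks, rules – is the whole idea.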

We've had to build some interesting traffic direction systems to cope with this local ISP oddity, and I suspect that this sort of thing will become a lot more common around the world. Fights over peering are leading to a balkanisation of the internet. Differing traffic management policies and differences in last-mile technologies will end up with these different networks being attractive for separate but simultaneous critical usage cases.

Government attempts to control the internet – particularly in the United States – will also have an effect on network selection. In some nations data transmitted wirelessly has a different legal status than data that remains wired end-to-end. Corporations and individuals simply may not want some traffic transiting across networks (or through nations) with unfriendly legal frameworks.

For cloud services and remote/virtual desktops to really take off, we are going to have to start building networking gear that is not only content aware, but context aware: the right type of information transmitted to the right provider, arriving at the right subscriber only through approved intermediaries.

The simple days of "one internet link that does it all" are coming to a close just as we start to become ever more dependent upon remotely provisioned services. ®
