The internet just BROKE under its own weight – we explain how

Next time, big biz, listen to your network admin

512KDay On Tuesday, 12 August 2014, the internet's global routing table grew past 512,000 routes, an arbitrary limit that many older routers cannot cope with in their default configuration. This 512K route limit is something we have known about for some time.
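
To put some numbers on it, here is a minimal Python sketch of the headroom arithmetic, assuming the 512,000 figure above as both the default allocation and the table size on the day. The real default varies by platform, and some vendors count "512K" as 2^19 = 524,288 entries, so treat the constants as illustrative.

```python
# Rough headroom calculation for a router whose hardware forwarding table
# (TCAM) has a fixed default allocation for IPv4 routes. The 512,000 figure
# follows the reports above; some platforms treat "512K" as 2**19 = 524,288
# entries, so check your own kit's documentation.

DEFAULT_IPV4_SLOTS = 512_000      # assumed default allocation
TABLE_SIZE_12_AUG_2014 = 512_000  # approximate global BGP table size that day


def headroom(table_size: int, capacity: int = DEFAULT_IPV4_SLOTS) -> tuple[int, float]:
    """Return (free route slots, percentage of capacity used)."""
    return capacity - table_size, 100.0 * table_size / capacity


free, used = headroom(TABLE_SIZE_12_AUG_2014)
print(f"{used:.1f}% of the allocation used, {free} route slots left")
# Once the free slots hit zero, new prefixes fall out of hardware forwarding
# and the "patchy - or even no - connectivity" described below begins.
```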

The fix for Cisco devices – and possibly others – is fairly straightforward. Yet internet service providers and businesses around the world chose not to address the issue in advance, and the result was major outages across the globe.
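
For the curious, remediation on affected kit amounts to a configuration change followed by a reload. The sketch below uses the Netmiko Python library to push the sort of change that was circulating at the time; the hostname, credentials and the exact command and value are placeholders that differ by platform, so treat it as an illustration rather than a recipe and check your vendor's advisory first.

```python
# Illustrative only: pushing a TCAM reallocation to a Cisco IOS device with
# Netmiko. Host, credentials and the exact command/value are assumptions;
# consult the vendor advisory for your platform. The change does not take
# effect until the box is reloaded - the painful part discussed later.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "core-router.example.net",  # placeholder
    "username": "admin",
    "password": "change-me",
}

conn = ConnectHandler(**device)
# Give IPv4 a larger share of the forwarding table than the 512K default.
output = conn.send_config_set(["mls cef maximum-routes ip 1000"])
output += conn.send_command("write memory")
conn.disconnect()
print(output)
```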

As part of the outage, punters experienced patchy – or even no – internet connectivity and lost access to all sorts of cloud-based services. Many are blaming the LastPass outage on 512KDay, though official confirmation of this is still pending. I have been tracking reports ranging from an inability to access cloud services such as Office 365 through to more localised phenomena around the world, many of which look very much like they are 512KDay related.

As an example of the latter, while I don't yet have official confirmation from Canadian ISP Shaw, there are indications that the "mystery routing sickness" which affected its network (and which continues at time of publishing) could be related to the "512KDay" issue.

It is possible the issues I experienced with Shaw could be down to routers hitting the 512K limit. Theoretically, these routers could have hit the magic number and then been unable to route individual protocols (such as RDP, for example, although we cannot confirm this is so in Shaw's case) to the Deep Packet Inspection (DPI) systems the ISP uses to create a "slow lane" – sorry, to "enhance our internet experience". We have contacted the ISP for comment but it had not responded at the time of publication.

As the fix for such issues can range from "applying a patch or config change and rebooting a core piece of critical network infrastructure" to "buying a new widget, the demand for which has just peaked", there is every chance that 512KDay issues will continue for a few days (or even weeks) to come.

Others around the world have seen issues as well. Consider those reported by Jeff Bearer of Avere Systems, who says: "my firewall started noting packet loss between it and its upstream router. It wasn't that bad until employees started showing up for work, but then it jumped up quite a bit. We don't have any real evidence, but I did go back and forth with the ISP several times. It looks like it probably was [the 512KDay event] that caused this."
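
Spotting that sort of problem doesn't require anything exotic. A rough-and-ready loss check along the lines of the Python sketch below – which simply wraps the system ping utility, with a placeholder address standing in for the ISP-facing next hop – is enough to put a number on "packet loss between it and its upstream router".

```python
# Rough packet-loss check against an upstream hop, in the spirit of the
# monitoring described above. Wraps the system ping utility; the target
# address is a placeholder and the parsing assumes the usual Linux/macOS
# "X% packet loss" summary line.
import re
import subprocess

UPSTREAM = "203.0.113.1"  # placeholder: your ISP-facing next hop


def packet_loss(host: str, count: int = 20) -> float:
    """Return the percentage packet loss ping reports, or 100.0 on failure."""
    try:
        result = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, timeout=count * 2 + 10,
        )
    except subprocess.TimeoutExpired:
        return 100.0
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else 100.0


loss = packet_loss(UPSTREAM)
print(f"{loss:.1f}% loss to {UPSTREAM}")
if loss > 1.0:
    print("Time to go back and forth with the ISP.")
```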

Awareness

Bearer asks a critical question: "Why wasn't this in the press, like Y2K or IPv4?"

Perhaps this is the ghost of Y2K. Globally, we handled the very real issues posed by computers being unable to comprehend the passing of the millennium so well that the average punter didn't notice the few systems that didn't get updated. IPv4 address exhaustion has been a highly publicised apocalypse that has dragged on for over a decade, and the internet has yet to collapse.

512KDay is simply "yet another arbitrary limit issue" that has for years been filed away alongside the famous Y2K, IPv4 and 2038 problems. If you're interested in some of the others, Wikipedia has a brief overview of these "time formatting and storage bugs" that covers the big ones, though it doesn't list every known one.
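
The 2038 problem, for the curious, is the same species of bug: a signed 32-bit count of seconds since 1970 runs out on 19 January 2038. A couple of lines of Python show where the edge sits:

```python
# The Year 2038 limit: a signed 32-bit time_t overflows one second after this.
from datetime import datetime, timezone

print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```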

Do the media bear some of the blame? Perhaps. I have seen the 512K limit raised in many IPv4 articles over the years, but it has rarely been discussed in a major publication as an issue in its own right. Perhaps this is an example of crisis fatigue working its way into the technological sphere: as we rush from one manufactured "crisis" to another, we stop having the brain space and resources to deal with the real issues that confront us.

The finger of blame

One thing I do know is that it is the job of network administrators to know about these issues and deal with them. What wasn't in the mainstream media has certainly been in the networking trade press, in vendor documentation and beyond.

I have been contacted by hundreds of network administrators in the past 12 hours with tales of woe. The common thread is that they absolutely did raise the flag on this – and virtually all of them were told to leave the pointy-haired boss's sight immediately.

Based on the evidence so far, I absolutely do not accept the inevitable sacrifice of some junior systems administrator to the baying masses. Throwing nerds under the bus doesn't cut it. The finger of blame points squarely at the ISPs and other companies all across the internet that run their BGP routers improperly.

It's easy to make a bogeyman out of ISPs; theirs is among the most hated industries in the world, after all. It's easy for me to point the finger of blame at companies that chose not to update their infrastructure because I've spent a lifetime fighting that battle from the coalface, and it has made me a bitter and spiteful person.
