The internet just BROKE under its own weight – we explain how
Next time, big biz, listen to your network admin
The fix for Cisco devices – and possibly others – is fairly straightforward, yet internet service providers and businesses around the world chose not to address the issue in advance, and major outages followed as a result.
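For the record, the widely circulated remediation for affected Catalyst 6500/7600-class kit – a sketch based on Cisco's published guidance, so check the documentation for your exact platform and software train before applying it – boils down to re-carving the forwarding-table memory (TCAM) and rebooting:

```
! Raise the IPv4 route allocation in TCAM (value is in units
! of 1,024 entries, so 1000 allows roughly one million routes).
! Note this shrinks the space left over for IPv6 and MPLS
! entries, so size it for your own network's needs.
mls cef maximum-routes ip 1000

! The new allocation only takes effect after a reload, which
! is why "straightforward" still means a maintenance window.
```

The painful part is not the one-line config change but the reboot of a core piece of infrastructure, which is exactly the work that kept getting deferred.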
As part of the outage, punters experienced patchy – or even no – internet connectivity and lost access to all sorts of cloud-based services. Many are blaming the LastPass outage on 512KDay, though official confirmation of this is still pending. I have been tracking reports ranging from inability to access cloud services such as Office365 through to more localised phenomena from around the world, many of which look very much like they are 512KDay-related.
As an example of the latter, while I don't yet have official confirmation from Canadian ISP Shaw, there are some indications that the "mystery routing sickness" which affected its network (and which continues at time of publication) could be related to the "512KDay" issue.
It is possible the issues I experienced with Shaw could be down to routers hitting the 512K limit. Theoretically, these routers could have hit the magic number and then been unable to route individual protocols (such as RDP, for example, although we cannot confirm this is so in Shaw's case) to the Deep Packet Inspection (DPI) systems the ISP uses to create a "slow lane" – sorry, enhance our internet experience*. We have contacted the ISP for comment, but it had yet to respond at the time of publication.
As the fix for such issues can range from "applying a patch or config change and rebooting a core piece of critical network infrastructure" to "buying a new widget", demand for which has just hit a peak, there is every chance that 512KDay issues will continue for a few days (or even weeks) yet.
Others around the world have seen issues as well. Consider the issues reported by Jeff Bearer of Avere Systems who says "my firewall started noting packet loss between it and its upstream router. It wasn't that bad until employees started showing up for work, but then it jumped up quite a bit. We don't have any real evidence, but I did go back and forth with the ISP several times. It looks like it probably was [the 512KDay event] that caused this."
Bearer asks a critical question: "Why wasn't this in the press, like Y2K or IPv4?"
Perhaps this is the ghost of Y2K. Globally, we handled the very real issues posed by computers being unable to comprehend the passing of the millennium so well that the average punter didn't notice the few systems that didn't get updated. IPv4 address exhaustion has been a highly publicised apocalypse that has dragged on for over a decade, and the internet has yet to collapse.
512KDay is simply "yet another arbitrary limit issue" that has for years been filed away alongside the famous Y2K, IPv4 or 2038 problems. If you're interested in some of the others, Wikipedia has a brief overview of these "time formatting and storage bugs" that explains the big ones, but doesn't list all the known ones.
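These arbitrary limits are concrete and easy to demonstrate. Take the 2038 problem mentioned above: a signed 32-bit Unix timestamp can count at most 2^31 − 1 seconds past the epoch, and a quick sketch in Python shows exactly when that counter runs out:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t can count at most 2**31 - 1 seconds
# past the Unix epoch (1970-01-01 00:00:00 UTC).
MAX_32BIT_TIMESTAMP = 2**31 - 1  # 2,147,483,647

# The instant the counter rolls over on unpatched systems
rollover = datetime.fromtimestamp(MAX_32BIT_TIMESTAMP, tz=timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```

Just as with 512KDay, the date is known decades in advance; the only question is whether anyone budgets the maintenance to deal with it before it arrives.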
Do the media bear some of the blame? Perhaps. I have seen 512KDay issues raised in many IPv4 articles over the years, but rarely has it been discussed in a major publication as an issue in and of itself. Perhaps this is an example of crisis fatigue working its way into the technological sphere: as we rush from one manufactured "crisis" to another, we stop having brain space and resources to deal with the real issues that confront us.
The finger of blame
One thing I do know is that it is the job of network administrators to know about these issues and deal with them. What wasn't in the mainstream media has been in the networking-specific trade press, in vendor documentation and more.
I have been contacted by hundreds of network administrators in the past 12 hours with tales of woe. The common thread among them is that they absolutely did raise the flag on this, with virtually all of them being told to leave the pointy-haired boss's sight immediately.
Based on the evidence so far, I absolutely do not accept the inevitable sacrifice of some junior systems administrator to the baying masses. Throwing nerds under the bus doesn't cut it. The finger of blame points squarely at ISPs and other companies running improperly maintained BGP routers all across the internet.
It's easy to make a bogeyman out of ISPs; they're among the most hated industries in the world, after all. It's easy to point the finger of blame at companies that chose not to update their infrastructure because I've spent a lifetime fighting that battle from the coalface and it has made me a bitter and spiteful person.