What did the Romans ever do for us? Packet switching...

And the railways, Morse code. Sort of

Crisp packets

Packet switching makes better use of the shared resource. A packet of data carries with it a header that denotes its destination. The network does not predetermine the entire path the packet will take. Instead, any node that receives a packet simply routes it in a direction that brings it closer to that destination. This is known as a connectionless protocol.
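That per-hop, header-driven decision can be sketched in a few lines. This is a minimal illustration, not any real protocol: the node names, the packet layout and the `route` function are all invented for the example.

```python
# Hypothetical sketch of a connectionless hop decision: each node looks
# only at the destination field in the packet's header and picks the
# neighbour that brings it closer. All names here are illustrative.

def make_packet(destination, payload):
    """A packet is just a payload plus a header naming its destination."""
    return {"dest": destination, "payload": payload}

# Each node holds only local knowledge: a table mapping final
# destinations to the next hop on the way there.
ROUTING_TABLE = {
    "node-C": "node-B",   # traffic for C leaves via B
    "node-D": "node-B",
}

def route(packet):
    """Return the next hop for this packet -- a purely local decision."""
    return ROUTING_TABLE[packet["dest"]]

pkt = make_packet("node-C", b"hello")
print(route(pkt))  # node-B
```

No node here knows the full path; the packet is simply handed onwards, hop by hop.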

Network patchbay ports

Digital networking still needs patchbays

On a given hop the packet shares the same link as innumerable other packets, but at the next node or switch the packets go their separate ways. The packets neither know nor care how a given hop is implemented. There is a strong analogy with the use of containers for the delivery of goods: a given container may find itself on a ship, a train or a truck in the course of its journey.

Using self-routing packets, the network only has to make simple local decisions, such as “Do I route this packet this way or that way?”, on the basis that one direction gets the packet to its destination sooner than the other. In the case of a link failure, the device at the head of the failed link simply routes incoming packets onto the next-best route. In the case of congestion, some packets might also be diverted in this way. Essentially, when a packet sets off it is not known how it will reach its destination, any more than a vehicle setting off on a road journey knows whether it will meet a diversion.
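The failover behaviour described above can be shown with a ranked routing table. Again this is a toy sketch under invented names, not a real routing protocol: each destination lists next hops in order of preference, and a dead link simply pushes traffic to the next candidate.

```python
# Illustrative only: ranked next hops per destination, with failover
# when the preferred link is down. Node names are invented.

ROUTES = {"node-C": ["node-B", "node-E"]}   # best route first
LINK_UP = {"node-B": False, "node-E": True}  # the link to B has failed

def next_hop(dest):
    """Pick the best surviving link towards dest; a local decision."""
    for hop in ROUTES[dest]:
        if LINK_UP[hop]:
            return hop
    raise RuntimeError("no route to " + dest)

print(next_hop("node-C"))  # node-E, because the link to node-B is down
```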

The greatest efficiency is obtained when the packets are all exactly the same size. A data link then becomes a time-multiplex of packets that all take the same time to transmit and so can easily be separated on receipt. A data file can be broken into packets for transmission and re-assembled on receipt. That re-assembly is assisted by a further code in the header: a sequence number giving the packet's position in the file. As packets do not necessarily take the same route, they will not necessarily arrive in the same order, and the sequence number allows them to be re-ordered. Missing packets can also be detected.
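Re-ordering and gap detection both fall out of the sequence number, as a short sketch shows. The packet layout here is an assumption made for the example.

```python
# Hypothetical receiver: packets carry a "seq" field in their header.
# Sort by it to restore order, and any gap in the numbering reveals
# a missing packet.

def reassemble(packets, expected_count):
    """Re-order arrived packets by sequence number; report any missing."""
    by_seq = {p["seq"]: p["data"] for p in packets}
    missing = [n for n in range(expected_count) if n not in by_seq]
    data = b"".join(by_seq[n] for n in sorted(by_seq))
    return data, missing

# Packets 0 and 2 arrived, out of order; packet 1 took a slower route.
arrived = [{"seq": 2, "data": b"c"}, {"seq": 0, "data": b"a"}]
print(reassemble(arrived, 3))  # (b'ac', [1]) -- packet 1 is missing
```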

The action taken when a packet goes missing depends on how time-critical the message is. In many cases a re-transmission of the missing packet is enough. If the application does not allow time for re-transmission, the packets must be protected by forward error correction (FEC), which allows a lost packet to be reconstructed from the adjacent packets that did arrive. Without the ability to transmit error-free data the Internet would be a complete flop: you could forget software downloads and updates. And MPEG coding relies on error-free transmission, so no YouTube either.
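The simplest form of FEC across packets is a parity packet: XOR a group of packets together and send the result as an extra packet. If any one packet of the group is lost, XOR-ing the survivors with the parity packet rebuilds it. This is a deliberately minimal sketch; real systems use stronger codes (Reed–Solomon and the like) that can survive multiple losses.

```python
# Toy single-loss FEC: one parity packet protects a group of
# equal-sized packets. XOR is its own inverse, which is what makes
# recovery work.

def parity(packets):
    """XOR all packets together, byte by byte, to form a parity packet."""
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

def recover(survivors, parity_pkt):
    """Rebuild the single lost packet from the survivors plus parity."""
    return parity(survivors + [parity_pkt])

pkts = [b"abcd", b"efgh", b"ijkl"]
par = parity(pkts)                       # sent alongside the data
lost = pkts[1]                           # suppose packet 1 never arrives
print(recover([pkts[0], pkts[2]], par) == lost)  # True
```

The cost is one extra packet per group; the benefit is that no time is spent waiting for a re-transmission.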

Optical profusion

One of the most significant developments in optical communications was the single-mode fibre. The illustration below shows that earlier fibres worked by bouncing light pulses internally along the fibre. The reflections were not well controlled, with the result that some light travelled nearly parallel to the axis of the fibre and arrived soonest, whereas some light reflected many times on an oblique path and arrived later. This spread of propagation times, due to multiple modes of propagation, caused the sharp edges between transmitted bits to become indistinct at the receiver, which set a limit on the length of the link.
Optical fibre propagation

Early optical fibres allowed the light to bounce around inside (a), so that the effective distance travelled was not constant. This had the effect of smearing sharp edges in the signal. Single-mode fibres (b) overcome that problem, as the light can only take one route.
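The smearing can be put into rough numbers. A ray bouncing along at an angle θ to the axis travels 1/cos θ times further than an axial ray, and the difference in arrival time is the pulse spread. The figures below (an 8° ray, a core index of 1.5, a 1 km run) are assumptions chosen purely to illustrate the scale of the effect.

```python
import math

# Back-of-envelope modal dispersion estimate. All numbers are
# illustrative assumptions, not taken from the article.
n = 1.5                  # refractive index of the glass core
c = 3.0e8                # speed of light in vacuum, m/s
length = 1000.0          # 1 km of fibre
theta = math.radians(8)  # oblique ray's angle to the axis

t_axial = length * n / c                        # straight-down-the-axis ray
t_oblique = (length / math.cos(theta)) * n / c  # bouncing ray: longer path
spread_ns = (t_oblique - t_axial) * 1e9

print(f"pulse spread over 1 km: {spread_ns:.0f} ns")  # roughly 49 ns
```

Tens of nanoseconds of smear per kilometre quickly swallows the gaps between bits at high data rates, which is exactly the link-length limit the text describes.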

In a single-mode fibre, the diameter of the glass core is so small in comparison to the wavelength of the light that the only propagation mode possible is a plane wavefront travelling parallel to the axis. With only one propagation mode, the smearing of pulse edges is dramatically reduced. Clearly this represents the ultimate communication medium, and it is difficult to see what would improve on it. The final flourish is the use of multiple light sources of different wavelengths sharing the same fibre. This is known as WDM, or wavelength division multiplexing.
