Original URL: https://www.theregister.com/2013/07/25/peeling_the_skin_off_networking_tech/

What did the Romans ever do for us? Packet switching...

And the railways, Morse code. Sort of

By John Watkinson

Posted in Networks, 25th July 2013 09:04 GMT

Feature These days, it would be hard to find anyone who doesn't take networking for granted given the ubiquity of the Internet. Yet without some clever coding and data management techniques the Internet would crawl, if it worked at all. Digital video guru and IT author John Watkinson examines the various applications for networks and gets into the noughts and ones of what we have today and what the future may bring.

Networks came into being for the wielding of political power. The road network created by the Roman Empire was probably the first example that showed any scientific input. The roads were dead straight wherever possible and it was found that a horse pulling a light carriage was faster than one carrying a rider. A map of Great Britain still shows political main roads radiating from London and a scarcity of circumferential routes.

Morse code relies on the statistics of English. The most common letters are E and T, denoted by a single dot and a single dash respectively. Compare that with J, Q, Y and Z, which are uncommon and get long patterns. Sending languages other than English in Morse code may be unrewarding.

The next generation of network was the railway, whose requirements for signalling and timekeeping were met by emerging telegraph technology. That led to the changeover to country-wide use of Greenwich Mean Time rather than local solar time from sundials. The telegraph also led to an early example of compression. As the adjacent image shows, Morse code is a variable-length binary code in which the most common letters in English text are allocated the shortest symbols. The principle is still in use in, for example, MPEG coding.

It is hard to give a brief definition of a network. One of the identifying features is that it is a shared and distributed resource. Sharing and distribution has benefits and drawbacks and the success of networks illustrates that the benefits by and large outweigh the drawbacks. Etymologically networks share the same roots as fishing nets. The catch from fishing nets is shared and a broken strand does not stop the whole net working.

Suppose that two people need to communicate occasionally. They can do that with a single piece of wire between them which lies unused a lot of the time. That this is not a scalable solution can be seen by considering instead 16 people. Each one must radiate 15 lengths of wire that lie unused even more of the time. The traditional telephone networks overcame that problem by using less wire, shared and used more often. The cost for each person to be connected to a shared network is clearly less.
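A back-of-the-envelope sketch in Python makes the scaling concrete: a fully meshed arrangement needs n(n-1)/2 dedicated wires, whereas a shared exchange needs roughly one line per subscriber. The numbers are purely illustrative.

```python
# Sketch: how point-to-point wiring scales against a shared exchange.
# A full mesh of n subscribers needs n*(n-1)/2 separate wires, each idle
# most of the time; a shared network needs roughly one line per subscriber.

def mesh_links(n: int) -> int:
    """Number of dedicated wires if every pair is directly connected."""
    return n * (n - 1) // 2

def shared_links(n: int) -> int:
    """Wires needed if everyone connects once to a shared exchange."""
    return n

for n in (2, 16, 100, 10_000):
    print(f"{n:>6} subscribers: mesh={mesh_links(n):>10}  shared={shared_links(n):>6}")
```

For 16 people that is 120 wires against 16, and the gap only widens from there.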

BT Network Records database

It's a wired world: BT Network Records is a PDF database showing every access point in the country
Click for a larger image

The downside is that the shared system assumes diversity, which is the distribution of demand. It could not necessarily allow eight simultaneous conversations. The upside is that if the network was intelligently laid out, it would not be crippled by single-point failures because there could be more than one connection between two points; so-called redundancy or fault tolerance.

Different resources can be shared in a network. In the traditional telephone network it was communication, whereas in an electricity system it is power. The UK National Grid is a true shared and distributed network with redundancy that comfortably pre-dates all IT networks. It was put to the test during WWII and contributed to the country’s ability to function despite heavy bombing.

In today’s different world that same grid continues to work well when supplemented by distributed inputs from photo-voltaics and wind power. It is hardly surprising that the Internet has its roots in networks developed for the US military that were also required literally to be bomb-proof. But then the whole of IT has its roots in military requirements, just as aviation does.

GEC PABX 3 switchboard and operator

Network switching the old fashioned way
Source: BritishTelephones.com

Whilst the traditional telephone used analogue signals, the development of digital technology led to methods to send data over communication channels. The hardware, or physical layer, could be copper, optical fibre or a microwave link. It was found that optical fibres have a number of advantages for long distance data transmission. They neither cause nor suffer from electrical interference and can thus be run alongside railway tracks and suspended with electricity supply cables.

Going the extra mile?

Arqiva Aerial Mast site 36223 on Cleeve Hill

Will the last mile be radio in future?

In countries where traditional telephony was in extensive use, there is always a large infrastructure of copper wire between exchanges and subscribers, the so-called last mile. The replacement of all that wire would be a daunting and expensive task. However, the distances involved are usually quite short. The telephone wires to the subscriber were intended for speech frequencies and are not at all optimal for high bit-rate data.

Nevertheless, using sophisticated systems such as trellis coding, the traditional analogue copper infrastructure can be re-used for data, as is explained in the box section on the next page. In developing countries, the absence of the traditional telephone network has not been a great drawback. In many places the copper wire stage has been completely leap-frogged and the last mile subscriber link is via radio.

The feature that distinguishes IT networks from the traditional telephone network is not just that one is digital. Analogue phones must use circuit switching whereas IT uses packet switching. When a traditional dial-up phone call was made, a continuous, exclusive analogue electrical circuit was created between the two parties by switches, relays and uniselectors controlled by the dialling procedure. The entire information route was created by the network and denied to other parties for the duration of the call.

Talking digital: ADSL and Trellis Coding

Analogue telephone wires, intended for speech, display impedance mismatches and develop standing waves when used at high frequencies, such that their frequency response and noise level are highly irregular. In ADSL, the modulation scheme divides the spectrum up into hundreds of channels, or bins, only 4kHz wide.

The lowest channels are not used, so that the traditional analogue speech still works normally. Some channels communicate from exchange to subscriber, while a smaller number work the other way, making the system asymmetrical and putting the A in ADSL.

Each channel is independently measured for data integrity and if it contains a frequency that is suffering from standing waves or cancellation, then its data rate will have to be reduced, or it may be abandoned altogether, whereas other channels can run at top speed. The ADSL system adjusts itself individually to the characteristics of each line.
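The per-channel adaptation can be sketched in a few lines of Python. The SNR figures and the bits-per-decibel rule below are invented for illustration; the real standard’s bit-loading rules are rather more involved.

```python
# Sketch of per-bin "bit loading": each 4kHz bin is assigned as many bits
# per symbol as its measured signal-to-noise ratio will support, and bins
# wrecked by standing waves or noise carry little or nothing.
# The SNR figures and the 3dB-per-bit rule are made up for illustration.

def bits_for_bin(snr_db: float, margin_db: float = 9.0) -> int:
    """Crude capacity rule: roughly 1 bit per 3 dB of usable SNR, capped at 15."""
    usable = snr_db - margin_db
    return max(0, min(15, int(usable // 3)))

measured_snr_db = [38.0, 35.5, 12.0, 41.2, 7.5, 29.0]   # hypothetical bins

loading = [bits_for_bin(snr) for snr in measured_snr_db]
print("bits per bin:", loading)                  # poor bins get few or no bits
print("total bits per symbol period:", sum(loading))
```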

Unlike the processing in a computer, in which the data bits and the electrical signalling are both binary, modems send the binary data using non-binary signalling. Instead of sending bits having two states, they send symbols having multiple states. For example, if a symbol can have sixteen states, then one symbol can convey all combinations of four bits.

The symbol rate is limited by the available bandwidth, so by getting more bits into each symbol, the bit rate can be raised. Clearly, if there are sixteen states, the difference between them is more readily confused by noise, but as telephone lines were specified for analogue speech, their noise performance is over-specified for binary.
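As a minimal sketch of the idea (not real modem code), here is what packing four bits into one sixteen-level symbol looks like; carrying four bits per symbol means the bit rate is four times the symbol rate for the same bandwidth.

```python
# Sketch: packing bits into multi-level symbols. With 16 possible levels,
# one symbol carries log2(16) = 4 bits, so the bit rate is four times the
# symbol rate in the same bandwidth.

def bits_to_symbols(bits: str, bits_per_symbol: int = 4) -> list[int]:
    """Group the bit string and map each group to a level 0..15."""
    return [int(bits[i:i + bits_per_symbol], 2)
            for i in range(0, len(bits), bits_per_symbol)]

def symbols_to_bits(levels: list[int], bits_per_symbol: int = 4) -> str:
    """Map each level back to its bit group."""
    return "".join(format(level, f"0{bits_per_symbol}b") for level in levels)

data = "1011000111001010"            # 16 bits
levels = bits_to_symbols(data)       # -> [11, 1, 12, 10]: four symbols
assert symbols_to_bits(levels) == data
print(levels)
```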

Trellis code

Here is a set of four symbols, each one of which has sixteen states. One route through the trellis for a particular 16-bit pattern is shown. Trellis coding works by playing with the different combinations of routes.

If the information rate to be sent is thought of as being like a pat of butter whose width is the bandwidth and whose height is the number of bits per symbol, then if you squeeze it in on one axis, it has to get bigger on the other. An analogue telephone line has limited bandwidth but good noise performance, so you squeeze the bandwidth and use multi-level signalling.

Imagine four symbols each having sixteen levels or states, such that each symbol specifies four bits for a total of sixteen bits. The diagram shows that the levels vertically and the symbols horizontally form a trellis. With 16 bits carried, there are 65,536 different routes through the trellis, one of which is shown. However, if we used the same trellis structure but only sent 14 bits, then only 16,384 different routes would be needed and the remainder would be invalid.

So if noise pushed the signal away from the correct level in one of the symbols, we would detect an invalid route through the trellis and therefore detect the error. If we were smart we might be able to figure out which valid route was the closest to the invalid one and thereby correct it. This means the apparent noise performance of our channel has improved.

Although we lost a couple of bits in the process, trellis coding doesn’t sacrifice any data capacity, because the improved noise immunity allows us to use more levels, so we win more than we lose. Clearly there is some clever processing going on in an ADSL modem (have you noticed they get quite hot?) and it’s a fundamental enabling technology. Until that clever stuff could be implemented at low cost in LSI chips, broadband would stumble at the last mile.
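The "fewer valid routes" idea can be illustrated with a toy Python sketch. This is emphatically not a real trellis code: two crude parity bits stand in for the redundancy, and the membership test stands in for the Viterbi decoding a real modem would use, but it shows how restricting the set of routes makes a noise-induced level error detectable.

```python
from itertools import product

# Toy illustration of the "valid routes" idea, NOT a real trellis code:
# 14 data bits plus 2 check bits give 16,384 valid 4-symbol sequences out
# of the 65,536 possible ones. Noise that shifts a level usually lands on
# an invalid sequence, which is how the error is detected. Real modems use
# a convolutional code and Viterbi decoding so that the nearest valid
# route is almost always the transmitted one.

def encode(data_bits: str) -> list[int]:
    """14 data bits + 2 parity check bits -> four 16-level symbols."""
    p1 = str(sum(map(int, data_bits[:7])) % 2)    # parity of first half
    p2 = str(sum(map(int, data_bits[7:])) % 2)    # parity of second half
    word = data_bits + p1 + p2                    # 16 bits in total
    return [int(word[i:i + 4], 2) for i in range(0, 16, 4)]

# Every route reachable with 14 data bits: 16,384 of the 65,536 possibilities.
VALID = {tuple(encode("".join(b))) for b in product("01", repeat=14)}

data = "10110001110010"
sent = encode(data)                # [11, 1, 12, 10]
noisy = list(sent)
noisy[2] += 1                      # noise nudges one symbol by one level

print(tuple(sent) in VALID)        # True  - a legitimate route
print(tuple(noisy) in VALID)       # False - the error is detected
```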

Crisp packets

Packet switching makes better use of the shared resource. A packet of data carries with it a header that denotes its destination. The network does not predetermine the entire path the packet will take. Instead any node in the network that receives a packet simply routes the packet in a direction that gets it closer to that destination. This is known as a connectionless protocol.

Network patchbay ports

Digital networking still needs patchbays

In a given hop the packet would use the same link as innumerable other packets, but at the next node or switch the packets would go their separate ways. The packets neither know nor care how a given hop is implemented. There is a strong analogy with the use of containers for delivery of goods. A given container may find itself on a ship, a train or a truck in the course of its journey.

Using self-routing packets, the network only has to make simple local decisions, such as “Do I route this packet this way or that way?” on the basis that one direction gets the packet to its destination sooner than another. In the case of a link failure the device at the head of the failed link would simply route incoming packets onto the next best route. In the case of link congestion some packets might also be so diverted. Essentially when a packet sets off, it is not known how it is going to get to its destination, any more than a vehicle setting off on a road journey knows if it will meet a diversion.
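A single node’s decision can be sketched like this in Python. The node names, costs and failure set are invented for illustration; a real router maintains its table with routing protocols rather than hard-coded dictionaries.

```python
# Sketch of hop-by-hop, connectionless forwarding: each node only decides
# which neighbour gets the packet next, based on which neighbour is
# currently believed to be closer to the destination.
# Node names, costs and the failed-link set below are hypothetical.

ROUTES = {
    # destination: [(next_hop, cost), ...] ordered best-first
    "london":  [("node_b", 2), ("node_c", 5)],
    "glasgow": [("node_c", 3), ("node_b", 7)],
}

FAILED_LINKS = {"node_b"}          # e.g. a link reported as down

def forward(packet: dict) -> str:
    """Pick the cheapest working next hop for this packet's destination."""
    for next_hop, cost in ROUTES[packet["dst"]]:
        if next_hop not in FAILED_LINKS:
            return next_hop
    raise RuntimeError("no route to destination")

packet = {"dst": "london", "seq": 17, "payload": b"..."}
print(forward(packet))             # node_b is down, so node_c is chosen
```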

The greatest efficiency is obtained when the packets have exactly the same size. A data link then becomes a time-multiplex of packets which all take the same time to transmit and so can easily be separated on receipt. A data file can be broken into packets for transmission and re-assembled on receipt. That re-assembly is assisted by a further code in the header that is a contiguous count of the packet number in the file. As packets do not necessarily take the same route they will not necessarily arrive in the same order and the sequence code allows them to be re-ordered. Missing packets can also be detected.
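Here is a minimal sketch, with an invented header layout, of breaking data into equal-sized packets carrying sequence numbers and putting them back together when they arrive out of order.

```python
import random

# Sketch: breaking a file into equal-sized packets with a sequence number
# in each header, then reassembling them even if they arrive out of order.
# The header layout and payload size are invented purely for illustration.

PAYLOAD_SIZE = 8                                  # tiny, for readability

def packetise(data: bytes) -> list[dict]:
    """Split data into fixed-size payloads, each tagged with its position."""
    return [{"seq": i, "payload": data[off:off + PAYLOAD_SIZE]}
            for i, off in enumerate(range(0, len(data), PAYLOAD_SIZE))]

def reassemble(packets: list[dict], expected: int) -> bytes:
    """Re-order by sequence number and report any gaps."""
    received = {p["seq"]: p["payload"] for p in packets}
    missing = [i for i in range(expected) if i not in received]
    if missing:
        raise ValueError(f"packets {missing} lost - request re-transmit")
    return b"".join(received[i] for i in range(expected))

message = b"Packets may arrive in any order at all."
packets = packetise(message)
random.shuffle(packets)                           # simulate varied routes
print(reassemble(packets, len(packets)) == message)   # True
```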

The action to be taken in the case of a missing packet depends on how time-critical the message is. In many cases a re-transmit of the missing packet may be enough. If the application does not allow time for re-transmission then the packets must be protected by forward error correction (FEC) that allows a lost packet to be re-constructed from the adjacent packets that did arrive. Without the ability to transmit error-free data the Internet would be a complete flop. You could forget software downloads and updates. And MPEG coding relies on error-free transmission, so no YouTube either.
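The simplest form of forward error correction can be sketched with an XOR parity packet: one extra packet per group lets any single lost packet be rebuilt from the survivors without waiting for a re-transmit. Broadcast systems use far stronger codes, such as Reed-Solomon; this only shows the principle.

```python
from functools import reduce

# Sketch of the simplest forward error correction: one XOR parity packet
# per group lets any single lost packet be rebuilt without re-transmission.
# Packet contents are dummies; all packets in a group are the same size.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

group = [b"pkt-0000", b"pkt-0001", b"pkt-0002", b"pkt-0003"]
parity = reduce(xor_bytes, group)            # sent alongside the group

lost_index = 2                               # packet 2 never arrives
survivors = [p for i, p in enumerate(group) if i != lost_index]
rebuilt = reduce(xor_bytes, survivors + [parity])

print(rebuilt == group[lost_index])          # True - recovered without delay
```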

Optical profusion

One of the most significant developments in optical communications was the single mode fibre. The illustration below shows that earlier fibres worked by bouncing the light pulses internally within the fibre. The reflections were not well controlled, with the result that some light travelled nearly parallel to the axis of the fibre and arrived soonest, whereas some light reflected many times on an oblique path and arrived later. This spread of propagation times due to multiple modes of propagation led to sharp transmitted edges between bits becoming indistinct at the receiver, which set a limit on the length of the link.
Optical fibre propagation

Early optical fibres allowed the light to bounce around inside (a) so that the effective distance travelled was not constant. This had the effect of smearing sharp edges in the signal. Single mode fibres (b) overcome that problem as the light can only take one route

In single mode fibre, the diameter of the glass fibre is so small in comparison to the wavelength of the light that the only propagation mode that can take place is that of a plane wave front travelling parallel to the axis. With only one propagation mode, the smearing of pulse edges is dramatically reduced. Clearly this represents the ultimate communication medium and it is difficult to see what would improve on it. The final flourish is the use of multiple light sources having different wavelengths sharing the same fibre. This is known as WDM or wavelength division multiplexing.

Reading the signals

It did not take long for the backbones of traditional telephone networks to be replaced with packet switching. Analogue speech would be converted to digital at the exchange, delivered by data packets to the destination exchange and converted back to analogue again to be compatible with the existing telephones. Adequate intelligibility of speech can be obtained with simple sampling at 64 kilobits/second (8-bit samples taken 8,000 times a second) and that bit rate became the basic unit of telco data streams. ISDN offered multiples of that bit rate for data transmission.

Packet multiplexing is the most efficient way of delivering multiple types of data down common channels and, as a result, it is not going to be superseded, nor does it need to be. The principle will also be found in digital TV and radio transmissions. One does not simply tune to a digital TV transmission in the traditional sense. The RF section of a set top box recreates a bit stream, known as a Transport Stream, which is a multiplex of identically sized data packets. That multiplex carries the video, audio and programme guides for several conventional TV programmes.

SMPTE 304M connector

SMPTE 304M interface for broadcast cameras uses fibre optic cable to cover longer distances than copper allows

When the viewer chooses a TV channel, the STB has to tune to the correct multiplex, but then it has to consult the metadata to establish the packet addresses of the video and the audio in the selected language. Those packets are then selected from the Transport Stream. Audio-visual data packets need to be displayed with a given time base and with synchronisation between sound and picture. This is assisted by additional mechanisms that generic networks don’t have, such as the recreation of a stable reference clock in the STB and the transmission, along with the packets, of time stamps that denote when they need to be presented to the viewer.
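Selecting a programme’s packets boils down to filtering on packet identifiers. The sketch below assumes the standard Transport Stream packet layout (188 bytes, sync byte 0x47, 13-bit PID in the next two bytes); the PID values and the packets themselves are dummies for illustration.

```python
# Sketch of how a set-top box pulls one programme's packets out of a
# Transport Stream: every packet is 188 bytes, starts with the sync byte
# 0x47, and carries a 13-bit packet identifier (PID). Choosing a channel
# amounts to keeping only the PIDs listed for its video and audio.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def pid_of(packet: bytes) -> int:
    """Extract the 13-bit PID from a transport stream packet header."""
    return ((packet[1] & 0x1F) << 8) | packet[2]

def filter_pids(stream: bytes, wanted: set[int]) -> list[bytes]:
    """Keep only the packets whose PID belongs to the selected programme."""
    out = []
    for off in range(0, len(stream), TS_PACKET_SIZE):
        pkt = stream[off:off + TS_PACKET_SIZE]
        if len(pkt) == TS_PACKET_SIZE and pkt[0] == SYNC_BYTE and pid_of(pkt) in wanted:
            out.append(pkt)
    return out

def dummy_packet(pid: int) -> bytes:
    """Build a padded dummy packet carrying the given PID."""
    header = bytes([SYNC_BYTE, (pid >> 8) & 0x1F, pid & 0xFF, 0x10])
    return header + bytes(TS_PACKET_SIZE - len(header))

# Hypothetical PIDs: 0x100 video, 0x101 English audio, 0x200 another service
stream = b"".join(dummy_packet(p) for p in (0x100, 0x200, 0x101, 0x100))
selected = filter_pids(stream, wanted={0x100, 0x101})
print([hex(pid_of(p)) for p in selected])    # ['0x100', '0x101', '0x100']
```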

One of the characteristics of packet switched networks is that it is only the total bandwidth of a link that is known. Clearly, as more transactions are required, less bandwidth is available to each and so transmission of an individual transaction will slow down. Because networks are shared resources they operate on usage statistics, and when those statistics prevail they work well, but like any shared resource, if there is a deviation from the usual statistics, they cannot be expected to work normally. In the same way, roads that normally flow freely become choked with traffic on a bank holiday, and Morse code isn’t very efficient if it is used for Polish.

And what of the future? The bandwidth available using optical fibres is without real limit, so the backbones of data networks can grow indefinitely. However there is no doubting the growth of personal devices such as iPhones and tablets where the last mile of the network is achieved by radio link. The radio spectrum is, however, not infinite and space needs to be found for these communications.

It is naïve to think that digital television and sound radio broadcasts were about quality. These technologies were adopted so that compression could be used to reduce the radio bandwidth needed, thus freeing up bandwidth for other services. Heavily compressed DAB is the sonic equivalent of Michelangelo's David made out of Lego and the early advertising claiming CD quality had to be pulled because it was a pack of lies.

Virgin Media dish array

Mix and match: Virgin Media dish array takes satellite programming and relays it down optical cable

Television screens get larger and larger in inverse proportion to the programme quality and no-one still pretends they are portable. It is legitimate to question whether fixed devices warrant the use of terrestrial radio transmission when a cable or fibre can be used.

But the problem may be temporary. In the light of mass access to networks which are bidirectional and which offer a limitless information resource and the freedom to comment on it, legacy media such as newspapers, television and sound radio are starting to appear distinctly propagandist in comparison. It’s only on an alternative medium such as this one that I am allowed to say so. Networks have gone from wielding political power to containing it. ®

John Watkinson is a member of the British Computer Society and a Chartered Information Technology Professional. He is also an international consultant on digital imaging and the author of numerous books regarded as industry bibles.