
Hackers' Paradise: The rise of soft options and the demise of hard choices

How it all went wrong for computer security

Opinion John Watkinson argues that the ubiquity of hacking and malware illustrates a failure of today’s computer architectures to support sufficient security. The mechanisms needed to implement a hack-proof computer have been available for decades but, self-evidently, they are not being properly applied.

The increasing power and low cost of computers mean they are being used more and more widely, and put to uses that are becoming increasingly critical. By critical, I mean that the result of a failure could be far more than inconvenience.

Recently we became aware that hackers had found it was possible to open the doors of a Tesla car. But that’s not particularly exceptional: vulnerability is becoming the norm. Self-driving cars are with us too, and who is to blame if one of these is involved in a collision? What if it transpires that it was hacked?

It is not necessary to spell out possible scenarios in which insecure computers can allow catastrophes to occur. It is, in my view, only a matter of time before something really bad happens as a result of hacking or some other cause of IT unreliability. Unfortunately, it seems that it is only after such an event that something gets done. Until then complacency seems to rule.

In the classic von Neumann computer architecture, there is an address space, most of which is used to address memory, with the remainder used to address peripherals. The salient characteristic of the von Neumann machine is that the memory doesn’t care what is stored in it.

John von Neumann and the IAS computer

John von Neumann was an inspiration but there was no room for sloppy programming in his designs
Photo: Alan Richards, courtesy Shelby White & Leon Levy Archives Center, Institute for Advanced Study (IAS)

Memory could contain instructions, stacks, data to be processed or results. Whilst this gives maximum flexibility, it also makes the system vulnerable to inappropriate programming. One incorrect address could mean writing data on top of the instructions or the stack or sending random commands to peripherals.
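To make that concrete, here is a deliberately buggy C sketch (invented for illustration; the struct record type and its fields are not from any real system). The two fields are simply neighbouring bytes of memory, so one wrong length in a copy silently tramples the field next door. The behaviour is formally undefined, which is exactly the problem: the memory itself offers no protection.

    #include <stdio.h>
    #include <string.h>

    /* Two adjacent fields in one structure: the memory holding them does not
       distinguish user data from anything else. */
    struct record {
        char name[8];        /* intended to hold a short string     */
        long account_limit;  /* sits immediately after it in memory */
    };

    int main(void)
    {
        struct record r = { "", 100 };

        /* Deliberate bug: copying 16 bytes into an 8-byte field writes past
           the buffer and corrupts the neighbouring field. */
        memcpy(r.name, "AAAAAAAAAAAAAAAA", 16);

        printf("account_limit is now %ld\n", r.account_limit);
        return 0;
    }

On a typical compiler the program prints a garbage value for account_limit rather than 100; nothing in this hardware model objects to the overwrite.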

Before the Orwellian term “Information Technology” had been dreamed up by someone who probably called a plumber to deal with an overflow, we had computing. Computers were relatively expensive and had to be seen as a shared resource. In that environment it was even less acceptable for a bug in one user’s code to bring the whole system down, so something was built into the system to prevent it: memory management.

One of the functions of memory management was effectively to isolate users or processes from one another and from the operating system. It didn’t matter whether the bug was due to an honest mistake, incompetence or malice, it would not compromise the whole system. The proliferation of hacking suggests that we are today forced to assume that malice will take place, rather than being surprised or disappointed after the event.

The cost of a CPU is largely a function of the word length, so there is pressure to keep the word length down to the precision needed for most jobs. On the odd occasion where this was not enough, double precision could be used, in which the processor would take two swipes at a longer data word residing in a pair of memory locations.
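As a rough sketch of what taking two swipes means, the following C fragment models a hypothetical 16-bit machine adding two 32-bit numbers, each held as a pair of 16-bit memory words. The function name add32 and the low-word/high-word layout are assumptions made for illustration, not any particular instruction set.

    #include <stdint.h>
    #include <stdio.h>

    /* Add two 32-bit values, each stored as { low word, high word }. */
    static void add32(const uint16_t a[2], const uint16_t b[2], uint16_t sum[2])
    {
        uint32_t low = (uint32_t)a[0] + b[0];      /* first swipe: low halves    */
        uint16_t carry = (uint16_t)(low >> 16);    /* did the low half overflow? */

        sum[0] = (uint16_t)low;
        sum[1] = (uint16_t)(a[1] + b[1] + carry);  /* second swipe: high halves  */
    }

    int main(void)
    {
        uint16_t a[2] = { 0xFFFF, 0x0001 };  /* 0x0001FFFF = 131071 */
        uint16_t b[2] = { 0x0003, 0x0000 };  /* 3                   */
        uint16_t s[2];

        add32(a, b, s);
        printf("0x%04X%04X\n", s[1], s[0]);  /* prints 0x00020002 = 131074 */
        return 0;
    }

The cost is visible: every double-precision operation needs two memory accesses and an explicit carry, which is why designers kept the word length down whenever single precision would do.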

The processor word length also limited the address range the processor could directly generate. As memory costs began to fall, it became possible to afford more memory than the CPU could directly address. This would be an ongoing phenomenon and another function of memory management would be to expand the address range.

Fig.1, below, shows a minimal memory management system. The memory management unit (MMU) fits between the address bus of the CPU and the main memory bus. The MMU has some registers that are in peripheral address space. The operating system can write these registers with address offsets, also known as relocation constants.

Minimal memory management system

Fig.1: A simplified memory management system – the program counter in the CPU no longer addresses memory directly, but produces a virtual address which enters the MMU. A relocation constant is added to the virtual address to create the physical address in memory

The relocation constant is added to the address coming out of the CPU, known as the virtual address, in order to create the actual RAM address, known as the physical address. The process of producing the physical address is called mapping. The term comes from cartography, where the difficulty of representing a spherical planet on flat paper inevitably caused distortion, so things weren't where you thought they were.
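The mapping in Fig.1 can be sketched in a few lines of C. This is a toy model rather than real MMU hardware: the mmu_t type and mmu_map function are invented for illustration, and the CPU is assumed to produce 16-bit virtual addresses while the relocation constant is wider. Two processes with different relocation constants then land in different regions of a physical memory larger than either could address on its own.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of the MMU in Fig.1: the operating system writes a relocation
       constant into an MMU register; every virtual address from the CPU has
       that constant added to it to form the physical address. */
    typedef struct {
        uint32_t relocation;   /* written by the operating system */
    } mmu_t;

    /* Map a 16-bit virtual address to a wider physical address. */
    static uint32_t mmu_map(const mmu_t *mmu, uint16_t virtual_addr)
    {
        return mmu->relocation + virtual_addr;
    }

    int main(void)
    {
        /* Two processes, each believing its code starts at virtual address 0. */
        mmu_t process_a = { .relocation = 0x020000 };
        mmu_t process_b = { .relocation = 0x048000 };

        printf("A: virtual 0x0100 -> physical 0x%06X\n",
               (unsigned)mmu_map(&process_a, 0x0100));
        printf("B: virtual 0x0100 -> physical 0x%06X\n",
               (unsigned)mmu_map(&process_b, 0x0100));
        return 0;
    }

Because the relocation constants differ, the same virtual address 0x0100 maps to 0x020100 for one process and 0x048100 for the other: each runs in its own region of physical memory without knowing where that region actually is.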
