Hackers' Paradise: The rise of soft options and the demise of hard choices

How it all went wrong for computer security

Opinion John Watkinson argues that the ubiquity of hacking and malware illustrates a failure of today’s computer architectures to support sufficient security. The mechanisms needed to implement a hack-proof computer have been available for decades but, self-evidently, they are not being properly applied.

The increasing power and falling cost of computers mean they are being used ever more widely, and put to increasingly critical uses. By critical, I mean that the consequence of a failure could be far more than inconvenience.

Recently we became aware that hackers had found it was possible to open the doors of a Tesla car. But that’s not particularly exceptional: vulnerability is becoming the norm. Self-driving cars are with us too, and who is to blame if one of these is involved in a collision? What if it transpires that it was hacked?

It is not necessary to spell out possible scenarios in which insecure computers can allow catastrophes to occur. It is, in my view, only a matter of time before something really bad happens as a result of hacking or some other cause of IT unreliability. Unfortunately, it seems that it is only after such an event that something gets done. Until then complacency seems to rule.

In the classic von Neumann computer architecture, there is an address space, most of which is used to address memory, with the remainder used to address peripherals. The salient characteristic of the von Neumann machine is that the memory doesn’t care what is stored in it.

John von Neumann and the IAS computer

John von Neumann was an inspiration but there was no room for sloppy programming in his designs
Photo: Alan Richards, courtesy Shelby White & Leon Levy Archives Center, Institute for Advanced Study (IAS)

Memory could contain instructions, stacks, data to be processed or results. Whilst this gives maximum flexibility, it also makes the system vulnerable to inappropriate programming. One incorrect address could mean writing data on top of the instructions or the stack or sending random commands to peripherals.
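To make that concrete, here is a toy sketch in C (no particular machine; the addresses and byte values are invented) of a flat von Neumann memory: one array holds both the "program" and the data, and a single sign error in an address calculation quietly overwrites an instruction, because the memory has no way of knowing it shouldn't.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy flat "von Neumann" address space: one array holds everything,
 * and the memory itself has no idea which bytes are instructions,
 * stack or data. All names and addresses here are made up.         */
#define MEM_SIZE  256
#define CODE_BASE 0x00            /* pretend machine code lives here */
#define DATA_BASE 0x80            /* a data buffer starts here       */

static uint8_t memory[MEM_SIZE];

int main(void)
{
    /* Pretend these bytes are the program's instructions. */
    const uint8_t program[] = { 0xA9, 0x01, 0x8D, 0x00, 0x02 };
    memcpy(&memory[CODE_BASE], program, sizeof program);

    /* A buggy address calculation: the offset should be small and
     * positive, but a sign error sends the store into the code region. */
    int offset = -0x7E;                     /* intended: +0x02 */
    memory[DATA_BASE + offset] = 0xFF;      /* silently clobbers an "instruction" */

    printf("instruction byte at 0x02 is now 0x%02X (was 0x8D)\n",
           memory[CODE_BASE + 2]);
    return 0;
}
```

Nothing in the hardware complains; the program simply stops being the program that was loaded, which is exactly the kind of failure memory management exists to contain.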

Before the Orwellian term “Information Technology” had been dreamed up by someone who probably called a plumber to deal with an overflow, we had computing. Computers were relatively expensive and had to be seen as a shared resource. In that case it was even less acceptable for a bug in one user’s code to bring the whole system down. Something was included in the system to prevent that.

One of the functions of memory management was effectively to isolate users or processes from one another and from the operating system. It didn’t matter whether the bug was due to an honest mistake, incompetence or malice, it would not compromise the whole system. The proliferation of hacking suggests that we are today forced to assume that malice will take place, rather than being surprised or disappointed after the event.

The cost of a CPU is largely a function of the word length, so there is pressure to keep the word length down to the precision needed for most jobs. On the odd occasion where that is not enough, double precision can be used, in which the processor takes two swipes at a longer data word residing in a pair of memory locations.
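For readers who haven't met the trick, here is a minimal sketch (a hypothetical 16-bit machine, not any specific CPU) of what those two swipes look like: a 32-bit value occupies a pair of 16-bit words, the low halves are added first, and the carry is folded into the addition of the high halves.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical 16-bit machine: a 32-bit "double precision" value
 * occupies a pair of 16-bit memory words, low word first.        */
typedef struct { uint16_t lo, hi; } dword_t;

/* Add two double-precision values in two passes, propagating the
 * carry from the low-word addition into the high-word addition.  */
static dword_t dadd(dword_t a, dword_t b)
{
    dword_t r;
    uint32_t lo = (uint32_t)a.lo + b.lo;        /* first swipe  */
    r.lo = (uint16_t)lo;
    uint16_t carry = (uint16_t)(lo >> 16);
    r.hi = (uint16_t)(a.hi + b.hi + carry);     /* second swipe */
    return r;
}

int main(void)
{
    dword_t a = { 0xFFFF, 0x0001 };   /* represents 0x0001FFFF */
    dword_t b = { 0x0003, 0x0000 };   /* represents 0x00000003 */
    dword_t r = dadd(a, b);
    printf("result = 0x%04X%04X\n", r.hi, r.lo);  /* prints 0x00020002 */
    return 0;
}
```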

The processor word length also limited the address range the processor could directly generate. As memory costs began to fall, it became possible to afford more memory than the CPU could directly address. This would be an ongoing phenomenon and another function of memory management would be to expand the address range.

Fig.1, below, shows a minimal memory management system. The memory management unit (MMU) fits between the address bus of the CPU and the main memory bus. The MMU has some registers that are in peripheral address space. The operating system can write these registers with address offsets, also known as relocation constants.

Fig.1: A simplified memory management system – the program counter in the CPU no longer addresses memory directly, but produces a virtual address which enters the MMU. A relocation constant is added to the virtual address to create the physical address in memory

The relocation constant is added to the address coming out of the CPU, known as the virtual address, in order to create the actual RAM address, known as the physical address. The process of producing the physical address is called mapping. The term comes from cartography, where the difficulty of representing a spherical planet on flat paper inevitably caused distortion, so things weren't where you thought they were.
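The following C sketch models the mapping of Fig.1: the operating system loads a relocation constant into the MMU, and every virtual address the CPU emits has that constant added to it to give the physical address. The limit check goes slightly beyond the minimal figure, to hint at how the same unit also provides the isolation described earlier; the register names and addresses are invented for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Sketch of the mapping in Fig.1: the MMU holds a relocation constant
 * (written by the operating system) which is added to every virtual
 * address the CPU emits to form the physical address. The limit field
 * is an extra touch, showing how the same unit can confine a process
 * to its own region of RAM.                                           */
typedef struct {
    uint32_t relocation;   /* base of this process's region in RAM */
    uint32_t limit;        /* size of the region (for protection)  */
} mmu_t;

static bool mmu_map(const mmu_t *mmu, uint32_t virtual_addr,
                    uint32_t *physical_addr)
{
    if (virtual_addr >= mmu->limit)
        return false;                        /* would trap to the OS */
    *physical_addr = virtual_addr + mmu->relocation;
    return true;
}

int main(void)
{
    mmu_t user = { .relocation = 0x40000, .limit = 0x10000 };
    uint32_t phys;

    if (mmu_map(&user, 0x0100, &phys))       /* a program counter value */
        printf("virtual 0x0100 -> physical 0x%05X\n", (unsigned)phys);

    if (!mmu_map(&user, 0x20000, &phys))     /* outside the region */
        printf("virtual 0x20000 rejected: protection fault\n");
    return 0;
}
```

The point is that the running program never sees, and cannot forge, a physical address; it can only ask for virtual ones, and the operating system decides where those actually land.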


