Hackers' Paradise: The rise of soft options and the demise of hard choices

How it all went wrong for computer security

Opinion John Watkinson argues that the ubiquity of hacking and malware illustrates a failure of today’s computer architectures to support sufficient security. The mechanisms needed to implement a hack-proof computer have been available for decades but, self-evidently, they are not being properly applied.

The increasing power and low cost of computers mean they are being used more and more widely, and put to uses that are increasingly critical. By critical, I mean that the result of a failure could be far more than inconvenience.

Recently we became aware that hackers had found it was possible to open the doors of a Tesla car. But that’s not particularly exceptional: vulnerability is becoming the norm. Self-driving cars are with us too, and who is to blame if one of these is involved in a collision? What if it transpires that it was hacked?

It is not necessary to spell out possible scenarios in which insecure computers can allow catastrophes to occur. It is, in my view, only a matter of time before something really bad happens as a result of hacking or some other cause of IT unreliability. Unfortunately, it seems that it is only after such an event that something gets done. Until then complacency seems to rule.

In the classic Von Neumann computer architecture, there is an address space, most of which is used to address memory with the remainder used to address peripherals. The salient characteristic of the Von Neumann machine is that the memory doesn’t care what is stored in it.

John von Neumann and the IAS computer

John von Neumann was an inspiration but there was no room for sloppy programming in his designs
Photo: Alan Richards, courtesy Shelby White & Leon Levy Archives Center, Institute for Advanced Study (IAS)

Memory could contain instructions, stacks, data to be processed or results. Whilst this gives maximum flexibility, it also makes the system vulnerable to inappropriate programming. One incorrect address could mean writing data on top of the instructions or the stack or sending random commands to peripherals.
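
To make that hazard concrete, here is a minimal C sketch of my own (not from the article): a single byte array stands in for a flat Von Neumann address space, and the region boundaries and the mistaken address are invented for illustration.

/* Sketch only: a flat array models a Von Neumann address space in which
 * the memory has no idea whether a location holds code, stack or data.
 * Region boundaries and the stray address are made-up values. */
#include <stdio.h>
#include <string.h>

#define MEM_SIZE   64
#define CODE_BASE   0    /* pretend the program's instructions live here */
#define DATA_BASE  48    /* user data is supposed to go here             */

int main(void)
{
    unsigned char memory[MEM_SIZE] = {0};

    /* Load a pretend three-byte "program" into the code region. */
    memcpy(&memory[CODE_BASE], "\x01\x02\x03", 3);

    /* The program means to store a record at DATA_BASE, but one
     * miscalculated address points at the code region instead.
     * The memory accepts the write without complaint. */
    unsigned bad_address = CODE_BASE;          /* should have been DATA_BASE */
    memcpy(&memory[bad_address], "user record", 11);

    printf("first 'instruction' is now 0x%02X\n", memory[CODE_BASE]);
    return 0;
}

In a real machine the same kind of stray write could just as easily land on the stack or on a memory-mapped peripheral register, with consequences that are far harder to spot.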

Before the Orwellian term “Information Technology” had been dreamed up by someone who probably called a plumber to deal with an overflow, we had computing. Computers were relatively expensive and had to be seen as a shared resource. In that case it was even less acceptable for a bug in one user’s code to bring the whole system down, so something had to be built into the system to prevent it: memory management.

One of the functions of memory management was effectively to isolate users or processes from one another and from the operating system. It didn’t matter whether the bug was due to an honest mistake, incompetence or malice, it would not compromise the whole system. The proliferation of hacking suggests that we are today forced to assume that malice will take place, rather than being surprised or disappointed after the event.

The cost of a CPU is largely a function of its word length, so there is pressure to keep the word length down to the precision needed for most jobs. On the odd occasion where this was not enough, double precision could be used, with the processor taking two swipes at a longer data word held in a pair of memory locations.
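
As a rough illustration of that two-swipe idea (my own sketch, with the 16-bit word size and the names invented), a 32-bit value on a short-word machine can be held in two adjacent words and added in two passes, with a carry propagated between them.

/* Sketch only: double-precision addition on a notional 16-bit machine,
 * done as two passes with a carry between the low and high words. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint16_t lo, hi; } dword16;   /* value spread over two words */

static dword16 add_double(dword16 a, dword16 b)
{
    dword16 r;
    uint32_t low = (uint32_t)a.lo + b.lo;      /* first pass: low words       */
    r.lo = (uint16_t)low;
    uint32_t carry = low >> 16;                /* carry out of the low pass   */
    r.hi = (uint16_t)(a.hi + b.hi + carry);    /* second pass: high words     */
    return r;
}

int main(void)
{
    dword16 a = { 0xFFFF, 0x0001 };            /* represents 0x0001FFFF       */
    dword16 b = { 0x0002, 0x0000 };            /* represents 0x00000002       */
    dword16 s = add_double(a, b);
    printf("sum = 0x%04X%04X\n", s.hi, s.lo);  /* expect 0x00020001           */
    return 0;
}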

The processor word length also limited the address range the processor could directly generate. As memory costs began to fall, it became possible to afford more memory than the CPU could directly address. This would be an ongoing phenomenon and another function of memory management would be to expand the address range.

Fig.1, below, shows a minimal memory management system. The memory management unit (MMU) fits between the address bus of the CPU and the main memory bus. The MMU has some registers that are in peripheral address space. The operating system can write these registers with address offsets, also known as relocation constants.

Minimal memory management system

Fig.1: A simplified memory management system – the program counter in the CPU no longer addresses memory directly, but produces a virtual address which enters the MMU. A relocation constant is added to the virtual address to create the physical address in memory

The relocation constant is added to the address coming out of the CPU, known as the virtual address, in order to create the actual RAM address, known as the physical address. The process of producing the physical address is called mapping. The term comes from cartography, where the difficulty of representing a spherical planet on flat paper inevitably caused distortion, so things weren't where you thought they were.
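
As a sketch of the mapping in Fig.1 (my own illustration, with made-up address widths and register names), the following C fragment adds an operating-system-supplied relocation constant to every virtual address the CPU emits, producing a physical address that can be wider than anything the CPU could generate on its own.

/* Sketch of Fig.1 with invented sizes: a 16-bit CPU address is widened
 * by the MMU, so the physical address space can be larger than the CPU
 * could reach directly. */
#include <stdint.h>
#include <stdio.h>

/* MMU relocation register: sits in peripheral address space, so only
 * the operating system is meant to write it. */
static uint32_t relocation_constant;

static void os_set_relocation(uint32_t base)
{
    relocation_constant = base;
}

/* Every address leaving the CPU (e.g. the program counter) passes
 * through this mapping before it reaches the memory bus. */
static uint32_t mmu_map(uint16_t virtual_addr)
{
    return relocation_constant + virtual_addr;   /* virtual -> physical */
}

int main(void)
{
    os_set_relocation(0x40000);   /* OS places this program at 256 KB   */

    uint16_t pc = 0x0100;         /* program counter: a virtual address */
    printf("virtual 0x%04X -> physical 0x%05X\n", pc, mmu_map(pc));
    return 0;
}

Moving a program elsewhere in physical memory then only requires the operating system to change the relocation constant; the program itself continues to see exactly the same virtual addresses.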
