How von Neumann still controls the desktop
Putting the business first
Workshop When John von Neumann first wrote up his notes on the logical design of the EDVAC computer during a train journey to Los Alamos in 1945, it is unlikely that he fully appreciated the impact they would have.
For all their complexity, their cores and threads, their caches and bus architectures, modern computers still follow what is known (no doubt to the consternation of his team mates, whose names were excluded from the report) as the von Neumann architecture. Simply put, one element of the processor deals with arithmetic operations, another controls what happens next, and some memory or storage capability is shared between the two.
It’s an important point that, while desktop computers are small, sleek and designed for use by a single individual, and server computers tend to be rack-mounted and hidden away in equipment rooms, they both do pretty much the same thing. This principle is at the heart of a fundamental trade-off when it comes to designing IT hardware architectures – the processing and storage have to take place somewhere. If the workload is not going to be on the desktop, it’s going to take up CPU cycles and disk capacity on the server, and the requisite information needs to get from one to the other in some way.
This may sound obvious, but it’s a principle that is frequently forgotten. In the heyday of 3D environments such as Second Life, for example, one anecdote doing the rounds was that a single avatar could consume up to a full server’s worth of resources. While these discussions have gone by the wayside (as, for many, has Second Life), the fact that it caused a stir at all illustrates the ‘out of sight, out of mind’ attitude that can be associated with server-side processing.
Of course, when it comes to deciding whether something should be run on server or on a desktop/laptop client, processing power and data storage are not the only criteria to be balanced. Other factors include:
• The level of security required on the client, in that it is much easier to provide a locked down environment if the majority of processing (and hence data storage) takes place server-side.
• Networking considerations, both in terms of available bandwidth and network reliability – if either is poor, it makes more sense to load up a more powerful client.
• Management considerations, in terms of both the centralised monitoring and control of applications being run, and the flexibility to be provided to the user for configuring their own desktop.
• Relative costs, for example, the cost of bandwidth can vary depending on where the client is located at any point in time.
Given such factors, it’s unlikely that any organisation can arrive at a single desktop configuration that suits all types of users at an appropriate cost. There’s a wealth of options available today, from various flavours of virtualisation (virtual desktop infrastructure, session virtualisation and application streaming, for example), to browser-based interfaces onto in-house and hosted applications. As smartphones get smarter, and new form factors such as netbooks and tablets start to emerge, the range of options increases still further. As a result, deciding where workloads and data should actually reside, with or without direct user intervention, can become quite bewildering.
Faced with this broadening catalogue of possibilities, our advice is quite simple: start with business users, their needs and the constraints they face. Users tend to fall into a reasonably small set of categories depending on their jobs, their working practices and constraints – for example, whether they work from home or in the office, whether they handle sensitive data and so on. We’ve used such categorisations as the basis for data cuts in our research, and we know from experience just how useful they can be when it comes to identifying better-bounded groups whose needs can be dealt with specifically.
While user categorisation offers a starting point, the second ‘gotcha’ concerns future-proofing. The law of unintended usage models comes into play here – from my days as an IT manager, I am quite familiar with the thought, “But that’s not how it is supposed to be used!” Sometimes this may be due to users doing things without thinking about the consequences – we heard one story about a user streaming catch-up TV shows over their VPN link onto their virtual desktop, for example, clogging up both network bandwidth and server resources.
It would be all too easy to size a desktop environment for a given set of users who appear to have relatively modest processing or networking requirements, only to find that it quickly becomes inadequate. On other occasions, it can be parallel initiatives – rolling out unified communications technologies, for example – that prove too much for the architecture as defined. In either case, it will be the help desk that suffers, so it is worth working through a few scenarios and keeping tabs on other projects, to ensure that the potential for such risks is minimised.
We may still be reliant on the von Neumann architecture when it comes to computing. Whether or not this changes, and whatever compute models emerge in the future, the advice to put the business first will remain. ®
Two articles for the price of one?
Utterly misleading title on this article - we get a few paragraphs on von Neumann, then he goes off topic onto a how-to-spec-your-business-desktops article!
Combining computing science theory with business advice seemingly doesn't work very well.
So your employer saves a ton of money on your wage while shelling out lots of pounds for specialized tools.
How much money does your company spend on sw licenses each year?
Well, speaking as the guy who does the IT for a small business...
I have no idea what you just typed.
But I find the wizards and active directory in SBS 2003 easy to use thank you.
I was hoping for something more interesting - from a technical point of view, that is.
A much more serious aspect of von Neumann Architecture
A fundamental attribute of the von Neumann architecture this paper doesn't mention is that a common memory array contains both instruction codes and data. The decision as to whether a word fetched by the processor is to be interpreted as an instruction or as data depends entirely on the previous state of the machine - if the last fetch was the parameter of an instruction, this fetch is an instruction and so on. This represents a huge security vulnerability that has been systematically exploited in many ways for many years - "buffer overflow" and "stack overflow" attacks that cause maliciously injected data to be interpreted as machine instructions dominate the professional attack space. But even accidental loss of instruction pointer integrity can be extremely damaging - causing uncontrolled execution of arbitrary instructions, and it does happen, as in "hey, my machine locked up!".
The major contender architecture - Harvard - has separate instruction and data memories, and is widely used in industrial controllers, for the very reason that they have to be robust. Harvard architecture didn't take off in the office computer space due to the initial high cost of memory, but that's not been a major consideration for some time. I've been waiting for years for a Harvard architecture PC CPU, but in vain. Even a dual-stack operating system that segregated function call and return addresses from function parameters would be a huge step forward, even if it ran on a vN CPU. But nothing's being done. Instead we have numerous questionable sticking plasters such as random memory allocation, stack validation et al, which regularly prove their ineffectiveness due to the extent of the underlying festering wound - an almost unsecurable architecture. von Neumann was not considering security when he came up with his computing model.