Original URL: https://www.theregister.com/2012/11/16/end_of_mainframe_discipline/

Easy to use, virus free, secure: Aaah, how I miss my MAINFRAME

Back when installing drivers was someone else's problem

By Dave Mandl

Posted in Software, 16th November 2012 11:02 GMT

Mention mainframe computers today and most people will conjure an image of something like an early analogue synthesiser crossed with a brontosaurus. Think a hulking, room-sized heap of metal and cables with thousands of moving parts that requires an army of people just to keep it plodding along.

A no-name PC today would blow a high-end 1970s mainframe out of the water thanks to the miniaturisation of electronics and vast improvements to performance in the decades since. At the same time, a desktop computer typically has to worry about just one user, its owner: machine cycles don’t have to be shared with potentially hundreds of other people and their processes, and the configuration of one person’s workstation can be completely different from the workstation at the next desk over. This is all a very good thing.

However, some of the “limitations” of mainframes were blessings in disguise.

Mainframe users didn’t need to know or care where the computer was physically located: it could have been, and often was, halfway across the country. It was an abstract thing that just worked, not much different from an electricity utility. You didn't have to pull up a chair to the actual beast; you connected to it remotely.

Developers didn’t have to concern themselves with “maintaining” the machine or peripheral devices such as disk or tape drives, and in fact couldn’t do so if they wanted to. All these things were just there, always “on”. With mainframes, there were well-run, disciplined, knowledgeable teams dedicated full-time to making sure everything was in working order. No one but the operations team had to worry about disk errors or bad memory cards. In short, as a mainframe user you had people watching over you, your data, and your apps. A benign Big Brother who made sure everything was kept humming.

Granted, this is all far too restrictive for 21st century computing needs, and certainly not enough to make anyone wish for a return to the days of the IBM System/360.

But these kinds of “lifestyle” benefits did allow mainframe users to concentrate on more important things. For programmers, as a side-effect, the restrictions of the corporate mainframe environment also prevented certain bad practices and enforced a kind of healthy discipline that to a great extent no longer exists.

Where did I leave that document?

Today developers can, if they want to, build tools and applications on isolated machines, with no checks and balances. With mainframes, applications and data were stored centrally, not on users’ personal desktops. Everything was more or less locatable. Now, locating things can be impossible. A “find” command run across a network is not very useful if some machines aren’t on the network to begin with, or if the data in question lives on a local drive unshared with the rest of the pack. This makes it easier to hide or bury things, intentionally or unintentionally.
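
To make that concrete, here is a sketch of the kind of search a sysadmin might run, assuming a Unix shop where each machine's shared storage is mounted under /net (the hostname and filename here are hypothetical):

    # Hunt for a stray application config across every shared mount
    find /net/*/home -name "eod_rates.cfg" 2>/dev/null

    # A copy sitting on devbox42's unshared local drive, say
    # /home/alice/private/eod_rates.cfg, never appears in the results.
    # The search can only see what each machine chooses to export.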

At one bank I worked for, when a certain senior developer left it took months to track down all the mysterious systems and components he'd built, because he hadn't told anyone where they resided, exactly what they did, or how they worked. A tech manager had to check one machine after another manually until he located all the various applications the former employee had set up.

A related problem that comes with desktop decentralisation is the ability to use the job scheduler cron (or an equivalent) locally. On mainframes there was generally one central scheduler where a system operator could see the details for all batch jobs across users and applications. In the client-server world, job-management packages such as Autosys use databases that similarly live on central servers: developers and support staff create and modify Autosys jobs via a web app that controls this shared database, and all of these can be browsed and searched. But anyone with a Windows or Linux box, even one connected to a central server, can still schedule private jobs using Task Scheduler or a local crontab file. Not a very rare occurrence.

There may be perfectly reasonable uses for these localised tools, but they’ll be unknown to the official company-wide scheduler and effectively invisible to system administrators. If a developer who has set up local batch jobs leaves the firm, there’s a chance no one will even be aware of the existence of these jobs, much less be able to find them.
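
As an illustration of how little it takes, here is the sort of entry a developer might drop into a local crontab; the script name and schedule are hypothetical:

    # Installed with "crontab -e" on one developer's Linux box.
    # Runs nightly at 23:30, completely outside the firm-wide scheduler:
    30 23 * * * /home/jsmith/scripts/eod_reconcile.sh >> /tmp/eod.log 2>&1

Nothing in the central Autosys database records that this job exists. An administrator would have to run "crontab -l" as that user, on that particular machine, even to see it.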

All that code you wrote last night before checking it in? Well, your dying C:\ ate it

Working locally on a desktop workstation means being excluded from various safety nets that are provided by centrally managed networks and were literally unavoidable with mainframes. The most obvious of these is backups. Since there was no hiding from the central storage devices connected to a mainframe, everything you did, every line of code you wrote and every database record you stored, was backed up at least nightly and sometimes as often as hourly.

Today many users and even developers store data, whether work-in-progress or live applications and databases, on local C: drives. Even “temporary” storage of files on a local drive has a way of becoming permanent through neglect or laziness. Needless to say, having one un-backed-up copy of any file is living on the edge: one wrong keystroke on the command line is all it takes to vaporise a lot of work you hadn’t yet checked into the version control system.
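
The keystroke in question is depressingly mundane. A cautionary sketch, not a command to try:

    # Intended: clean out the object files before a rebuild
    #   rm -f *.o
    # One accidental space changes the meaning entirely: the shell
    # expands the bare "*" to every file in the directory, rm deletes
    # the lot, and the -f flag even suppresses the complaint about
    # the now-meaningless ".o" argument:
    rm -f * .o

If the only copy lived on an un-backed-up local drive, that's the end of it. On the mainframe, last night's backup would have had your back.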

One under-the-radar use of desktop workstations that is the bane of system administrators and tech managers everywhere is the installation of unauthorised software.

The most obvious risk is malware: viruses and worms unintentionally installed on your machine and possibly spread from there throughout the corporate network. (This brings together the worst of both worlds: the ability of PC users to load software directly onto their own machines and the ability of malware to cause widespread damage when installed on a networked computer.)

Far less devastating, but still a potential source of problems if you’re part of a much larger IT group, is the introduction of legitimate software that deviates from the company’s standards. This is not to advocate blind obedience to the arbitrary, if not outright ridiculous, rules often set down by management. But when everyone at your firm is using CVS for source control and you (or even you and three colleagues) make an impulsive decision to switch to Git, chaos ensues.

Is that a USB stick in your pocket?

A more common situation is one where a developer loads an obscure and possibly questionable software package or compiler onto a local PC. This can lead to ugly infrastructure splits, where one person has unilaterally decided to develop an application using a pet language or package that no one else in the group knows or, worse, introduces incompatibilities with other apps. Naturally this situation can also arise with networked workstations, where the unauthorised software is downloaded from the web and installed on the corporate network—obviously a much more common and easier approach than loading software locally from a CD or flash drive.

But with mainframes, the in-house machine was effectively walled off, and you were required to go through proper channels to get something like a new compiler installed. A potential bureaucratic nightmare, certainly, but one that at least ensured people were adhering to standards and using well-tested software.

Just as local drives allow PC users to introduce arbitrary software and data onto a corporate network, they also allow the opposite. If you’re a system administrator concerned about the possible theft of sensitive code or information, the ability of users to copy gigabytes of data onto a USB drive with no effort probably keeps you awake at night. Again, theft or unauthorised distribution of code is far easier via email or FTP, though any competent network administrator can detect that. But either way, with a network that is essentially wide open to the outside world, and ports that allow data to be copied to a device smaller than your palm, pretty much any collection of bits can leave the building easily. Score another point for mainframes.
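
How little effort? Roughly this much, assuming a stock Linux desktop that auto-mounts the stick (the mount point and project name are hypothetical):

    # The whole source tree walks out of the door in one line,
    # leaving no trace in any central log:
    cp -r ~/src/trading-engine /media/usb/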

The development of tiny, powerful, and fully user-configurable computing devices (and, for that matter, the microchip itself) represented a leap forward that no reasonable person would want to undo, and I admit that my defence here of the glory days of the mainframe is somewhat tongue-in-cheek. Though I do occasionally get nostalgic for the gargantuan Amdahl 470 I developed systems on at my first job.

However, it’s silly to think technology can regularly take five steps forward without even one step back. There were benefits to working on mainframes: a full-service environment where round-the-clock teams took care of virtually all support, allowing application developers to focus on their real work; the freedom to be blissfully unaware of how the 20,000-kilogram computer, six-foot-tall tape drives, and other monstrosities in the computer room operated; and, most of all, the enforcement of various kinds of discipline in building and maintaining code that are often non-existent today.

There’s no reason developers can’t be just as disciplined in the post-mainframe era, but rather than being forced into it by the rigid topology of the development environment, they have to rely on themselves and their human colleagues to keep each other honest. Good luck. ®