Easy to use, virus free, secure: Aaah, how I miss my MAINFRAME

Back when installing drivers was someone else's problem

Mention mainframe computers today and most people will conjure an image of something like an early analogue synthesiser crossed with a brontosaurus. Think a hulking, room-sized heap of metal and cables with thousands of moving parts that requires an army of people just to keep it plodding along.

A no-name PC today would blow a high-end 1970s mainframe out of the water thanks to the miniaturisation of electronics and vast improvements to performance in the decades since. At the same time, a desktop computer typically has to worry about just one user, its owner: machine cycles don’t have to be shared with potentially hundreds of other people and their processes, and the configuration of one person’s workstation can be completely different from the workstation at the next desk over. This is all a very good thing.

However, some of the “limitations” of mainframes were blessings in disguise.

Mainframe users didn’t need to know or care where the computer was physically located: it could have been, and often was, halfway across the country. It was an abstract thing that just worked, not much different from an electricity utility. You didn't have to pull up a chair to the actual beast; you connected to it remotely.

Developers didn’t have to concern themselves with “maintaining” the machine or peripheral devices such as disk or tape drives, and in fact couldn’t do so if they wanted to. All these things were just there, always “on”. With mainframes, there were well-run, disciplined, knowledgeable teams dedicated full-time to making sure everything was in working order. No one but the operations team had to worry about disk errors or bad memory cards. In short, as a mainframe user you had people watching over you, your data, and your apps. A benign Big Brother who made sure everything was kept humming.

Granted, this is all far too restrictive for 21st century computing needs, and certainly not enough to make anyone wish for a return to the days of the IBM System/360.

But these kinds of “lifestyle” benefits did allow mainframe users to concentrate on more important things. For programmers, as a side-effect, the restrictions of the corporate mainframe environment also prevented certain bad practices and enforced a kind of healthy discipline that to a great extent no longer exists.

Where did I leave that document?

Today developers can, if they want to, build tools and applications on isolated machines, with no checks and balances. With mainframes, applications and data were stored centrally, not on users’ personal desktops. Everything was more or less locatable. Now, it can be impossible. A “find” command run across a network is not very useful if some machines aren’t on the network to begin with, or if the data in question lives on a local drive unshared with the rest of the pack. This makes it easier to hide or bury things, intentionally or unintentionally.
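The limitation described above is easy to demonstrate. Below is a minimal sketch (the paths and filename are hypothetical, for illustration only): two "machines" are simulated as directories, one of which exports its files to a shared mount while the other keeps them on an unshared local disk. An audit that walks only the shared tree never sees the second copy.

```shell
#!/bin/sh
# Simulate two machines: box1 exports its files to a shared mount,
# box2 keeps them on a local, unshared disk (paths are hypothetical).
mkdir -p /tmp/demo/shared/box1 /tmp/demo/local/box2
echo "report" > /tmp/demo/shared/box1/quarterly.dat
echo "report" > /tmp/demo/local/box2/quarterly.dat   # never exported

# The network-wide audit only walks the shared tree,
# so box2's copy is invisible to it:
find /tmp/demo/shared -name 'quarterly.dat'
# -> /tmp/demo/shared/box1/quarterly.dat  (box2's file is never reported)
```

However thorough the `find`, it can only report what is mounted and readable from where it runs, which is exactly why locally hoarded data stays hidden.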

At one bank I worked for, when a certain senior developer left it took months to track down all the mysterious systems and components he'd built, because he hadn't told anyone where they resided, exactly what they did, or how they worked. A tech manager had to check one machine after another manually until he located all the various applications the former employee had set up.

A related problem that comes with desktop decentralisation is the ability to use the job scheduler cron (or an equivalent) locally. On mainframes there was generally one central scheduler where a system operator could see the details for all batch jobs across users and applications. In the client-server world, job-management packages such as Autosys use databases that similarly live on central servers: developers and support staff create and modify Autosys jobs via a web app that controls this shared database, and all of these can be browsed and searched. But anyone with a Windows or Linux box, even one connected to a central server, can still schedule private jobs using Task Scheduler or a local crontab file. Not a very rare occurrence.

There may be perfectly reasonable uses for these localised tools, but they’ll be unknown to the official company-wide scheduler and effectively invisible to system administrators. If a developer who has set up local batch jobs leaves the firm, there’s a chance no one will even be aware of the existence of these jobs, much less be able to find them.
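Per-machine audits are about the only defence. A minimal sketch, assuming a Linux box with standard cron: walk the local account list and dump every non-empty private crontab, so locally scheduled jobs are at least discoverable. This is an illustration, not a complete audit (it doesn't cover `/etc/cron.d`, systemd timers, or Windows Task Scheduler), and `crontab -l -u` needs root privileges to read other users' tables.

```shell
#!/bin/sh
# Enumerate every local user's private crontab (run as root).
# Users without a crontab, or unreadable ones, are silently skipped.
cut -d: -f1 /etc/passwd | while IFS= read -r user; do
  if jobs=$(crontab -l -u "$user" 2>/dev/null) && [ -n "$jobs" ]; then
    printf '=== %s ===\n%s\n' "$user" "$jobs"
  fi
done
```

Of course, a script like this has to be run on every box individually, which is precisely the point: there is no single place, as there was on the mainframe, where all of these jobs can be seen at once.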
