The workload challenge
Have mainframes still got a job?
The mainframe was there first, but is it the dinosaur that many people assume?
For some workloads, it has never been bettered: many of today's business web sites store a production database on a mainframe host, for example.
For applications that rely on large-scale transaction processing, support thousands of users, manage terabytes of information or handle high-bandwidth connections, and for businesses that need high quality of service and resilient security, the mainframe architecture is often still the best choice, and offers the best return on investment.
But doesn't that type of heavy lifting describe more and more of our IT tasks? If so, could the Big Unit be due a renaissance?
In our Regcast, Tim Phillips is joined by Ray Jones, IBM VP of Worldwide zSeries Software - translation, "Mr Mainframe" - and Freeform Dynamics' Dr Stats Tony Lock - a man who knows all about the subject from many years of practical experience - to discuss where a mainframe is and isn't best, whether you might need one in the future, how you make the transition - and whether our preconceptions sometimes stop us from choosing the best IT architecture for the job.
The event is on the 7th of April at 11:00 BST.
If you've been wondering about how to manage your workloads efficiently, or about how you might get more from your mainframe, you can join us for free right here.
I'd just like to point out
that AT&T, up until the late '80s, used to run a large part of their environment on mainframes, many running UNIX! And you probably ought to look up other non-IBM OSes for 370-architecture systems as well. One of my personal favourites was MTS. I saw a demonstration of access to ARPANET (you know, a forerunner of the Internet) from this OS in the very early '80s. Also, for all its problems, the influential OS Multics was a mainframe OS, and it established features that would appear in UNIX, VMS and a host of other OSes long forgotten.
I was involved with installing and running a channel-based Ethernet device running TCP/IP on a mainframe, linking it to Sun and VAX systems in the late '80s (again, under UNIX).
I think that one needs to separate the hardware from the software, as there is a significant difference.
Mind you, if you look at some of the innovations, such as virtual addressing, virtualised systems, key-based page-level memory protection, I/O offload, multi-processor systems, distributed processing, hierarchical storage controllers, DMA, memory cache, multi-user and multi-tasking operation, use of ASCII (one of your benchmarks; ASCII was mandated by US government contracts in 1968, and before this was a COMMUNICATION standard, not a COMPUTING one), microcode, solid-state electronics and a host of more minor things, the mainframe was often one of the first systems to implement them (often because the features were so expensive to implement that only mainframe-class machines could benefit).
Whilst many of these were not invented on the 360/370/zSeries systems (now the only real mainframe architecture remaining), they were almost all pioneered on mainframe-class systems like Atlas, KDF9, Cyber/CDC, UNIVAC and others.
COBOL, Assembler and Mainframes ARE sexy.
The mainframe is a very much misunderstood beast: the technology has been confused with the boring data processing departments that it tends to live in, and many IT people in this day and age are simply ignorant of the technology. Whatever midrange technology you can think of, it was there on mainframes decades before anywhere else: distributed transactions, message queueing, stateless sessions, virtualisation, etc. Unfortunately, due to those roots in data processing, mainframes have tended to remain COBOL- or assembler-based, so it's just not sexy enough for most.