Exploiting the mainframe for new workload requirements

Fully exploiting that valuable asset

IT architects and CIOs must weigh a number of factors when selecting where to run workloads and how to design systems for efficient operation over extended periods of time.

Chief amongst these are the nature of the workloads themselves, the operating systems on which they are supported and the middleware they require in order to function. These may in turn dictate the hardware platforms on which they could function. Ultimately, everything should relate directly to business requirements.

When looking at platform "hardware" selection, the choices that are front of mind for new applications are typically based on x86 or some kind of RISC-based architecture. If a mainframe is in place, that might be considered, but it is often assumed that it is there primarily to run traditional applications that are native to that platform. However, with mainframes such as the IBM System z now capable of supporting Windows and Linux, and even the native environment now supporting modern techniques, standards and programming environments, it makes sense to include the 'big iron' option when looking at new application requirements too.

In order to evaluate the best options when placing workloads, it is also essential to consider where data currently resides, by system and geography, along with the interfaces available to facilitate systems interoperability, and to look closely at what workload management tools, if any, are available to handle operations across multiple platforms. The role of standards and the openness of platforms, especially around data integration and access, is becoming increasingly important in ensuring that workloads can be moved effectively around the broader infrastructure.

So, if you have a System z environment, which after all represents a significant investment and a high value asset, how do you assess whether it makes sense to drive additional returns by deploying new workloads on it?

One thing to bear in mind is that whilst people like the idea of making logical decisions based on objective criteria, it is fair to say that many choices, in all areas of business (not just IT), are made using less than complete sets of considerations. In addition, people being what they are, some of the justification may be made using 'convenient' selective evidence, or judgements and weightings that may be more than a little subjective. For the purposes of this discussion, however, let's assume you want to make the right decisions for the right reasons. With this in mind, what is required is an application architecture that delivers the information users require, whenever and wherever they need it, without being overly complex to manage or difficult to secure. A key question here is whether a given workload is best suited to run on a mainframe, on a hybrid mainframe/open systems platform, or purely in an open systems environment.

This is no easy decision, not least because the mainframe itself, in the shape of the IBM System z, now has the ability to host not only traditional z/OS workloads but also those that run on Linux and Unix platforms. It will also, in the near future, support Microsoft Windows environments through the use of a variety of offload engines. However, there are some rules of thumb that can help.

For example, situations that point towards a System z approach include:

  • Where significant sources of data (e.g. data warehouses, transactional, operational data stores etc.) are held in System z data sources including DB2, VSAM, IMS amongst others;
  • There are existing System z and associated skills available and the organisation is prepared to continue to invest in them/expand them;
  • Mission critical situations where “Management”, “Security” and “Risk” drive application platforming policies;
  • Organisations where System z is operationally connected to major data repositories;
  • Scenarios with highly variable workload demand;
  • Where continuous access to data resources and reports is essential for people, other systems and business processes to operate effectively.

Operational situations where combining a mainframe system with open systems in a hybrid approach might be appropriate include:

  • Systems where the majority of data sources and business information is held on a variety of platforms including mainframes, Unix / Linux and Windows systems;
  • When geographic distribution significantly improves performance for users who are remote from centralised mainframe resources;
  • When a cost/benefit analysis determines that the complexity of a multi-platform environment is offset by the mixed price/performance profiles of the systems involved. In these situations it is now possible that use of mainframe offload engines could provide an alternative to traditional hybrid approaches.
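In practice, the weighing up of criteria like those in the lists above is often done informally, or with the subjective weightings mentioned earlier. As a toy illustration of making that process explicit (all criteria names, weights, and scores here are hypothetical, invented for this sketch and not drawn from the article), a simple weighted-scoring comparison might look like:

```python
# Illustrative only: a toy weighted-scoring model for platform selection.
# The criteria, weights and ratings below are hypothetical examples.

CRITERIA_WEIGHTS = {
    "data_locality": 0.30,       # how much of the data already lives on the platform
    "skills_available": 0.20,    # existing skills and willingness to invest in them
    "availability_needs": 0.25,  # continuous-access / mission-critical requirements
    "workload_variability": 0.15,
    "geographic_fit": 0.10,      # proximity of users to the resources
}

def score_platform(ratings: dict) -> float:
    """Combine per-criterion ratings (0-10) into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)

# Example: one data-heavy, mission-critical workload rated against two options.
mainframe = {"data_locality": 9, "skills_available": 7, "availability_needs": 9,
             "workload_variability": 8, "geographic_fit": 4}
hybrid = {"data_locality": 7, "skills_available": 6, "availability_needs": 8,
          "workload_variability": 7, "geographic_fit": 8}

best = max([("System z", score_platform(mainframe)),
            ("Hybrid", score_platform(hybrid))], key=lambda t: t[1])
print(best[0], round(best[1], 2))
```

The point is not the numbers themselves but that writing the weightings down forces them into the open, where the 'convenient' selective evidence the article warns about is easier to challenge.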

It should also be borne in mind that employing a hybrid delivery model can make sense in scenarios where workloads span a number of platforms but where it is important to deliver high quality of service. Such situations are becoming more common as composite applications are created reusing pre-existing functionality already in place in different applications or data stores. The mainframe is now a pretty good citizen and can play a full and often central role in an SOA environment.

But forcing a solution where it doesn't fit applies as much to the mainframe as to other platforms. There are IT solution scenarios where it is clear that, outside of exceptional circumstances, a mainframe approach does not make sense. We won't go into detail here, as architects generally have no problem dismissing the mainframe option; suffice it to say that there will be many situations in which placing workloads on distributed platforms is clearly the correct approach to take.

In all scenarios there are likely to be multiple deployment options available for workload platform selections and no system will be a perfect match for everything. The important thing is to ensure that all appropriate options are given due consideration rather than simply deploying workloads without active thought or because "that's the way we have always done this". ®