Exploiting the mainframe for new workload requirements

Fully exploiting that valuable asset

IT architects and CIOs have a number of factors to consider when deciding where to run workloads and how to design systems that will operate efficiently over extended periods of time.

Chief amongst these are the nature of the workloads themselves, the operating systems on which they are supported and the middleware they require in order to function. These may in turn dictate the hardware platforms on which they can run. Ultimately, everything should relate directly to business requirements.

When looking at hardware platform selection, the choices that are front of mind for new applications are typically based on x86 or some kind of RISC architecture. If a mainframe is in place it might be considered, but it is often assumed to be there primarily to run traditional applications that are native to that platform. However, with mainframes such as the IBM System z now capable of supporting Windows and Linux, and even the native environment now supporting modern techniques, standards and programming environments, it makes sense to include the 'big iron' option when looking at new application requirements too.

In order to evaluate the best options when placing workloads, it is also essential to consider where data currently resides, by system and geography, along with the interfaces available to facilitate systems interoperability, and to look closely at what workload management tools are available, if any, to handle operations across multiple platforms. The role of standards and the openness of platforms, especially around data integration and access, is becoming important in ensuring that workloads can be moved effectively around the broader infrastructure.

So, if you have a System z environment, which after all represents a significant investment and a high value asset, how do you assess whether it makes sense to drive additional returns by deploying new workloads on it?

One thing to bear in mind is that whilst people like the idea of making logical decisions based on objective criteria, it is fair to say that many choices, in all areas of business (not just IT), are made using less than complete sets of considerations. In addition, people being what they are, some of the justification may be made using 'convenient' selective evidence or judgements and weightings that may be more than a little subjective. For the purposes of this discussion, however, let's assume you want to make the right decisions for the right reasons. With this in mind, what is required is an application architecture that delivers the information users require, whenever and wherever they need it, without being overly complex to manage or difficult to secure. A key question here is whether a given workload is best suited to run on a mainframe, on a hybrid mainframe / open systems platform, or purely in an open systems environment.

This is no easy decision, especially as the mainframe itself, in the shape of the IBM System z, now has the ability to host not only traditional z/OS workloads but also those that run on Linux and Unix platforms. It will also, in the near future, support Microsoft Windows environments through the use of a variety of offload engines. However, there are some rules of thumb that can help.

For example, situations that point towards a System z approach include:

  • Where significant sources of data (e.g. data warehouses, transactional and operational data stores) are held in System z data sources such as DB2, VSAM and IMS, amongst others;
  • There are existing System z and associated skills available and the organisation is prepared to continue to invest in them/expand them;
  • Mission critical situations where “Management”, “Security” and “Risk” drive application platforming policies;
  • Organisations where System z is operationally connected to major data repositories;
  • Scenarios with highly variable workload demand;
  • Where continuous access to data resources and reports is essential for people, other systems and business processes to operate effectively.
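
To make these rules of thumb a little more concrete, here is a minimal sketch in Java of how the criteria above might be recorded and scored during an architecture review. The weights and answers are invented placeholders for illustration, not a formal assessment methodology.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal sketch: the System z affinity criteria above expressed as a
    // weighted checklist. Weights and answers are illustrative assumptions.
    public class PlacementChecklist {
        public static void main(String[] args) {
            Map<String, Integer> criteria = new LinkedHashMap<>();
            criteria.put("Significant data already in System z sources (DB2, VSAM, IMS)", 3);
            criteria.put("System z skills available and investment will continue", 2);
            criteria.put("Mission critical: management, security and risk policies apply", 3);
            criteria.put("System z operationally connected to major data repositories", 2);
            criteria.put("Highly variable workload demand", 1);
            criteria.put("Continuous access to data and reports is essential", 2);

            // In practice these answers would come out of an architecture review.
            boolean[] answers = {true, true, false, true, false, true};

            int score = 0, max = 0, i = 0;
            for (Map.Entry<String, Integer> c : criteria.entrySet()) {
                max += c.getValue();
                if (answers[i++]) {
                    score += c.getValue();
                    System.out.println("[x] " + c.getKey());
                } else {
                    System.out.println("[ ] " + c.getKey());
                }
            }
            System.out.printf("System z affinity score: %d of %d%n", score, max);
        }
    }

Even a crude score of this kind has the virtue of making the reasoning behind a platform decision explicit and reviewable, rather than leaving it implicit.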

Operational situations where combining a mainframe system with open systems in a hybrid approach might be appropriate include:

  • Systems where the majority of data sources and business information is held on a variety of platforms including mainframes, Unix / Linux and Windows systems;
  • When geographic distribution significantly improves performance for users who are remote from centralised mainframe resources;
  • When a cost/benefit analysis determines that the complexity of a multi-platform environment is offset by the mixed price/performance profiles of the systems involved. In these situations it is now possible that use of mainframe offload engines could provide an alternative to traditional hybrid approaches.
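
To illustrate the cost/benefit point in the last bullet, the sketch below compares hypothetical cost-per-throughput figures for a pure mainframe deployment against a hybrid one that carries an additional integration and operations overhead. Every figure is an invented placeholder; a real analysis would use measured throughput and actual pricing.

    // Minimal sketch of the cost/benefit comparison described above.
    // All figures are invented placeholders for illustration only.
    public class CostBenefitSketch {
        public static void main(String[] args) {
            // Hypothetical annual cost and sustained transactions per second.
            double mainframeCost = 900_000, mainframeTps = 4_000;
            double hybridCost = 700_000, hybridTps = 3_500;
            double hybridComplexityOverhead = 150_000; // extra integration/ops cost

            double mainframePerTps = mainframeCost / mainframeTps;
            double hybridPerTps = (hybridCost + hybridComplexityOverhead) / hybridTps;

            System.out.printf("Mainframe: %.0f per tps%n", mainframePerTps);
            System.out.printf("Hybrid:    %.0f per tps%n", hybridPerTps);
            System.out.println(hybridPerTps < mainframePerTps
                    ? "Hybrid price/performance offsets its complexity overhead"
                    : "Complexity overhead tips the balance towards the mainframe");
        }
    }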

It should also be borne in mind that employing a hybrid delivery model can make sense in scenarios where workloads span a number of platforms but where it is important to deliver high quality of service. Such situations are becoming more common as composite applications are created reusing pre-existing functionality already in place in different applications or data stores. The mainframe is now a pretty good citizen and can play a full and often central role in an SOA environment.
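
As an illustration, a composite application might consume a balance enquiry service that happens to be implemented by a transaction on System z and exposed through a standard web interface. The sketch below uses an entirely hypothetical endpoint; the point is simply that the consuming code is ordinary service-client code, with nothing mainframe-specific about it.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal sketch: a composite application calling a service that happens
    // to be hosted on the mainframe. The URL is a hypothetical placeholder for
    // a CICS or IMS transaction exposed via a web service gateway.
    public class CompositeClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://zos.example.com:8080/accounts/12345/balance");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/json");

            // The caller neither knows nor cares that the back end runs on z/OS.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }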

But forcing a solution where it doesn’t fit applies as much to the mainframe as to other platforms. There are IT solution scenarios where it is clear that, outside of exceptional circumstances, making use of a mainframe approach does not make sense. We won’t go into detail here, as architects generally don’t have a problem dismissing the mainframe option; suffice it to say that there will be many situations in which placing workloads on distributed platforms is clearly the correct approach to take.

In all scenarios there are likely to be multiple deployment options available for workload platform selections and no system will be a perfect match for everything. The important thing is to ensure that all appropriate options are given due consideration rather than simply deploying workloads without active thought or because "that's the way we have always done this". ®
