Exploiting the mainframe for new workload requirements

Fully exploiting that valuable asset

IT architects and CIOs have a number of factors to weigh up when selecting where to run workloads and how to design systems that will operate efficiently over extended periods of time.

Chief amongst these are the nature of the workloads themselves, the operating systems on which they are supported and the middleware they require in order to function. These may in turn dictate the hardware platforms on which they can run. Ultimately, everything should relate directly to business requirements.

When looking at hardware platform selection, the choices that are front of mind for new applications are typically x86 or some form of RISC-based architecture. If a mainframe is in place, that might be considered, but it is often assumed that it is there primarily to run traditional applications that are native to that platform. However, with mainframes such as the IBM System z now capable of supporting Windows and Linux, and even the native environment now supporting modern techniques, standards and programming environments, it makes sense to include the 'big iron' option when looking at new application requirements too.

In order to evaluate the best options when placing workloads, it is also essential to consider where data currently resides, by system and by geography, along with the interfaces available to support systems interoperability, and to look closely at what workload management tools, if any, are available to handle operations across multiple platforms. The role of standards and the openness of platforms, especially around data integration and access, is becoming important in ensuring that workloads can be moved effectively around the broader infrastructure.

So, if you have a System z environment, which after all represents a significant investment and a high value asset, how do you assess whether it makes sense to drive additional returns by deploying new workloads on it?

One thing to bear in mind is that whilst people like the idea of making logical decisions based on objective criteria, it is fair to say that many choices, in all areas of business (not just IT), are made using less than complete sets of considerations. In addition, people being what they are, some of the justification may be made using 'convenient' selective evidence or judgements and weightings that may be more than a little subjective. For the purposes of this discussion, however, let's assume you want to make the right decisions for the right reasons. With this in mind, what is required is an application architecture that delivers the information users require, whenever and wherever they need it, without being overly complex to manage or difficult to secure. A key question here is whether a given workload is best suited to run on a mainframe, on a hybrid mainframe / open systems platform, or purely in an open systems environment.

This is no easy decision, especially as the mainframe itself, in the shape of the IBM System z, now has the ability to host not only traditional z/OS workloads but also those that run on Linux and Unix platforms. It will also, in the near future, support Microsoft Windows environments through the use of a variety of offload engines. However, there are some rules of thumb that can help.

For example, situations that point towards a System z approach include:

  • Where significant sources of data (e.g. data warehouses, transactional systems, operational data stores, etc.) are held in System z data sources including DB2, VSAM and IMS, amongst others;
  • There are existing System z and associated skills available and the organisation is prepared to continue to invest in them/expand them;
  • Mission critical situations where “Management”, “Security” and “Risk” drive application platforming policies;
  • Organisations where System z is operationally connected to major data repositories;
  • Scenarios with highly variable workload demand;
  • Where continuous access to data resources and reports is essential for people, other systems and business processes to operate effectively.

Operational situations where combining a mainframe system with open systems in a hybrid approach might be appropriate include:

  • Systems where the majority of data sources and business information is held on a variety of platforms including mainframes, Unix / Linux and Windows systems;
  • When geographic distribution significantly improves performance for users who are remote from centralised mainframe resources;
  • When a cost/benefit analysis determines that the complexity of a multi-platform environment is offset by the mixed price/performance profiles of the systems involved. In these situations it is now possible that use of mainframe offload engines could provide an alternative to traditional hybrid approaches.
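
To illustrate the kind of weighted, criteria-based assessment referred to earlier, the short Python sketch below scores each deployment option against a handful of the criteria listed above. The platform names, criteria, weights and scores are all hypothetical placeholders used purely for illustration; in practice they would be replaced with the results of your own analysis.

    # A rough sketch of a weighted-criteria scoring exercise for workload
    # placement. All criteria, weights and scores are hypothetical.

    PLATFORMS = [
        "System z",
        "Hybrid (System z + open systems)",
        "Distributed / open systems",
    ]

    # Hypothetical weights reflecting how much each criterion matters to the
    # business (chosen here to sum to 1.0 for readability).
    WEIGHTS = {
        "data already on System z": 0.25,
        "existing mainframe skills": 0.15,
        "security / risk policy fit": 0.20,
        "workload variability": 0.15,
        "geographic distribution of users": 0.10,
        "price / performance": 0.15,
    }

    # Hypothetical fit scores per platform (0 = poor fit, 5 = strong fit).
    SCORES = {
        "System z": {
            "data already on System z": 5,
            "existing mainframe skills": 4,
            "security / risk policy fit": 5,
            "workload variability": 4,
            "geographic distribution of users": 2,
            "price / performance": 3,
        },
        "Hybrid (System z + open systems)": {
            "data already on System z": 4,
            "existing mainframe skills": 3,
            "security / risk policy fit": 4,
            "workload variability": 4,
            "geographic distribution of users": 4,
            "price / performance": 4,
        },
        "Distributed / open systems": {
            "data already on System z": 1,
            "existing mainframe skills": 2,
            "security / risk policy fit": 3,
            "workload variability": 3,
            "geographic distribution of users": 5,
            "price / performance": 4,
        },
    }

    def weighted_score(platform):
        """Sum of weight x score across all criteria for one platform."""
        return sum(WEIGHTS[c] * SCORES[platform][c] for c in WEIGHTS)

    # Rank the candidate platforms, highest weighted score first.
    for platform in sorted(PLATFORMS, key=weighted_score, reverse=True):
        print("{:35s} {:.2f}".format(platform, weighted_score(platform)))

Such a table does not make the decision for you, but it does force the criteria and weightings out into the open where they can be examined and challenged, rather than leaving them implicit.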

It should also be borne in mind that employing a hybrid delivery model can make sense in scenarios where workloads span a number of platforms but where it is important to deliver high quality of service. Such situations are becoming more common as composite applications are created reusing pre-existing functionality already in place in different applications or data stores. The mainframe is now a pretty good citizen and can play a full and often central role in an SOA environment.

But forcing a solution where it doesn't fit applies equally to the mainframe as to other platforms. There are IT solution scenarios where it is clear that, outside of exceptional circumstances, making use of a mainframe approach does not make sense. We won't go into detail here, as architects generally don't have a problem dismissing the mainframe option; suffice it to say that there will be many situations in which placing workloads on distributed platforms is clearly the correct approach to take.

In all scenarios there are likely to be multiple deployment options available for workload platform selections and no system will be a perfect match for everything. The important thing is to ensure that all appropriate options are given due consideration rather than simply deploying workloads without active thought or because "that's the way we have always done this". ®
