The mainframe comes of age ... again?

Approaching the platform

Economic pressure has led finance directors and CFOs to scrutinise expenditure in painstaking detail. The aim is to ensure that IT delivers what the business needs at the lowest cost, while still meeting the never-diminishing expectations of the board and shareholders.

As a result, in-depth examinations of service availability, security, IT performance and cost control have in many cases become routine functions. But few organisations have good models of exactly how each component in a hugely complex infrastructure is linked to individual business services, which makes it difficult to evaluate the economics and return of IT platforms accurately and effectively.

In the past, many assessments of IT aimed to evaluate the total cost of ownership of systems and solutions. In reality, these often degenerated into simplistic analyses of easily measured and directly attributable acquisition expenses and running costs.

It is only recently that attention has turned to some of the major contributors to operational expenditure, especially those associated with electricity consumption, cooling, building / facilities costs and the manpower required to keep systems running. But this creates challenges.

Grainy picture

In environments built on industry standard components, many of these operational costs are lumped into big buckets, and it is very difficult to attribute them to each system or IT service with any degree of certainty, let alone granularity.

As a consequence, attempts to use forms of resource chargeback against the business services delivered are extremely complex, often expensive to perform, and likely to lead to highly political discussions at management / IT meetings. The result is that IT and the business often compromise and adopt an average charge per user that can bear little resemblance to reality, since different types of users have wildly divergent usage patterns.
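To see how far a flat per-user average can drift from reality, consider a minimal sketch. All figures and group names here are hypothetical, invented purely to illustrate the divergence the article describes:

```python
# Hypothetical sketch: flat average charge per user vs usage-based
# chargeback for the same cost pool. All figures are invented.

# Monthly resource units consumed by each user group (assumed numbers)
usage = {"analysts": 400, "batch_jobs": 2500, "casual_users": 100}

# Head count per group (assumed numbers)
users_per_group = {"analysts": 20, "batch_jobs": 5, "casual_users": 75}

cost_pool = 30000.0  # total monthly operational cost to recover (assumed)

# Flat model: every user pays the same per-head average
total_users = sum(users_per_group.values())
flat_charge_per_user = cost_pool / total_users

# Usage model: each group pays in proportion to measured consumption
total_usage = sum(usage.values())
usage_charge = {g: cost_pool * u / total_usage for g, u in usage.items()}

for group in usage:
    flat_total = flat_charge_per_user * users_per_group[group]
    print(f"{group}: flat £{flat_total:.0f} vs usage-based £{usage_charge[group]:.0f}")
```

On these assumed numbers, a small group of heavy batch users would be charged a fraction of what it actually consumes under the flat model, with the shortfall silently picked up by the many light users — exactly the kind of mismatch that fuels political arguments at chargeback time.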

The pressure to model cost of service against usage is certain to increase as organisations seek to make the most of their IT resources by creating highly responsive resource pools (“private cloud” or “dynamic infrastructure”) to minimise IT costs while maximising business value.

Many vendors are looking to add capabilities to measure resource usage more granularly. This is something at which certain platforms, most notably the mainframe, have always excelled. How organisations react as they do get a better handle on cost metrics, especially when considering highly centralised and consolidated yet flexible infrastructure, has yet to play out.

The mainframe is likely to do very well when its power / performance and scalable management are compared to industry standard systems. This is partly a consequence of the platform’s architecture and design, but also down to the fact that mainframes typically run consistently at utilisation levels higher than many other platforms can reach for any sustained period of time.
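The utilisation argument above is simple arithmetic: fixed platform costs are spread over the work actually done, so the cost per unit of useful work falls as sustained utilisation rises. A brief sketch, using entirely hypothetical cost and capacity figures:

```python
# Hedged illustration of the utilisation argument: the same notional
# monthly cost and capacity, at different sustained utilisation levels.
# All figures are hypothetical, not vendor pricing.

def cost_per_unit(monthly_cost, capacity_units, utilisation):
    """Cost per unit of useful work at a given average utilisation (0-1)."""
    return monthly_cost / (capacity_units * utilisation)

for util in (0.15, 0.50, 0.90):
    unit_cost = cost_per_unit(100_000, 1_000, util)
    print(f"{util:.0%} sustained utilisation -> £{unit_cost:.2f} per unit of work")
```

A platform idling at 15 per cent utilisation costs six times as much per unit of work as the same platform running at 90 per cent — which is why the mainframe's ability to run consistently hot matters to the comparison.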

C-level managers have to make difficult choices when beset with an array of options, and the pressure to justify decisions in terms of monetary factors can be almost overwhelming. With “total cost of ownership” visibility slowly increasing, many platform selection decisions are entering a new phase.

When looking at centralised and consolidated infrastructures, the question now is whether the mainframe is worthy of greater consideration than it currently receives, both where the organisation already has such systems in place and, perhaps, as a new investment.

It is clear that getting the skills and tools in place to implement dynamic IT will be a challenge whichever route is taken, and, contrary to common perception, this may even justify investment in mainframe technology where the organisation does not currently use it.

So, with current trends, is that 40-year-old platform coming of age again? ®

