
The mainframe comes of age ... again?

Approaching the platform


Economic pressure has led to more finance directors and CFOs scrutinising expenditure to a painstaking level of detail. The aim is to ensure that IT can deliver what the business needs at the lowest cost while still meeting the never-diminishing expectations of the board and shareholders.

As a result, in-depth examinations into service availability, security, IT performance and cost control have in many cases become routine functions. But few organisations have good models for exactly how each component - in a hugely complex infrastructure - is linked to individual business services. This makes it difficult to accurately and effectively evaluate the economics and return of IT platforms.

In the past, many assessments of IT were aimed at evaluating the total cost of ownership of systems and solutions. In reality, these often degenerated into simplistic analyses of easily measured and directly attributable acquisition expenses and running costs.

It is only recently that attention has turned to some of the major contributors to operational expenditure, especially those associated with electricity consumption, cooling, building / facilities costs and the manpower required to keep systems running. But this creates challenges.

Grainy picture

In environments built on industry standard components, many of these operational costs are accounted for in big buckets, and it is very difficult to attribute them to each system or IT service with any degree of certainty, let alone granularity.

As a consequence, attempts to charge resource usage back against the business services delivered are extremely complex, often expensive to perform, and likely to lead to highly political discussions at management / IT meetings. The result is that IT and the business often compromise and adopt an average charge per user, which can bear little resemblance to reality as different types of users have wildly divergent usage patterns.
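
To make that gap concrete, here is a minimal sketch, using entirely made-up figures, of how a flat per-user average charge compares with a usage-weighted split of the same monthly cost pool. The user categories, consumption numbers and cost pool are illustrative assumptions rather than anything measured.

    # Minimal sketch (hypothetical figures): a flat per-user charge versus a
    # usage-weighted chargeback of the same monthly cost pool.

    monthly_cost_pool = 120_000.0  # total operational cost to recover (illustrative)

    # CPU-hours consumed by each type of user in the month (illustrative patterns)
    usage = {
        "casual email user": 2,
        "analyst": 40,
        "batch reporting": 300,
    }

    flat_charge = monthly_cost_pool / len(usage)  # the "average charge per user" model
    total_usage = sum(usage.values())

    for user, hours in usage.items():
        usage_based = monthly_cost_pool * hours / total_usage
        print(f"{user:>20}: flat £{flat_charge:>10,.2f} vs usage-based £{usage_based:>10,.2f}")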

The pressure to model cost of service against usage is certain to increase as organisations seek to make the most of their IT resources by creating highly responsive resource pools (“private cloud” or “dynamic infrastructure”) to minimise IT costs while maximising business value.

Many vendors are looking to add capabilities to measure resource usage more granularly. This is something at which certain platforms, most notably the mainframe, have always excelled. How organisations react as they get a better handle on cost metrics, especially when considering highly centralised and consolidated yet flexible infrastructure, has yet to play out.

The mainframe is likely to do very well when its power / performance and scalable management are compared to industry standard systems. This is partly a consequence of the platform’s architecture and design, but also down to the fact that mainframes typically run consistently at utilisation levels higher than many other platforms can reach for any sustained period of time.
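
As a rough illustration of why sustained utilisation matters to that comparison, the sketch below divides an assumed annual platform cost by the work actually delivered at different utilisation levels. Every figure is an assumption for illustration only, not a vendor benchmark.

    # Rough illustration (all figures assumed): the effective cost per unit of
    # useful work falls as sustained utilisation rises, for a fixed platform cost.

    annual_platform_cost = 1_000_000.0  # hardware, power, cooling and staff (illustrative)
    peak_capacity_units = 10_000.0      # units of work deliverable per year at 100% utilisation

    for utilisation in (0.15, 0.40, 0.85):  # assumed distributed vs consolidated levels
        delivered = peak_capacity_units * utilisation
        cost_per_unit = annual_platform_cost / delivered
        print(f"{utilisation:>4.0%} sustained utilisation -> £{cost_per_unit:,.2f} per unit of work")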

C-level managers have to make difficult choices when beset with an array of options, and the pressure to justify decisions in terms of monetary factors can be almost overwhelming. With “total cost of ownership” visibility slowly increasing, many platform selection decisions are entering a new phase.

When looking at centralised and consolidated infrastructures, the question now is whether the mainframe deserves greater consideration than it currently receives, both where the organisation already has such systems in place and, perhaps, as a new investment.

It is clear that getting the skills and tools in place to implement dynamic IT will be a challenge whichever route is taken and, contrary to common perception, may even justify investment in mainframe technology where the organisation does not currently use it.

So, with current trends, is that 40 year-old platform coming of age again? ®
