The mainframe comes of age ... again?

Approaching the platform

Economic pressure has led to more finance directors and CFOs scrutinising expenditure to a painstaking level of detail. The aim is to ensure that IT can deliver what the business needs at the lowest cost while still meeting the never-diminishing expectations of the board and shareholders.

As a result, in-depth examinations into service availability, security, IT performance and cost control have in many cases become routine functions. But few organisations have good models for exactly how each component - in a hugely complex infrastructure - is linked to individual business services. This makes it difficult to accurately and effectively evaluate the economics and return of IT platforms.

In the past, many assessments of IT were aimed at evaluating the total cost of ownership of systems and solutions. In reality, these often degenerated into simplistic analyses of the easily measured, directly attributable acquisition expenses and running costs.

It is only recently that attention has turned to some of the major contributors to operational expenditure, especially those associated with electricity consumption, cooling, building / facilities costs and the manpower required to keep systems running. But this creates challenges.

Grainy picture

In environments built on industry standard components, many of these operational costs are allocated in big buckets, and it is very difficult to allocate them to each system or IT service with any degree of certainty, let alone granularity.

As a consequence, attempts to use forms of resource chargeback against business services delivered are extremely complex, often expensive to perform, and likely to lead to highly political discussions at management / IT meetings. The result is that IT and the business often compromise and adopt an average charge per user that can bear little resemblance to reality as different types of users have wildly divergent usage patterns.
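A toy illustration of the problem described above, with entirely invented user types, usage figures and costs, shows how far a flat average charge per user can drift from a usage-based one when consumption patterns diverge:

```python
# Illustrative sketch (all figures invented): flat per-user chargeback
# versus usage-based chargeback when usage patterns diverge wildly.

# Hypothetical monthly resource consumption per user, in arbitrary units.
usage = {"analyst": 500, "clerk": 20, "batch_ops": 2000, "occasional": 5}

total_cost = 10_000.0  # total monthly cost of the shared service
total_units = sum(usage.values())

# The political compromise: everyone pays the same average charge.
flat_charge = total_cost / len(usage)

# The usage-based alternative: cost allocated in proportion to consumption.
usage_charge = {user: total_cost * units / total_units
                for user, units in usage.items()}

for user in usage:
    print(f"{user:10s} flat: {flat_charge:8.2f}  usage-based: {usage_charge[user]:8.2f}")
```

With these made-up numbers the heavy batch user pays more than three times the flat rate under usage-based allocation, while the occasional user pays a small fraction of it, which is exactly the "little resemblance to reality" the article describes.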

The pressure to model cost of service against usage is certain to increase as organisations seek to make the most of their IT resources by creating highly responsive resource pools (“private cloud” or “dynamic infrastructure”) to minimise IT costs while maximising business value.

Many vendors are looking to add capabilities to measure resource usage more granularly. This is something at which certain platforms, most notably the mainframe, have always excelled. How organisations react as they do get a better handle on cost metrics, especially when considering highly centralised and consolidated yet flexible infrastructure, has yet to play out.

The mainframe is likely to do very well when its power / performance and scalable management are compared to industry standard systems. This is partly a consequence of the platform’s architecture and design, but also down to the fact that mainframes typically run consistently at utilisation levels higher than many other platforms can reach for any sustained period of time.
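The utilisation argument can be reduced to simple arithmetic: a largely fixed platform cost is spread over the work actually delivered, so sustained high utilisation can make a more expensive system cheaper per unit of work. The figures below are invented purely for illustration, not vendor data:

```python
# Illustrative sketch (invented figures): fixed platform cost spread over
# the work actually done. Higher sustained utilisation can make a pricier
# consolidated system cheaper per unit of useful work than lightly loaded
# distributed boxes.

def cost_per_unit(monthly_cost, peak_capacity_units, utilisation):
    """Cost per unit of work actually delivered in a month."""
    delivered = peak_capacity_units * utilisation
    return monthly_cost / delivered

# Hypothetical numbers: a dearer system run hot vs cheaper kit run cold.
consolidated = cost_per_unit(monthly_cost=100_000,
                             peak_capacity_units=1_000, utilisation=0.90)
distributed = cost_per_unit(monthly_cost=60_000,
                            peak_capacity_units=1_000, utilisation=0.20)

print(f"consolidated, 90% utilised: {consolidated:.2f} per unit of work")
print(f"distributed,  20% utilised: {distributed:.2f} per unit of work")
```

Under these assumptions the consolidated system delivers work at roughly a third of the distributed cost per unit, despite a higher headline price, which is the effect the sustained-utilisation comparison points to.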

C-level managers have to make difficult choices when beset with an array of options, and the pressure to justify decisions in terms of monetary factors can be almost overwhelming. With “total cost of ownership” visibility slowly increasing, many platform selection decisions are entering a new phase.

When looking at centralised and consolidated infrastructures, the question now is whether the mainframe deserves greater consideration than it currently receives, both where the organisation already has such systems in place and, perhaps, as a new investment.

It is clear that getting the skills and tools in place to implement dynamic IT will be a challenge whichever route is taken and, contrary to common perception, may even justify investment in mainframe technology for organisations that do not currently use it.

So, with current trends, is that 40-year-old platform coming of age again? ®

