How to put the data centre back together again

Sharing brings harmony

The world of business is becoming faster, more competitive and ever more dependent on IT. Transactions and interactions that used to be handled manually between suppliers, the channel and customers are now largely electronic.

Most of the day-to-day operations and communications of the business rely on IT services and any failures have the potential to bring activities to a grinding halt.

The old saying that “if there is one constant in IT, it is change” is truer today than ever before. Yet most IT departments struggle to deliver when the business asks for something new.

This is likely to become a lot more acute in the future as businesses come under more competitive pressure, while a raft of external services such as software-as-a-service and public cloud puts the performance of IT in the spotlight.

Fight the fragmentation

For the business to thrive, the data centre of the future will need to be dynamic and able to reconfigure much more rapidly and seamlessly. This will depend on all parts of the data centre – from the applications and servers through to storage and networking – being part of an integrated whole and pulling their weight when it comes to change.

Yet in many companies that we speak to, these critical infrastructure elements are usually procured, managed and run separately, resulting in a fragmented infrastructure and significant, often unrecognised, burdens.

If we look at how the IT organisations that are most responsive to change organise things, we see that the tendency is towards tighter integration (figure 1).

Figure 1

Turning things around so that the data centre infrastructure is more converged and better integrated is often easier said than done. However, by recognising the problem and tackling it in stages, many benefits can be realised, and fairly early on too.

Most of the companies that we survey define what IT does by thinking about implementing individual systems and applications.

Both IT and the business think in terms of specific servers or software applications, such as Exchange Server, rather than a generic service, such as email, being delivered.

Virtualisation rules

This worked reasonably well when systems were pretty much static and each could be individually architected and optimised for a task. But time and technology have moved on. Virtualisation has taken hold, first in servers and now increasingly in both storage and networking too.

What was once stable and predictable has become a lot more dynamic, complex and variable. Doing things in the usual way may work for a while, but as more use is made of virtualisation, the problems can begin to mount up rapidly. The situation may end up just as challenging as before, but with a different set of problems.

So while virtualisation is an important enabler, it is not in itself enough to create an integrated environment.

Think services, not systems

One of the best places to start when it comes to integrating and improving the IT infrastructure is to turn from thinking about individual systems to instead thinking about the services that the business requires.

Our research shows that the IT organisations that take this approach generally become better aligned with the needs of the business, are more responsive to changing requirements and tend to have higher levels of end-user and management satisfaction.

Adopting a service-led mindset can mean some fundamental changes and may appear daunting at first. But often it is just a case of having the confidence to take the first step. Start small and build the capability over time.

Many companies don’t even have a basic catalogue of services. As a result they can’t see what is involved in delivering a service and are not able to troubleshoot if something goes wrong.

Just documenting important services, even if it is in a basic tool such as Excel or Visio, can deliver tangible benefits in understanding and optimising day-to-day operations.
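Even a spreadsheet-style catalogue can be generated programmatically. As an illustrative sketch (the service names, fields and dependencies below are invented, not drawn from any particular tool), a few lines of Python can write a CSV capturing which systems underpin each service:

```python
import csv
import io

# Hypothetical catalogue entries: service name, owning team,
# and the systems the service depends on (all names illustrative).
services = [
    {"service": "Email", "owner": "Messaging team",
     "depends_on": "Exchange Server; Active Directory; SAN volume EX01"},
    {"service": "CRM", "owner": "Applications team",
     "depends_on": "CRM app server; SQL cluster; load balancer"},
]

def write_catalogue(entries):
    """Render the catalogue as CSV text, one row per service."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["service", "owner", "depends_on"])
    writer.writeheader()
    writer.writerows(entries)
    return buf.getvalue()

print(write_catalogue(services))
```

The point is not the tooling but the habit: once dependencies are written down, troubleshooting starts from the service, not the box.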

As service delivery becomes more familiar and natural, it makes sense to manage it more formally. Dedicated tools can help here, and the choice of these has grown in recent years. There are integrated tools that come with servers and systems, as well as third-party offerings that may require more investment and integration but also ultimately provide more capability.

One of the big benefits of a service-centric approach is that when the business requests something new, the discussion is about the expected outcome, not the delivery mechanism.

The focus should be on how many users are supported, what response times are acceptable and how much downtime is accepted rather than which processor or how much memory is in a particular box.
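Framing downtime as a percentage makes that conversation concrete. As a quick worked example (the availability targets here are illustrative, not recommendations), an availability figure converts directly into minutes of permitted downtime per month:

```python
def allowed_downtime_minutes(availability_pct, days=30):
    """Minutes of downtime permitted per period at a given availability."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

# "Three nines" versus "two nines" over a 30-day month.
print(round(allowed_downtime_minutes(99.9), 1))  # 43.2 minutes
print(round(allowed_downtime_minutes(99.0), 1))  # 432.0 minutes
```

Put this way, the business can weigh what an extra "nine" is worth, without anyone mentioning processors or memory.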

This can help to break the link that often develops between business budget holders and the operation of the various systems by the IT department. These links are greatly responsible for the emergence of silos in the data centre and get in the way of being able to make the case for a shared, dynamic infrastructure.

The talking cure

One of the major barriers to a more dynamic and responsive IT infrastructure, and ultimately visions like private cloud, is that the applications, servers, storage and networking are usually bought, implemented and operated in silos that are independent of each other.

This may work well for each particular silo from a technology point of view, but sooner or later the individual elements need to be unified so that they fit and work well together towards the common goal of supporting the business.

Some companies can afford to do this all at once as part of a transformation initiative, but the vast majority can’t. Instead they need to work on incrementally improving what they already have in place – across the teams, the tools and the infrastructure (figure 2).

Figure 2

Although it might be tempting to think that buying some new technology will help with the task of integration, often a better place to start is to get the different teams responsible for servers, storage and networking to interact more closely with each other.

It may be difficult, and politically charged, to just merge teams, given their historical responsibilities and skills. You could train them to become multi-skilled, but often just giving different team members the opportunity to talk is enough.

A simple but effective method may be to seat the teams in closer proximity so that they bump into each other more often.

Join the dots

Bringing teams closer together on its own, however, can only get you so far. The big issue in many companies is the failure to invest in a joined-up management approach.

Most companies rely on the default tools that come with the systems that they buy, and never invest to get them working together.

The result is typically a fragmented patchwork of systems and tools that make change and day-to-day management difficult. The main issues are summarised in the chart below (figure 3).

Figure 3

We saw earlier in figure 1 that about four out of five companies use separate tools to manage their servers, storage and networking. You would think that the overhead of fragmentation would have IT crying out for better management tools.

The reality, given the pressures on budgets and time, is that few IT departments have the luxury of standing back to take stock; they are too busy juggling everything else that is going on.

Funding is spent on alleviating day-to-day operational burdens, rather than invested in integrated management that would help eliminate the root cause (figure 4).

Figure 4

In deciding how to go about achieving integrated management, there are two main approaches. The first is to have a consolidated set of tools, using a single main management suite whenever possible. Supplementary tools are used only where they bring additional benefits.

The second approach involves regularly investing in keeping the management tools up to date and integrated.

Making the case for investing in joined-up management is critical if infrastructure barriers are to be broken down. Having to look for finance across multiple business units can be difficult when projects are funded directly and senior managers do not appreciate the link between integrated management and the ability of IT to deliver better services.

Neat stacks

So far we have looked at service delivery approaches, operations team integration and joined-up management, but not yet touched on the stuff that actually does the work.

In this age of virtualisation it may be tempting to think that physical hardware is increasingly irrelevant, but the reality is that getting the hardware right, together with virtualisation, is the approach that brings most value.

The hardware in the data centre is a complex environment. Servers, storage and networking groups have their own sets of vendors, roadmaps and purchasing cycles. Each group has to operate within limiting boundaries as the elements are not necessarily co-ordinated or complementary.

As teams and tools become more integrated, product choice and selection may in time become more aligned to the bigger picture.

But the risk is that the vendors playing in the different segments have diverging priorities and agendas, with the result that the products may be patently unsuitable, or may require significant investment to get them to work with the rest of the infrastructure.

An emerging trend is the pre-integrated stack: server, storage and networking components bundled together, with management software that covers them all.

Oracle, Microsoft, Dell, HP, IBM, Cisco and others are all pushing their capabilities here.

This helps to ensure that the various parts play nicely together and that features required for certain activities are all present and correct throughout the stack.

Where there is an existing, overarching management framework, the integrated tools of the stack should be able to tie into existing management tools.

A big advantage of the integrated stack is in time to value and simplified operations and support. The downside is that it is often limited to specific products from a single vendor, or a small pool of vendors.

For these integrated stacks to be really valuable, they also need to have open interfaces at each level, so that third-party equipment can plug into the stack and participate as a good neighbour while achieving most, if not all, of the benefits of the integrated stack.
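One way to picture an "open interface at each level" is a small adapter layer: the stack defines the contract, and third-party kit implements it. A minimal Python sketch, where the class and method names are invented purely for illustration:

```python
from abc import ABC, abstractmethod

class StorageInterface(ABC):
    """Contract the stack expects any storage element to honour."""
    @abstractmethod
    def provision_volume(self, size_gb):
        ...

class ThirdPartyArray(StorageInterface):
    """Hypothetical third-party array plugging into the stack."""
    def provision_volume(self, size_gb):
        return f"volume of {size_gb} GB provisioned"

def orchestrate(storage, size_gb):
    # The stack's management layer talks only to the interface,
    # so any conforming device can participate as a good neighbour.
    return storage.provision_volume(size_gb)

print(orchestrate(ThirdPartyArray(), 100))
```

The same idea applies at the server and networking layers: as long as the interface is honoured, the management layer neither knows nor cares whose badge is on the box.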

A certain amount of integration pain is likely to be involved to get it all to hang together properly, but that is the price of choice and flexibility.

Baby steps

Everything from hardware integration through management to operations and service delivery helps to break down the barriers that exist in most data centres so that IT can be more responsive and dynamic in supporting the business.

The main thing is to realise that every little helps. Prioritising and tackling some of the problems can lead to very real benefits, often without having to spend a huge amount.

Natural upgrade cycles and new projects are an ideal opportunity to raise the bigger picture and make the case for additional investment in shared services such as management and orchestration. Product evolution can help, much as a rising tide raises all boats.

More features, but crucially also wider integration, are being included in mainstream systems management products. This can help reduce the cost associated with buying and integrating multiple third-party products.

Investing the time and money in getting trained up and familiar with the new capabilities may be the best money you can spend in the next year. ®

  • Andrew Buss is service director at Freeform Dynamics
