Don't let your data centre be overtaken

Keep up with future needs

The last decade has seen massive leaps in performance and capability across all areas of IT, enabling organisations of all shapes and sizes to do business in new ways, or do it better and more effectively.

Yet many companies feel that the service delivered by IT often falls short of what is needed and expected. The result is a lack of alignment between the direction and goals of the business and what IT is able to deliver.

Driving better alignment between business and IT is usually at the top of the list of priorities for CIOs and IT directors, even if not formally documented as such.

The main requirements many companies have of their IT departments are to be responsive to requests for new services, to meet performance and reliability guarantees, to contain costs and to deliver more value.

Stick to principles

While success is highly dependent on the abilities of the people involved, architectural and technology choices in the data centre can have a real impact.

To meet the requirements of agility, availability and performance, a great deal comes down to effective and automated service provisioning coupled with comprehensive service monitoring and management.
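
As a rough sketch of that pattern in code, the Python below fulfils a service request automatically and registers it for monitoring in the same step. Every name in it (ServiceRequest, provision, the MONITORED registry) is invented for illustration rather than taken from any product.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch: a service request fulfilled automatically and
# registered for monitoring in one step. All names are illustrative.
@dataclass
class ServiceRequest:
    name: str
    cpu_cores: int
    memory_gb: int
    sla: str          # e.g. "99.9% availability"

MONITORED = {}        # service_id -> monitoring record

def provision(req: ServiceRequest) -> str:
    """Fulfil a request automatically, then register it for monitoring."""
    service_id = f"{req.name}-{datetime.now():%Y%m%d%H%M%S}"
    # ... allocate compute, storage and network here ...
    MONITORED[service_id] = {"sla": req.sla, "status": "running"}
    return service_id

sid = provision(ServiceRequest("intranet-portal", 4, 16, "99.9% availability"))
print(sid, MONITORED[sid])
```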

Automated provisioning and proactive management are the principles that underpin the concept of private cloud, so we would expect those IT departments that are better aligned with business objectives to be leading the way when it comes to adoption. This is indeed the case (figure 1).

Figure 1

It is not just well-aligned IT organisations that grasp the benefits of private cloud. If the constraints of the real world are removed, the concept resonates strongly with many IT managers.

However, when it comes to the practicalities of implementing a private cloud, many companies are still at the stage of thinking about it rather than moving forward (figure 2).

Figure 2

A minority may never be convinced but for most a more dynamic approach to IT is an appealing vision that delivers tangible benefits.

The data centre will increasingly be built on concepts of private cloud, including automation, orchestration, service-level agreements, unified infrastructure and proactive monitoring and management.

This may seem pretty obvious, but the way projects and budgets focus attention on the here and now often means that the bigger picture is put off for another day – usually many times over.

Making the case and getting started is often the hardest part, so it can be useful to consider some of the architectural options that can shape your investment in better alignment and more dynamic IT.

Theory of evolution

IT vendors may like to imply that the way to solve many of the problems facing the data centre infrastructure is to sweep out the old and replace it en masse, but the path to private cloud is definitely not through a big transformational change.

For most IT organisations, it is about incremental improvements that help make an initially small, but growing, part of the IT infrastructure more dynamic and cloud-like, while preserving and enhancing what is already in place (figure 3).

Figure 3

What to do about legacy workloads and systems? Leaving them where they are can simplify the introduction of private cloud, allowing you to focus purely on supporting the new workloads.

The result is typically a smoother introduction with less time, cost and associated risk involved in installing the new environment.

Out with the old

This, however, then leaves the older workloads without the benefits of advances in architecture and management that come with cloud-type environments. The result is continuing fragmentation of the IT infrastructure, with workload or service islands that may not function very well together.

In areas such as change management or provisioning, they may also introduce bottlenecks that impede the effectiveness of the modern environment.

The other approach is to gradually migrate the older services into the new environment. The advantage is that these systems can start to benefit from much of the investment in the shared services being integrated into the new infrastructure, with the potential for improved monitoring, management and flexibility. This is by far the most popular approach to modernising IT among survey respondents.

On the surface, it makes a lot of sense, but if taken too far it can have unintended consequences. The aim is to move workloads where it can be done quite easily in terms of compatibility and performance, and where the effort involved does not compromise the functionality of the new environment.

Rather than forcing older workloads into the new system at any cost, a pragmatic approach is needed. An assessment period should examine the feasibility of migration, and the decision can then be made either to invest in migration, so long as it does not interfere with the new environment, or to leave the workload as it is.
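
A sketch of what that assessment logic might look like, with purely illustrative criteria and an invented effort budget:

```python
# Hypothetical sketch of the assessment step: migrate a legacy workload
# only when it is compatible, performs acceptably and the effort stays
# within budget; otherwise leave it where it is. Thresholds are invented.
def migration_decision(compatible: bool, perf_ok: bool,
                       effort_days: int, effort_budget_days: int = 30) -> str:
    if compatible and perf_ok and effort_days <= effort_budget_days:
        return "migrate"
    return "leave in place"

print(migration_decision(compatible=True, perf_ok=True, effort_days=10))    # migrate
print(migration_decision(compatible=True, perf_ok=False, effort_days=10))   # leave in place
```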

Spread the virtualisation love

One of the critical underpinnings of the data centre of the future is that the different elements of the infrastructure actively participate in any required reconfiguration due to changing or migrating workloads.

To do this, servers, storage and networking need to be free of their physical or topological constraints so they can be configured on the fly by management policies or automated provisioning tools.

Virtualisation is one of the technology foundations for enabling this flexibility and many organisations have adopted it for servers. More recently, there has been a move to bring the advances in server provisioning and management to storage and networking.

While storage has seen a pickup in adoption, virtualisation of both remains some way behind that of servers (figure 4).

Figure 4

Bringing networking and storage up to the same level of virtualisation capability as the server estate may not seem to be a pressing priority. Many IT departments still have plenty of server and workload optimisation to do without adding storage and networking into the mix.

But workloads are becoming increasingly virtualised and server consolidation is fast approaching its limits in many companies. To drive further improvements in IT delivery, facilities such as automated workload migration are being more widely adopted.

For workload migration to be successful, both storage and networking need to be flexible enough to reconfigure quickly and easily when change is requested, and virtualisation can greatly help with this.
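
The sketch below illustrates the ordering this implies: the virtual network and storage are re-plumbed first, and the workload moves only once they can follow it. The function names are hypothetical stand-ins for whatever a given virtualisation platform provides.

```python
# Hypothetical sketch: an automated workload move succeeds only if the
# virtualised network and storage can be reconfigured to follow it.
# All function names are stand-ins for a real platform's operations.
def reconfigure_network(workload: str, host: str) -> None:
    print(f"attach {workload}'s virtual network to {host}")

def remap_storage(workload: str, host: str) -> None:
    print(f"present {workload}'s volumes to {host}")

def move_vm(workload: str, src: str, dst: str) -> None:
    print(f"live-migrate {workload}: {src} -> {dst}")

def migrate_workload(workload: str, src: str, dst: str) -> None:
    reconfigure_network(workload, dst)   # the network follows the workload
    remap_storage(workload, dst)         # the storage follows the workload
    move_vm(workload, src, dst)          # only then move the VM itself

migrate_workload("erp-db", "host-a", "host-b")
```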

With storage and networking typically having a lifespan of five years or more, thinking of this requirement now and building it into investment plans can help to stave off an inflexible infrastructure in the future.

Fit stacks together

Earlier in this workshop we touched on the concept of integrated stacks. This is where server, storage and networking from a single vendor, or a close partnership of vendors, are pre-integrated and designed to work together as a whole stack.

Many IT architects view this approach as a route to lock-in, preferring to choose the best vendor for each layer. This scepticism has often been justified, but recently vendors have been developing or acquiring the portfolio and capabilities to make the approach more workable.

These “best-of-need” integrated systems usually offer good-enough overall performance at much reduced cost and with greater simplicity, compared with the optimised performance but higher cost of best-of-breed components that still need to be integrated. When it comes to jump-starting a move to private cloud, an integrated stack may prove attractive.

However good an integrated solution may be, though, circumstances change and relying on a single stack can be a big gamble. For this reason, any stack procured should be open enough to integrate with third-party equipment, software and tools without too much effort and be supported by the vendor should the need arise.

A rigid and inflexible solution takes us back to the closed and proprietary systems of the early days of computing.

Fun with building blocks

It is often claimed that everything in the data centre of the future should be simplified and standardised so that there is a single set of building blocks with which to construct all the services the business needs.

However, it may be counter-productive to attempt to force this approach across the board. For starters, workloads can have widely different requirements in terms of performance, reliability, cost and security, to mention just a few.

Trying to run them all from a single set of building blocks is tough to achieve in practice. To make the infrastructure more adaptable to demand, it may be better to have a set of building blocks optimised for some of the most common classes of workload requirements.

This can be done fairly simply by having good, better and best building blocks, with a variety of ever more capable – and expensive – equipment used within a single private cloud to add choice and flexibility.

A good solution may be single-socket blades for virtual desktop workloads or simple server applications; a better platform could be dual-socket blades supporting more memory for consolidated virtual workloads, email servers and entry-level databases; and a best option may be quad-socket servers supporting large amounts of memory for demanding workloads such as online transaction processing systems or business support and analytics.
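
A placement policy along these lines can be expressed very simply, as in the sketch below, which encodes the good, better and best tiers just described; the workload class names and the default tier are illustrative assumptions.

```python
# Hypothetical sketch of "good, better, best" placement using the tiers
# described above; workload class names and the default are illustrative.
BLOCKS = {
    "good":   "single-socket blades (VDI, simple server apps)",
    "better": "dual-socket blades with more memory (consolidation, email, entry DBs)",
    "best":   "quad-socket, large-memory servers (OLTP, analytics)",
}

TIER_BY_CLASS = {
    "vdi": "good", "simple-app": "good",
    "consolidated-vm": "better", "email": "better", "entry-db": "better",
    "oltp": "best", "analytics": "best",
}

def pick_block(workload_class: str) -> str:
    tier = TIER_BY_CLASS.get(workload_class, "better")  # default: middle tier
    return BLOCKS[tier]

print(pick_block("oltp"))  # -> quad-socket, large-memory servers (OLTP, analytics)
```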

Cloud islands

Trying to build all the different requirements of a complex multi-vendor infrastructure supporting many business units and customers into a single infrastructure or private cloud stack could prove unworkable.

Instead, it may be necessary to develop optimised solutions tuned for certain classes of workload or infrastructure service, such as high-performance computing, business analytics or transactional databases.

Typically it would involve hardware, software and high-level management, resulting in mostly self-contained private cloud islands.

This is not the pure, clean vision of private cloud that many would like. But it can work well, particularly where there is a fairly clean separation between the various applications or services, such as delivering end-user-facing client application workloads compared with hosting the ERP back end or Exchange server facilities.

Floating onto public cloud

When it comes to actually delivering IT services, the default choice for most IT departments today and for the foreseeable future is to run them from their own private infrastructure.

But there is a growing realisation that building out a data centre to cover every workload and eventuality may be costing a lot of extra time, money and effort. Many companies are beginning to look at public cloud services to augment their own capabilities.

Software-as-a-service (SaaS) is bought as a complete service and always runs in the public cloud. Integration with internal systems is usually done through high-level APIs, and all the underlying management and maintenance of the SaaS application is performed by the service provider.
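
For illustration only, the sketch below pushes a record into a SaaS application through a hypothetical high-level REST API; the endpoint, token and payload are invented, and everything beneath the API remains the provider's problem.

```python
import json
import urllib.request

# Hypothetical sketch: pushing a record to a SaaS application through a
# high-level REST API. The endpoint, token and payload are all invented.
def push_contact(contact: dict) -> int:
    req = urllib.request.Request(
        "https://api.example-saas.com/v1/contacts",    # hypothetical endpoint
        data=json.dumps(contact).encode("utf-8"),
        headers={"Authorization": "Bearer <token>",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:          # provider runs everything else
        return resp.status
```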

Infrastructure-as-a-service (IaaS), on the other hand, is typically a raw mix of server, storage and networking resources that need to be managed. To complicate matters, each provider tends to have its own approach.

This may have been acceptable when only a few workloads – usually test and development – were run in IaaS environments. But the desire to move production workloads in volume from the internal private cloud to the public cloud, and back again, requires compatibility with your internal management environment.

It is possible to architect your internal cloud infrastructure to match a specific supplier’s public-facing environment to achieve consistency, but this limits the choice of equipment and service provider, with a long-term impact on flexibility and service evolution.

A more sustainable approach may be to ensure that the internal architecture is developed in a way that enables your management and orchestration tools to recognise different IaaS providers’ environments.

The tools would handle the translation between your private cloud and the various IaaS offerings, while maintaining and enforcing critical elements such as service-level agreements, security, service monitoring and billing. The IaaS environment becomes just another managed resource pool that delivers services.
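
One way to structure such a translation layer is an adapter per provider behind a common interface, as in the hypothetical sketch below; ProviderA and its method bodies are invented placeholders for real provider API calls.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the translation layer: one adapter per IaaS
# provider behind a common interface, so orchestration tools see each
# provider as just another resource pool. ProviderA is invented.
class IaaSAdapter(ABC):
    @abstractmethod
    def create_instance(self, cpu: int, memory_gb: int) -> str: ...

    @abstractmethod
    def enforce_sla(self, instance_id: str, sla: str) -> None: ...

class ProviderA(IaaSAdapter):
    def create_instance(self, cpu: int, memory_gb: int) -> str:
        return f"provider-a-vm-{cpu}x{memory_gb}"       # call provider A's API here

    def enforce_sla(self, instance_id: str, sla: str) -> None:
        print(f"provider A: tagging {instance_id} with SLA {sla}")

def provision_anywhere(adapter: IaaSAdapter, cpu: int, memory_gb: int, sla: str) -> str:
    vm = adapter.create_instance(cpu, memory_gb)
    adapter.enforce_sla(vm, sla)    # SLAs, security and billing stay enforced
    return vm

print(provision_anywhere(ProviderA(), cpu=2, memory_gb=8, sla="99.9%"))
```

The orchestration layer then deals only in IaaSAdapter, so adding a second provider means writing another adapter rather than re-architecting the cloud.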

Orchestration and automation

The data centre of the future is becoming less dependent on physical systems and individual applications and more of a service delivery hub that pulls together a variety of different services, both internal and external.

Underpinning this is a shift to thinking about IT as end-to-end services, and this means bringing often under-invested and highly fragmented systems and service management capabilities up to speed. It doesn't matter how advanced the infrastructure or how shiny the servers are if things do not gel together.

If there is one thing that will make today’s data centre futureproof, it is recognising that management, and in particular automation and orchestration, is the glue that holds it all together.
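
One practical consequence is to express policies and service levels as data that orchestration tools evaluate, rather than wiring them into any one system. A minimal, entirely hypothetical sketch:

```python
# Hypothetical sketch: a service-level policy expressed as data, so it can
# outlive any particular generation of servers or software. Values invented.
POLICY = {
    "service": "order-processing",
    "availability_target": 0.999,     # 99.9% availability
    "max_latency_ms": 200,
    "action_on_breach": "scale_out",
}

def evaluate(availability: float, latency_ms: float):
    """Return the remedial action if the policy is breached, else None."""
    if (availability < POLICY["availability_target"]
            or latency_ms > POLICY["max_latency_ms"]):
        return POLICY["action_on_breach"]
    return None

print(evaluate(0.998, 150))  # -> scale_out (availability below target)
```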

In 10 years’ time the servers and software of today will probably be unrecognisable. But the management policies and service-level agreements may be very familiar to those who put them in place.

  • Andrew Buss is service director at Freeform Dynamics
