Prepare to greet the data centre of 2023

The road starts here

To some, the idea of enterprise data centres still being around in ten years’ time is anathema. By then, they assert, all enterprise IT will be running in public clouds. The only people who can’t see this are insecure box-huggers frightened for their jobs and IT dinosaurs with no imagination.

We, however, are going to assume that in 2023 the world’s IT will not be under the total control of cloud providers such as Amazon, Microsoft, Google and Salesforce.com.

That is not to say we won’t be using more cloud services. We undoubtedly will, but infrastructure on the premises will probably still be IT’s centre of gravity in the majority of medium and large organisations.

So does that mean data centres will remain as they are?

Almost certainly not, if only because natural refresh cycles will bring with them new technologies and ways of working, even if you have no proactive plans for change.

Forward planning

It therefore makes sense to think ahead and make sure that changes are introduced in a coordinated and optimum manner. If you are dubious about the value of doing this, just look back over the last ten years.

Most IT departments have seen x86 server virtualisation enter their worlds, but the ones that planned and managed its adoption in a considered manner are in a much better state than those who just let things happen.

If you took a proactive approach, you are probably enjoying the advantages of a more coherent, manageable and cost-effective environment.

If you adopted server virtualisation in an opportunistic or ad hoc way, there’s a good chance you are battling with virtual server sprawl and networking and storage bottlenecks, while still being a slave to a lot of tedious and error-prone manual administration that others have eradicated.
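
To make "eradicating manual administration" a little more concrete, here is a minimal Python sketch of the kind of sprawl check a proactive team might automate. It assumes a hypothetical inventory export (vm_inventory.csv, with avg_cpu_pct and last_login_days columns); real virtualisation platforms have their own reporting tools and export formats, so treat the names here as placeholders.

```python
#!/usr/bin/env python3
"""Flag potentially idle VMs from an inventory export.

A minimal sketch: assumes a hypothetical CSV export with
'name', 'owner', 'avg_cpu_pct' and 'last_login_days' columns.
Adjust to whatever your virtualisation platform actually emits.
"""
import csv

IDLE_CPU_THRESHOLD = 5.0   # average CPU below this looks idle
STALE_LOGIN_DAYS = 90      # no login for this long looks abandoned

def find_sprawl_candidates(path="vm_inventory.csv"):
    candidates = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cpu = float(row["avg_cpu_pct"])
            idle_days = int(row["last_login_days"])
            if cpu < IDLE_CPU_THRESHOLD and idle_days > STALE_LOGIN_DAYS:
                candidates.append((row["name"], row["owner"]))
    return candidates

if __name__ == "__main__":
    for name, owner in find_sprawl_candidates():
        print(f"Review VM '{name}' (owner: {owner}) for reclamation")
```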

Looking across the data centre computing world, you will find many other examples of new technology that has had little or no impact, or even a negative impact, because of uncontrolled adoption: unmanaged SharePoint installations leading to document and information sprawl, tactical data warehouse initiatives creating yet more disjoints and integration headaches, ill-planned unified communications implementations running into quality of service issues, and so on.

Get it right, though, and it is possible to move the game forward, with every round of change and investment driving improvements in efficiency, service levels and flexibility and, not least, making life easier for IT managers and professionals.

A forward-looking approach to investments is also more likely to lay a firm foundation for the future. Those who took a structured and managed approach to server virtualisation, for example, now find themselves in a good position to start looking at advanced workload management and orchestration (aka private cloud).

It is not just infrastructure and management technology that are evolving; the needs and expectations of users and business stakeholders are too. It may be a bit of a cliché but it really is true that IT departments are generally being asked to deliver more for less each year, and at a faster pace.

It is therefore worth taking a minute to consider some of the specifics here.

Great expectations

Business expectations of data centre computing break down into three main areas:

  • Efficient and effective use of resources (assets, people, external services)
  • Quick response to changing needs (new requirements, additional capacity)
  • Delivery of a good user experience (systems performance, availability)

A recent Reg reader study suggests that IT departments are often perceived to fall short in all three of these areas.

And things will not get any easier. Continuous and ever faster change at a business level comes out strongly when we interview senior business managers, as does the degree to which business processes are becoming ever more dependent on IT, thanks to the automation of business operations and direct interaction with customers and trading partners over the internet.

Meanwhile, with everyone from management consultants through investment analysts to politicians and the mainstream media talking about cloud computing, the ‘C’ word has now made it into executive vocabulary.

Even though business people often can’t articulate the significance of cloud computing in any accurate or meaningful way, its entry onto the scene has pushed the question of IT sourcing further up the agenda.

Looming cloud

Over the past few years, we have seen an explosion in the number and variety of cloud services available on the market, from basic hosted infrastructure to full-blown business applications.

The promises that typically accompany these services are widely known: no upfront costs, fast access to new capability, less IT infrastructure to worry about, increased flexibility, and so on.

Less well publicised are the challenges that can arise when you start to make more extensive use of cloud. As the number and type of services proliferate, ensuring adequate integration between offerings from different providers and between cloud services and internal systems can become a problem.

Related to this, end-to-end service level assurance often becomes more difficult, as does troubleshooting across systems and services, protecting information, assuring compliance, and, not least, monitoring and controlling costs.
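
To illustrate the cost-monitoring point, here is a minimal Python sketch that rolls per-provider billing exports into one view. The filenames and columns ('team', 'cost_gbp') are illustrative assumptions, not any provider's actual export format.

```python
"""Aggregate monthly spend across cloud providers.

A minimal sketch: assumes each provider's billing export has been
normalised into CSVs sharing 'team' and 'cost_gbp' columns. The
filenames and column names are hypothetical.
"""
import csv
from collections import defaultdict

BILLING_EXPORTS = ["iaas_costs.csv", "paas_costs.csv", "saas_costs.csv"]

def spend_by_team(paths=BILLING_EXPORTS):
    totals = defaultdict(float)
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["team"]] += float(row["cost_gbp"])
    return dict(totals)

if __name__ == "__main__":
    for team, total in sorted(spend_by_team().items()):
        print(f"{team}: £{total:,.2f}")
```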

To be clear, most of the problems are not to do with individual cloud services (assuming you do your due diligence on providers); it is more about making sure everything works together safely and cost effectively.

And in this respect, the ease with which end-user departments, workgroups and even individual employees can adopt cloud services, while never thinking about integration, interoperability or information-related requirements, already represents a challenge for some organisations.

Jumbled architecture

The unavoidable conclusion is that business as usual from a data centre perspective is not going to cut it over the coming years.

According to Reg reader research, the typical data centre is a pretty fragmented and disjointed environment. Usually, both infrastructure and toolsets have been acquired off the back of application-related investments.

The stack-based approach to procurement, with each application pulling through its own specific set of hardware and platform software, has led to the accumulation of multiple architectures over the past two or three decades – even several generations of each architecture in some cases.

Just think how many versions of each hardware component, operating system and database management system you have, let alone how the same software and hardware is configured in different instances to support specific application needs.

And is there one place you look or one team you can ask to get an accurate picture of what is in your infrastructure and how it all fits together?

Probably not, and according to feedback from Reg readers, a combination of disparate tools and processes, together with demarcation between server, storage, networking and other teams, gets in the way of both efficiency and effectiveness.

If already overstretched operations teams are to keep up with continually escalating and changing demands, it is essential to remove complexity, increase the level of automation and start to manage the data centre in a much more holistic and inclusive manner.

The vision most often put forward is of the data centre becoming a coherent and dynamic virtualised environment, based on an architecture that allows internal and external services to be blended effectively.

Service levels and responsiveness are then achieved via a unified management approach that works coherently across all important domains – servers, networking, storage, applications and cloud services.
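
As a rough illustration of what a unified management approach across domains might look like in code, here is a minimal Python sketch: each domain exposes the same small status interface, so a single loop can report across all of them. The domain classes and canned results are hypothetical; in practice each check would call the relevant platform's own API or tooling.

```python
"""One health-check interface across infrastructure domains.

A minimal sketch of the 'unified management' idea. The checks
return canned data; real implementations would query each
platform's own API.
"""
from abc import ABC, abstractmethod

class DomainCheck(ABC):
    name: str

    @abstractmethod
    def status(self) -> dict:
        """Return a common-shape status report for this domain."""

class ServerCheck(DomainCheck):
    name = "servers"
    def status(self):
        return {"healthy": True, "detail": "all hosts responding"}

class StorageCheck(DomainCheck):
    name = "storage"
    def status(self):
        return {"healthy": False, "detail": "array 2 at 91% capacity"}

def report(domains):
    for d in domains:
        s = d.status()
        flag = "OK " if s["healthy"] else "WARN"
        print(f"[{flag}] {d.name}: {s['detail']}")

if __name__ == "__main__":
    report([ServerCheck(), StorageCheck()])
```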

One day at a time

Of course such high-level statements are easy to make but hard to act upon. One of the main questions we hear in our research is how to move towards a more coherent vision or goal while keeping up with all the day-to-day activity.

If you are starting with a greenfield site, putting your data centre together based on hybrid cloud and unified management principles, with an integrated multi-disciplinary team looking after it, is the way to go. You can even make sure you have the right blend of cloud services in the mix, delivered by open and reliable service providers that are IT-department-friendly.

For most people, though, it is a case of settling on an overall direction, then moving towards the creation of a more modern, flexible and efficient environment in a stepwise manner.

This generally starts with forming a new team from some of your best server, networking and storage specialists, then using a discrete application requirement as an opportunity to start laying the right foundations.

The basic idea, which seems to work well for many, is to create a modestly scoped initiative that aims to get everything in place in a joined-up manner before scaling up.

This allows the unified team to get to grips with new technology, tools and techniques, establish a set of processes and figure out how different disciplines will work together effectively. The aim is to avoid trying to run before you can walk.

Once the initial beachhead of future-proof goodness is established, the scope of the new environment can be broadened in a controlled manner. That means prioritising, scoping and phasing subsequent activity to deal with the accumulated problems of the past.

As a next step, some choose to focus on migrating key applications that would benefit from a more responsive and efficient management environment. Others prefer to get stuck into sorting out the long tail of small-footprint applications that clutter many data centres, consolidating as much as possible onto a shared private cloud infrastructure.

Where you start and how you move forward depends on what is important to you – improved service levels and rapid change management for dynamic core systems, for example, versus cost savings and easier administration in relation to the broader Windows and Linux server estates.

Along the way, decisions can be made about which applications and data might be candidates to run via some kind of hosted cloud model, and which systems should be left as they are (for example static legacy applications) or developed along their own evolutionary path (for example the mainframe, your HPC environment, and so on).
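
By way of illustration, that triage can be reduced to a crude scoring exercise. The following Python sketch uses made-up attributes and weights; it is a starting point for discussion, not a published methodology.

```python
"""Rough scoring of applications as cloud-migration candidates.

A minimal sketch: the attributes and weights below are
illustrative assumptions, not a formal framework.
"""
from dataclasses import dataclass

@dataclass
class App:
    name: str
    demand_is_variable: bool  # bursty workloads suit cloud elasticity
    data_is_sensitive: bool   # regulated data argues for staying in-house
    is_static_legacy: bool    # stable legacy systems are often best left alone

def cloud_candidate_score(app: App) -> int:
    score = 0
    if app.demand_is_variable:
        score += 2
    if app.data_is_sensitive:
        score -= 2
    if app.is_static_legacy:
        score -= 3
    return score

if __name__ == "__main__":
    apps = [
        App("marketing site", True, False, False),
        App("payroll", False, True, False),
        App("old billing engine", False, True, True),
    ]
    for app in sorted(apps, key=cloud_candidate_score, reverse=True):
        print(f"{app.name}: score {cloud_candidate_score(app)}")
```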

A marathon, not a sprint

In terms of timescales, unless you have a data centre that is totally dysfunctional, the evolution we have been talking about is likely to play out over years rather than months.

Over the coming few weeks, we will be drilling down into some of the important aspects of data centre evolution, including key technology developments, emerging best practices and some of the inevitable political issues that IT departments are likely to encounter, if they haven’t already.

On that note, we will leave you with one final thought for now.

We need to stop thinking about the data centre as a facility in which things are housed. It should be seen as a shared corporate resource and notional hub for coordinating the safe, effective use of internal and external services.

This entails a significant cultural shift for many organisations, with ramifications in terms of IT governance and funding, and even the fundamental relationship between the IT department and business stakeholders.

Any long-term plan or vision for the data centre cannot be developed unilaterally. It is as much a business matter as it is an IT one.

  • Dale Vile is research director and CEO of Freeform Dynamics
