The secret is in the planning when you migrate to the cloud
A moving experience
So you have decided that private cloud is the way to go. You have calculated the capacity requirements, done the architectural design, and decided whether any existing equipment and software can be repurposed.
You have ordered and received the additional kit, built the racks and organised the air conditioning, screwed in the servers, storage and networking boxes, wired up the whole lot and switched it all on.
Basically, you are ready to go. But meanwhile, old systems and software still fill much of the data centre.
The next job (which, truth be told, you may be dreading) is to migrate all that legacy capability across to the brave, spanking new world of the private cloud.
Not doing so could put into question the financial viability of the whole programme, which has been justified on the basis of the efficiency savings that can be made. You need a plan of action.
Register readers have been distilling their collective experience of how to migrate applications and services over to the private cloud without making a complete hash of it.
And the answer, it transpires, comes in stages – the first of which is to know where to start.
Count your assets
Have you already catalogued current workloads as part of capacity planning? Now is a good time to review what you think you know.
A migration plan needs to be clear on each workload, its configuration, hardware, software and licensing requirements – so you may need a more detailed review than the one that took place as part of the requirements capture process.
Don’t be at all surprised if new applications come out of the woodwork, particularly under-the-radar apps from the business. When departments go their own way, they sometimes consider IT to be slow to procure and deploy, or find the options proposed too expensive. But nobody really wants to run their own stuff.
People might at first be reluctant to hand control back to IT, but if and when they do you shouldn’t turn them away – far from it. Indeed, this offers a golden opportunity to bring such systems back into IT’s jurisdiction and reduce the potential risks of having an application running, say, under a desk in sales.
Green for go
There are several ways to categorise workloads. For the purposes of private cloud, however, you are interested in how complex and risky applications will be to migrate, according to the following criteria:
- Type of workload: business or research application, web server, database or repository, communications or streaming service
- Workload state: development, test, staging or production, plus a feel for reliability and stability
- Business criticality: number of users, information classes, departments concerned
- Infrastructure dependencies: architecture requirements, external database connections, shared components and resources
- Resource utilisation: CPU and memory, network I/O and storage volumes at both normal utilisation and peak periods
Armed with this information you should be able to divide workloads into three traffic light groups: green, amber and red.
Green workloads should be simple to migrate, for example those apps that run on a single server with few demands on resources. Amber will require more effort and red will be the workloads that cannot be migrated simply.
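Purely as an illustration, the triage described above could be reduced to a simple scoring function. The field names, weights and thresholds below are all hypothetical; any real scheme would use whatever criteria came out of your own workload review:

```python
# Hypothetical triage sketch: field names and thresholds are
# illustrative, not a prescribed methodology.

def triage(workload: dict) -> str:
    """Assign a traffic-light group to a workload record."""
    score = 0
    # Each criterion that adds migration complexity or risk bumps the score.
    if workload["state"] == "production":
        score += 2
    if workload["users"] > 100:
        score += 2
    if workload["external_dependencies"]:
        score += 2
    if workload["peak_cpu_percent"] > 70 or workload["peak_io_mbps"] > 200:
        score += 1
    if workload["servers"] > 1:
        score += 1
    if score <= 1:
        return "green"   # simple, low-risk: migrate first
    if score <= 4:
        return "amber"   # needs more planning and effort
    return "red"         # cannot be migrated simply

example = {
    "state": "production",
    "users": 20,
    "external_dependencies": [],
    "peak_cpu_percent": 30,
    "peak_io_mbps": 50,
    "servers": 1,
}
print(triage(example))  # single-server production app -> "amber"
```

The point is not the particular weights but that the grouping becomes repeatable: run every catalogued workload through the same function and the migration order falls out of the data rather than out of a meeting.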
Migration will inevitably require downtime, so it is important to create a schedule that fits with business operations (for example, the end of the month or quarter are not generally good times to take out front-office applications).
It makes sense to start with simple, low-risk applications, not least to test the new infrastructure. These include workloads that require little physical memory or generate little I/O. Once these are up and running, you can move on to more complex configurations.
Migration is also a good place to test your provisioning process. In principle you should be creating provisioning rules for each application (for example, in terms of what is running on each virtual server in an application architecture), then spinning up the application automatically. In practice this may be too much overhead for one-off legacy applications.
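One way to picture those per-application provisioning rules is as a recipe that an automation script walks through. Everything here is invented for the sake of the sketch: the rule format, the application name and the `provision_vm` stub standing in for a real virtualisation API:

```python
# Illustrative sketch only: the rule format and the provision_vm stub
# are hypothetical, not a real provisioning API.

PROVISIONING_RULES = {
    "crm-app": [
        {"role": "web", "vcpus": 2, "ram_gb": 4,  "count": 2},
        {"role": "db",  "vcpus": 4, "ram_gb": 16, "count": 1},
    ],
}

def provision_vm(role: str, vcpus: int, ram_gb: int) -> str:
    """Stand-in for a call to your virtualisation platform's API."""
    return f"{role}-vm ({vcpus} vCPU, {ram_gb} GB)"

def spin_up(app: str) -> list:
    """Create every virtual server an application's rule calls for."""
    vms = []
    for rule in PROVISIONING_RULES[app]:
        for _ in range(rule["count"]):
            vms.append(provision_vm(rule["role"], rule["vcpus"], rule["ram_gb"]))
    return vms

print(spin_up("crm-app"))  # two web VMs and one database VM
```

The pay-off is that the rule doubles as documentation: the record of what each application runs on is the same artefact that builds it, which is exactly the discipline that one-off legacy migrations tempt you to skip.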
Finally, note that migration will use up a lot of IT staffing resource, so discourage business departments from putting in urgent requirements and avoid giving the impression that the dynamic provisioning of new systems is available before applications have been migrated.
Deal with edge cases
What about those red light workloads? Suffice to say that the private cloud will not be suitable for everything.
In some cases it may be possible to migrate applications directly, albeit with some effort (for example, recompilation or upgrade). In others you may be able to encapsulate existing functionality into a custom virtual machine, and in still others you may simply have to leave a system be.
It is the law of diminishing returns. Remember that this process is a good opportunity for rationalising applications, so you shouldn’t rule out the possibility of porting from legacy platforms, migrating data only or even decommissioning.
In all cases though, you need to be clear on the maths, that is, the cost and risk of keeping things as they are versus the cost and risk of tackling a given system.
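That maths can be made concrete with a back-of-envelope comparison. The figures and the simple expected-cost model below are purely illustrative, a sketch of the shape of the calculation rather than a costing method:

```python
# Back-of-envelope sketch with made-up numbers: expected cost =
# direct cost + (probability of failure * cost of that failure).

def expected_cost(direct_cost: float, risk_prob: float, risk_impact: float) -> float:
    return direct_cost + risk_prob * risk_impact

# Keep the legacy system as-is: low upfront cost, but an ageing
# platform carries a real chance of an expensive outage.
keep = expected_cost(direct_cost=5_000, risk_prob=0.30, risk_impact=50_000)

# Migrate: higher upfront effort, smaller residual risk afterwards.
migrate = expected_cost(direct_cost=12_000, risk_prob=0.05, risk_impact=50_000)

print(keep, migrate)  # 20000.0 vs 14500.0: migration wins on these numbers
```

With different inputs the answer flips, which is the point: for some red-light systems the honest sums will say leave it alone, and the traffic-light grouping should reflect that.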
Look on the bright side
Given that the shift to private cloud may be quite a big change in the way you operate and deliver IT services, you want to get the business on board. So as you think about what applications to migrate, look for areas that are causing difficulty, for example, apps that are running on under-powered hardware.
It might also be worth reviewing requirements that have had to be ruled out for exactly the reasons that private cloud is designed to solve – infrastructure costs, procurement lead times and so on.
Do such needs still exist, and might the period immediately after migration be a good opportunity to address them?
It is early days for private cloud but already it is pretty clear that dynamic IT environments do not manage themselves. They require appropriate controls and this is also true for applications being migrated.
Despite best intentions, things can start to go awry during the migration process. As with any change process, time is never your friend.
There might be unexpected problems to deal with, dependencies on other programmes, late deliveries and so on. The original schedule is kept to as far as possible, but tasks are rushed and difficulties occur as a result.
Management processes may suffer in such situations: applications are provisioned without their full information being logged; virtual machines are over-configured, with a view to returning to them when things calm down; and different versions of applications are tested and spun up without proper records of the old versions.
It is so easily done, but you can imagine the result. At the end of the migration process, or as it drags on, new requirements are addressed on a shaky foundation of incomplete records.
This scenario is only a stone’s throw from virtual machine sprawl and a virtualised environment that is anything but dynamic.
A more positive picture is one in which management policies, processes and controls are defined, designed and deployed before any applications are migrated. So although this stage has been left till last in this list, it actually takes place across all other stages.
With a fair wind the end result is the creation of an environment in which hardware resources are far better utilised than before, and which offers a clear picture of how they are being consumed.
Managers and operational staff should be able to respond to new requirements while tuning the environment as older requirements fall away – scaling systems down as they fall into disuse, for example. All in all, that’s what private cloud is for.
Perhaps that fabled ideal of running IT as a service could finally be within reach.
We may still be a long way from technological utopia, but one thing’s for sure: the way a migration is undertaken can either take an organisation one step closer or leave it a world away. ®