Measure up your applications for their move to the cloud

Preparing for a new life

The Essential Guide to IT Transformation

Are you ready to get your applications into the private cloud? If you understand the difference between virtualising something and making it part of a broader environment, then you are on your way.

If you have explored the business value of each application to see if it makes sense, you are further still. But now comes the heavy lifting.

How do you get your applications from their current home into your new private cloud?

Count the ways

One of the simplest ways to move an app is to forklift the whole thing over to the private cloud infrastructure, knowing that you will sacrifice some functionality in the process.

This makes sense for some software, especially groups of applications that are heavily dependent on each other. It reduces risk and effort because you won’t leave anything on dedicated hardware, and you will require few changes to your binaries.

You could also choose to pick out specific applications within a group, while leaving others in place. Sometimes, groups of apps naturally separate themselves this way.

A group of batch processing apps supporting a web-based enterprise app might be perfectly happy sitting on their own hardware while the web app moves across. On the other hand, you may not get the full elasticity of the cloud after the move.

Different applications need different transformation strategies. Consider the application’s architecture, what it is currently running on, how many other applications it depends upon and how tight those interdependencies are.

Untie the knots

The smart company will document not only its application base but also the process flow that the applications support. Dependency mapping is an important part of this process.

Applications supporting the business workflow are dependent on each other in different ways. They require each other's output at various steps in the business process, and these inputs and outputs should all be understood and recorded.
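
To make the idea concrete, here is a minimal sketch of how recorded dependencies might be turned into migration groups. The application names and the adjacency-list format are purely illustrative assumptions; the point is that apps linked by dependencies, directly or indirectly, are candidates to move as a unit.

```python
from collections import defaultdict

# Hypothetical dependency map: each application lists the apps whose
# output it consumes. Names are illustrative, not from any real estate.
dependencies = {
    "web-storefront": ["order-service", "catalogue-db"],
    "order-service": ["catalogue-db", "batch-invoicing"],
    "batch-invoicing": [],
    "catalogue-db": [],
    "hr-reporting": ["hr-db"],
    "hr-db": [],
}

def migration_groups(deps):
    """Treat dependencies as undirected links and return connected
    components: apps in the same component are candidates to move together."""
    graph = defaultdict(set)
    for app, needs in deps.items():
        graph[app]  # make sure isolated apps still appear
        for other in needs:
            graph[app].add(other)
            graph[other].add(app)

    seen, groups = set(), []
    for app in graph:
        if app in seen:
            continue
        stack, group = [app], set()
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(graph[node] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

for group in migration_groups(dependencies):
    print(group)
```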

When mapping dependencies, don’t forget security. Your evaluation of an application should include its security risk and an analysis of what data it stores and where.

Is it acceptable for data accessed by an application to be stored on hardware accessed by other applications? Will migrating an application to a private cloud violate trust relationships between different physical domains?

These parameters play a vital part, not only during the design of the private cloud architecture but also during the scheduling of the migration process, which may take place over several iterations.

Draw some profiles

Applications must be profiled so that the design team can allocate the proper resources during the migration.

When profiling the application, allow enough time to watch for fluctuations in data. You should be able to see whether the application is used more at some times than others and how volatile those changes are.
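
As a rough illustration, the sketch below summarises a hypothetical set of utilisation samples. The figures are invented; real data would come from whatever monitoring tooling you already run.

```python
from statistics import mean, pstdev

# Hypothetical hourly CPU-utilisation samples (%) gathered over a week
# of monitoring; substitute figures from your own monitoring tools.
samples = [12, 15, 14, 60, 72, 68, 20, 18, 16, 65, 70, 75, 22, 19]

average = mean(samples)
peak = max(samples)
volatility = pstdev(samples)          # how much usage swings around the mean
peak_to_average = peak / average      # a rough sizing signal for elasticity

print(f"average {average:.1f}%, peak {peak}%, "
      f"volatility {volatility:.1f}, peak/average {peak_to_average:.1f}")

# A high peak-to-average ratio with high volatility suggests the app will
# benefit from elastic capacity; a flat profile suggests static sizing is fine.
```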

Explore the application’s latency requirements. How quickly does it expect a response? Can such a response be guaranteed in a cloud computing environment, and does the deployment team have the tools and expertise to ensure that the application meets service level agreements for latency?

This latency issue also determines which applications move together. Applications that rely on low-latency connections with each other should probably be moved together.
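
A simple way to put numbers on this is to measure round-trip times against the target before and after a trial move. The sketch below assumes a hypothetical HTTP health endpoint and an illustrative 50ms 95th-percentile target; neither comes from the guide itself.

```python
import time
import urllib.request
from statistics import quantiles

# Hypothetical endpoint and SLA target; substitute your own.
ENDPOINT = "http://app.internal.example/health"
SLA_P95_MS = 50.0

def sample_latency_ms(url, attempts=50):
    """Time a series of simple requests and return round-trip times in ms."""
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return timings

timings = sample_latency_ms(ENDPOINT)
p95 = quantiles(timings, n=20)[-1]    # 95th-percentile round-trip time
print(f"p95 latency {p95:.1f} ms; SLA met: {p95 <= SLA_P95_MS}")
```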

Make sure you evaluate an application’s availability needs too. This is a particular issue for fault-tolerant legacy applications. Can the cloud migration team ensure that the application will meet service level agreements for guaranteed uptime?

Studying architecture

Analysts will invariably uncover a range of different application types, including multi-tiered, batch processing, high-performance and standalone desktop models.

Multi-tiered software consists of a server back end, a client interface and a layer of application logic in the middle.

These are usually designed with a high level of interactivity. They can be compute-intensive and may use component-based messaging via standard interfaces such as SOAP or other XML-based protocols. Multi-tiered software is often written atop a portable software framework, such as Java or .Net.

Batch processing systems are designed with minimal interactivity. They offer simple batch file input mechanisms and return basic result sets. Expect high throughput levels.

Many batch applications are legacy-based, produced as native executables designed to run on specific hardware.

Standalone desktop applications may be the hardest to transport to the cloud. They are highly interactive, but aside from a few specialist applications such as graphics-intensive software, they are not computationally demanding.

Unlike multi-tiered applications, they tightly bind back-end processing to the user interface. They are not designed to be accessed by many users at once, and specialist multi-user software frameworks may be required to make them functional in the cloud.

High-performance computing applications are less common. They are naturally compute-intensive, generally batch-based and tend to account for relatively few applications in a portfolio. Business analytics applications may border on high-performance computing.

When considering high-performance computing and analytics applications for private cloud migration, look at their architecture. They generally fall into one of two camps: scale up or scale out.

Scale-up software depends on a single, monolithic architecture for execution. Scale-out applications use large numbers of parallel nodes, requiring communication over high-performance buses. Each requires a different approach to migration.
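
The distinction is easier to see in miniature. The sketch below contrasts the two shapes on an invented workload: the scale-up path runs one large job, while the scale-out path partitions the same work across parallel workers.

```python
from multiprocessing import Pool

def crunch(chunk):
    """Stand-in for a compute-heavy kernel applied to one slice of data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Scale-up shape: one big process works through the whole dataset.
    scale_up_result = crunch(data)

    # Scale-out shape: partition the data across parallel workers. This only
    # pays off when the partitions rarely need to communicate with each other.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        scale_out_result = sum(pool.map(crunch, chunks))

    assert scale_up_result == scale_out_result
```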

Cloud migration carries particular challenges for some high-performance computing applications that require low-latency communication across networks (hence the high-performance interconnects).

One of the biggest challenges for virtualised infrastructures is network I/O, meaning that the design team must compensate by throwing more resource at the problem.

The prevailing application types in your portfolio determine the cloud models that you choose. For example, multi-tiered applications could be relatively easy to migrate because each tier can be taken separately.

The web services messaging mechanisms used by many such applications make it possible to create loosely coupled architectures to support the software. The data management tier could be virtualised independently of the business logic tier, and may even be scheduled separately as part of a staggered rollout.
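
As an illustration of that loose coupling, the sketch below shows a business-logic tier calling a data tier over plain XML-over-HTTP. The endpoint and document structure are hypothetical; the point is that only the URL needs to change when the data tier is re-homed.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical location of the data tier; only this URL changes when the
# tier moves into the private cloud, while the calling tier stays untouched.
DATA_TIER_URL = "http://data-tier.internal.example/orders"

def fetch_order(order_id):
    """Call the data tier over HTTP and parse its XML response."""
    with urllib.request.urlopen(f"{DATA_TIER_URL}/{order_id}", timeout=5) as resp:
        doc = ET.fromstring(resp.read())
    return {
        "id": doc.findtext("id"),
        "status": doc.findtext("status"),
    }

print(fetch_order("12345"))
```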

Legacy batch processing applications can be among the most challenging to migrate to cloud-based architectures, as cloud platforms may not support the older hardware and operating systems they were built for. Nevertheless, there are some options available.

The most revolutionary option, but possibly the most effective in the long run, is to migrate the old application to a new, open hardware and software architecture. This involves rewriting the software and entails the biggest investment.

An alternative is to wrap the old environment in a virtual machine, if one can be found that emulates the original hardware atop a virtualised infrastructure.

Learn to share

Finally, consider leaving legacy software as it is and simply accessing it via a service interface from a cloud-based system. Some battles are best avoided altogether, especially if the risk involved in migrating a particular application is too great.
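
A minimal sketch of that service-interface approach might look like the following, assuming the legacy system is a command-line batch executable at a hypothetical path. The wrapper simply exposes it over HTTP so cloud-hosted systems never touch the old hardware directly.

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical legacy batch executable that stays on its existing hardware;
# cloud-hosted systems call this wrapper instead of touching it directly.
LEGACY_BINARY = "/opt/legacy/run_report"

class LegacyWrapper(BaseHTTPRequestHandler):
    def do_POST(self):
        payload = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Hand the request body to the legacy executable on stdin and
        # return whatever it prints, unchanged, to the caller.
        result = subprocess.run(
            [LEGACY_BINARY], input=payload, capture_output=True, timeout=300
        )
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LegacyWrapper).serve_forever()
```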

Organisations also have different options when migrating client-based applications. Virtual desktop infrastructure entails providing each user with a single virtualised PC, hosted on a server. This can be computationally expensive.

Session virtualisation lets many users share access to a single desktop environment and its applications. This can reduce management headaches while cutting the cost of software licensing and capital expenditure on server hardware.

If done correctly, due diligence helps deployment teams to tackle a challenging process and achieve a migration that results in lower costs, increased agility and a more efficient environment for enterprise applications. ®
