Virtualization and the cloud

Just a stepping stone?

Reader Workshop A reasonably fundamental principle of virtualization is that it creates a layer of abstraction between a virtual machine and the physical hardware. As we have already discussed in this series, this allows multiple virtual machines to run on a single physical machine, and also can enable a virtual machine to be moved quite straightforwardly from one physical machine to another.

Within the data centre this has a number of benefits, such as workload balancing (server too busy? 'Simply' move one or two VMs onto a less heavily loaded server), higher availability (virtual machines can be moved off a physical server so it can be replaced, upgraded or fixed), and so on.

But hang on – if it's that straightforward to move virtual machines, what's keeping them from moving outside the data centre altogether? One obvious scenario is to move a machine onto hardware run by another company.

Third parties such as CSC, IBM, EDS and Rackspace have run server environments for use by their clients for many years, using a number of names such as ‘hosting’, ‘service provision’ and so on. These companies have been joined more recently by companies such as Amazon, which prefer to label themselves ‘cloud providers’.

Indeed, the older hands at this game have found the lure of the cloud irresistible, and have been launching repackaged cloud services of their own. The current marketing bucket for all such services is ‘Infrastructure as a Service’. Without getting too much into the nuts and bolts of it all, the open question from a virtualization perspective is, if a machine is virtualized and therefore movable, what are the benefits and costs of running it in the cloud?

Any challenges are likely to be around managing the associated risks. There is something about keeping IT in-house, within the firewall where it at least appears better protected and more under control. Taking a workload and giving it to any old Tom, Dick or Harry to manage can be fraught with danger, particularly if the data being processed is sensitive.

With this in mind, it’s still possible to imagine several likely scenarios, which boil down to the following factors:

  • How practical it is to move a given workload in the first place (for example, in terms of network bandwidth)
  • How much management and control is required – is the workload something that can ‘just run’?
  • As mentioned, the sensitivity of the data and application involved
  • Legal and compliance issues around geographic location of data
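Purely as an illustration, the screening factors above could be sketched as a simple rule-of-thumb filter. Everything here is hypothetical – the field names, the bandwidth threshold and the pass/fail logic are invented for the sketch, not taken from any real assessment framework:

```python
# Hypothetical "cloud suitability" screen based on the four factors above.
# All names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Workload:
    dataset_gb: int           # how much data must move (bandwidth practicality)
    needs_hands_on_ops: bool  # can it 'just run', or does it need close management?
    data_sensitive: bool      # confidential application or data?
    must_stay_in_region: bool # legal/compliance constraint on geographic location

def cloud_candidate(w: Workload, max_transfer_gb: int = 500) -> bool:
    """Return True only when none of the four factors rules the move out."""
    if w.dataset_gb > max_transfer_gb:  # impractical to ship over the network
        return False
    if w.needs_hands_on_ops:            # too much management and control required
        return False
    if w.data_sensitive or w.must_stay_in_region:
        return False
    return True

# The non-confidential number-crunching job passes the screen;
# the sensitive, closely managed pricing system does not.
analytics = Workload(dataset_gb=50, needs_hands_on_ops=False,
                     data_sensitive=False, must_stay_in_region=False)
pricing = Workload(dataset_gb=20, needs_hands_on_ops=True,
                   data_sensitive=True, must_stay_in_region=True)
```

In practice each factor would be a spectrum rather than a yes/no flag, but the point stands: any one of the four can veto an otherwise attractive move.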

So, number-crunching of non-confidential information (for example in analytics or research) might be a quick win, whereas that business-critical system on which pricing information is changed on a daily basis might need a little more thought before it is shifted off to a data centre goodness-knows-where.

An upside of the cloud is that it creates some possibilities that just didn’t exist before. Smaller companies for example told us how they are now able to create a disaster recovery ‘site’ which replicates their core systems as virtual machines, whereas before (with physical servers) the costs would have been prohibitive.

In companies of all sizes, as with virtualization itself, we are seeing earlier adoption of IaaS in development and test environments. The ability to create one or more sand-box replicas of a live environment, which can be built and deleted as necessary, is highly compelling. Similarly, scientists who need to run a set of compute-intensive algorithms can now do so, rather than just wishing the possibility were there.

These are still early days, and we are a long way from handing over our IT environments (virtualized or otherwise) to IaaS providers. Or are we? Common sense suggests that wholesale adoption of such an underdeveloped technology or concept as cloud is a long way off. However, historical examples such as outsourcing teach us that organizations can sometimes throw common sense out of the window in pursuit of short-term savings. Yes, we have seen it before.

IaaS is not wrong in principle – and indeed, there are plenty of examples of where it may well be able to save organizations a lot of cash while bringing flexibility and higher levels of service into the mix. For example, in the future, organizations struggling with managing their desktop estates (and who may well be looking at desktop virtualization) might indeed be better off handing their desktop management ills to a third party.

But there is still plenty to do before anything other than discrete, low-sensitivity workloads can be run in the cloud, not least in terms of architecture, security/legal and costing models. We'll cover these off in the next article, as well as considering some of the due diligence aspects that can be taken into account during selection and procurement.
