
Virtualization payback, now and in the future

Do you really want to join this pool party?



Reader Workshop Most people arguably get the point of virtualisation in terms of server consolidation, and the potential reduction in costs and overheads that comes with it.

Even though there are some important practicalities to be considered, as highlighted by readers in the first discussion, the game is reasonably well understood, and many seem to be getting on with it.

But does virtualisation have a purpose beyond server consolidation?

In theory, the answer is yes, not least because the fundamental principle of decoupling hardware from software removes many traditional constraints and therefore has the potential to boost both flexibility and responsiveness.

Now you might say we have that already. After all, if we need to provide some horsepower to run a new application for a workgroup or department, we no longer necessarily have to go through the trouble of specifying, procuring and provisioning new hardware. If we have capacity available on an existing server, then we can create and configure a new virtual machine pretty quickly and away we go.

But it should be possible to take things further than this. If we look at the way most virtualisation technologies are deployed today, the allocation of hardware to software is still relatively static, i.e. a specific machine is typically designated to run a specific workload in a given partition. Furthermore, the creation and configuration of virtual machines and the deployment of virtual images is still a manually intensive process.

Of course none of this matters if the nature, level and spread of work across your IT systems doesn't change that much on an ongoing basis, but the emergence of more dynamic workloads in recent years means this luxurious position is becoming increasingly rare.

More organisations now have public facing Web applications, for example, whether for marketing, sales, support or some other self-service requirement. The load on servers generated by these can fluctuate enormously across a given month, week or even day. Meanwhile, there are quite a few internally facing applications of a more dynamic nature that are increasing in popularity, from broadly deployed business intelligence and analytics, through various forms of collaboration, to full blown unified communications.

Then there are the so-called 'situational applications', created on the fly to serve some transient demand, typically by a workgroup or single user, then discarded once their purpose has been served. Such requirements are clearly not new, in that users have been creating 'throw away' and 'casual' applications using desktop office tools for years, but with the rise of portals, mashups, social media, etc, they are increasingly expecting such demands to be dealt with online in a sharable manner.

As a result of such trends, the notion of pooling hardware resources and making horsepower available more flexibly on demand, then reclaiming it when the demand disappears or diminishes, has caught many people's imagination. And when we think of enabling technology, we are simply talking about taking virtualisation to the next level. In specific terms, it's about being able to spin up or close down virtual machines and images very quickly, even automatically, as new workloads appear and disappear, and the processing load in general fluctuates up and down.
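To make that concrete, the "spin up, close down" behaviour amounts to a simple control loop around a pool of machines. The sketch below is purely illustrative, assuming a hypothetical VmPool with made-up thresholds; it is not any particular vendor's API, just the shape of the decision real provisioning tools automate.

```python
# Illustrative sketch only: a threshold-based control loop of the kind
# dynamic provisioning implies. VmPool, rebalance() and the 0.8/0.3
# thresholds are hypothetical, not a real virtualisation API.

class VmPool:
    """A pool of identical virtual machines serving one workload."""

    def __init__(self, min_vms=1, max_vms=8):
        self.min_vms = min_vms
        self.max_vms = max_vms
        self.running = min_vms  # count of currently active VMs

    def rebalance(self, load_per_vm):
        """Grow or shrink the pool based on average load per VM (0.0-1.0)."""
        if load_per_vm > 0.8 and self.running < self.max_vms:
            self.running += 1   # spin up another VM from pooled hardware
        elif load_per_vm < 0.3 and self.running > self.min_vms:
            self.running -= 1   # close one down, returning capacity to the pool
        return self.running


pool = VmPool()
for load in (0.9, 0.9, 0.5, 0.2, 0.2):  # simulated load samples over time
    pool.rebalance(load)
print(pool.running)  # back down to the minimum once demand subsides
```

The point of the sketch is that capacity follows demand in both directions: the pool grows under the morning rush and gives the hardware back overnight, which is exactly what a static machine-to-workload mapping cannot do.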

Some people use the term 'cloud computing' to refer to such pooling and dynamic provisioning, but without getting into the jargon and marketing speak, we’d be interested in how you see your own requirements in this space developing in the future.

Would this natural evolution of today’s virtualisation solutions be of benefit? If so, where within your business? And how would such capability sit alongside traditional clustering solutions and load balancing offerings that have come out of the Web performance optimisation arena?

Tell us what you think, and throw in any other thoughts you might have on the future of server virtualisation, in the comment area below. ®


