Provisioning - how do you approach it?

Has virtualisation changed expectations unfairly?

Workshop Buying new physical servers has always taken time and effort. Unfortunately virtualisation has managed to create the perception that the provisioning of virtual machines is quick, easy and - very unfairly - free of charge. How has this expectation changed the necessary processes when new physical servers have to be acquired?

Ask any IT manager and they will tell you that when it comes to acquiring new physical servers, it takes time to get new systems delivered, never mind getting through the interminable internal sign-off procedures required to spend any money in the first place. With the spotlight still firmly on keeping a tight grip on any capital spend, how is it possible today to specify the physical characteristics of a server in an era when such machines may be called upon to support a wide variety of services over the course of their lifetime?

In days gone by, the process was straightforward, or at least relatively so. You looked at the application to be run, calculated (usually via guesswork) how many users would have to be supported concurrently, spoke with the ISV and did some rough and ready calculations. These defined the processor speed, memory, disk space and I/O characteristics needed, to which the prudent administrator would add a “contingency” factor. Naturally enough, this took time.
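
To make that arithmetic concrete, here is a minimal back-of-the-envelope sizing sketch in Python. The per-user figures and the 30 per cent contingency are purely illustrative assumptions, not numbers from any particular ISV.

    # Back-of-the-envelope server sizing. All per-user figures are
    # illustrative assumptions, not vendor-supplied numbers.
    CONCURRENT_USERS = 400      # the usual guesswork
    CPU_PER_USER_GHZ = 0.05     # aggregate CPU demand per concurrent user
    RAM_PER_USER_MB = 96        # working-set memory per concurrent user
    DISK_PER_USER_GB = 2        # data and log space per user
    IOPS_PER_USER = 3           # steady-state I/O per concurrent user
    CONTINGENCY = 1.3           # the prudent administrator's 30 per cent headroom

    cpu_ghz = CONCURRENT_USERS * CPU_PER_USER_GHZ * CONTINGENCY
    ram_gb = CONCURRENT_USERS * RAM_PER_USER_MB / 1024 * CONTINGENCY
    disk_gb = CONCURRENT_USERS * DISK_PER_USER_GB * CONTINGENCY
    iops = CONCURRENT_USERS * IOPS_PER_USER * CONTINGENCY

    print(f"CPU:  {cpu_ghz:.0f} GHz aggregate")
    print(f"RAM:  {ram_gb:.0f} GB")
    print(f"Disk: {disk_gb:.0f} GB, sustaining roughly {iops:.0f} IOPS")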

Next in line was a large chunk of time and labour to get through the internal procurement and vendor approval processes, get the purchase order signed off and place the order with the supplier. Finally came the long, long wait for hardware delivery and, perhaps, an engineer to do the installation work.

Clearly this methodology is not entirely appropriate when it comes to working out just what configuration of server is needed to support variable virtualized workloads. Is it possible to work out what is likely to be needed and size the machine appropriately? Or is it a better idea to buy the biggest server that fits the available budget and work on the premise that workloads will inevitably grow to fill the beast?

Buying the biggest server possible has much to commend it, assuming that the way IT projects are financed makes it possible for such server acquisitions to be funded. If a large machine is purchased, it makes sense to make certain beforehand that its physical resources can be managed effectively, especially now that the operational costs of systems are coming under closer scrutiny.

We know, from your feedback, that a significant number of organisations (but far from the majority) are now approaching application and server deployments with consolidation and virtualisation in mind. Hence service deployment and delivery are slowly becoming separated from decisions concerning hardware acquisition.

But of course this requires some form of internal cross-charging model and a sufficiently far-sighted and determined IT manager or CIO to make it happen. There are still companies, some of which should perhaps know better, that cling to the one application, one server, one budget philosophy and cannot provision anything much inside of a couple of months.

One area where good management is becoming more important concerns assessing whether the tools allow physical resources to be powered down when they are not required to run a workload. Can disks be spun down? Can unused processors be powered down? Perhaps more importantly, are there monitoring tools available on the server that highlight underutilised resources, allowing administrators to actively manage the physical resources of large servers? These are challenges that will face more and more IT professionals as more powerful x86 servers are deployed in computer rooms and data centres, especially as business and external pressures mount to control carbon footprints and electricity bills.
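
As a rough illustration of the kind of check such monitoring performs, the sketch below uses the Python psutil library to flag lightly loaded CPUs and report memory use; the five-second sample window and the 20 per cent threshold are arbitrary assumptions rather than recommendations.

    # Flag underutilised resources on a single host using psutil.
    # The sampling window and threshold below are arbitrary assumptions.
    import psutil

    IDLE_THRESHOLD = 20.0  # per cent busy below which a CPU counts as underused

    per_cpu = psutil.cpu_percent(interval=5, percpu=True)  # sample for 5 seconds
    idle_cpus = [i for i, load in enumerate(per_cpu) if load < IDLE_THRESHOLD]
    mem = psutil.virtual_memory()

    print(f"{len(idle_cpus)} of {len(per_cpu)} logical CPUs below "
          f"{IDLE_THRESHOLD}% load: {idle_cpus}")
    print(f"Memory in use: {mem.percent}%")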

Another approach is to buy smaller servers or blades that can host moderate workloads without excess capacity and that can be bought as resource demands grow, assuming that the supplier still delivers such kit. Clearly, if the workload requires more physical resources than smaller servers can host on their own, some form of resource pooling virtualisation technology will have to be deployed.

There is no doubt that the physical provisioning of servers is becoming more complex as the choices available expand. How are you managing things in a world where the virtualisation vendors have succeeded in building an expectation that workload provisioning is nearly instantaneous? Have you found a good way to keep expectations under control? Please let us know in the comment section below. ®
