Provisioning - how do you approach it?

Has virtualisation changed expectations unfairly?

Workshop Buying new physical servers has always taken time and effort. Unfortunately, virtualisation has managed to create the perception that provisioning virtual machines is quick, easy and - very unfairly - free of charge. How has this expectation changed the processes required when new physical servers have to be acquired?

Ask any IT manager and they will tell you that when it comes to acquiring new physical servers, it takes time to get new systems delivered, never mind getting through the interminable internal sign-off procedures required to spend any money in the first place. With the spotlight still on keeping a tight grip on any capital spend, how is it possible today to specify the physical characteristics of a server, in an era when such machines may be called upon to support a wide variety of services over the course of their lifetime?

In days gone by, the process was straightforward, or at least relatively so. You looked at the application to be run, estimated (usually by guesswork) how many users would have to be supported concurrently, spoke with the ISV and did some rough-and-ready calculations. These defined the processor speed, memory, disk space and I/O characteristics needed, to which the prudent administrator would add a “contingency” factor. Naturally enough, this took time.
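
To put the arithmetic in context, here is a minimal sizing sketch in Python. The size_server helper and all of its per-user figures are hypothetical stand-ins for whatever numbers the ISV supplies, with the “contingency” factor applied as simple headroom on top.

    # Rough-and-ready server sizing; every per-user figure below is an
    # illustrative assumption, not vendor guidance.
    def size_server(concurrent_users, cpu_mhz_per_user=50,
                    mem_mb_per_user=64, disk_gb_per_user=2,
                    contingency=0.25):
        headroom = 1.0 + contingency  # the prudent administrator's margin
        return {
            "cpu_mhz": concurrent_users * cpu_mhz_per_user * headroom,
            "mem_mb": concurrent_users * mem_mb_per_user * headroom,
            "disk_gb": concurrent_users * disk_gb_per_user * headroom,
        }

    # For example: 500 concurrent users with 25 per cent contingency.
    print(size_server(500))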

Next in line was a large chunk of time and labour to get through the internal procurement and vendor approval processes, get the purchase order signed off and send the order to the supplier. Finally came the long, long wait for hardware delivery and, perhaps, for an engineer to do the installation work.

Clearly this methodology is not entirely appropriate when it comes to working out just what configuration of server is needed to support variable virtualised workloads. Is it possible to work out what is likely to be needed and size the machine appropriately? Or is it a better idea to buy the biggest server that fits the available budget and work on the premise that workloads will inevitably grow to fill the beast?

Buying the biggest server possible has much to commend it, assuming that the way IT projects are financed makes it possible for such acquisitions to be funded. If a large machine is purchased, it makes sense to make certain beforehand that its physical resources can be managed effectively, especially in these days when the operational costs of systems are coming under more scrutiny.

We know, from your feedback, that a significant number of organisations (though far from the majority) are now approaching application and server deployments with consolidation and virtualisation in mind. Hence service deployment and delivery are slowly becoming separated from decisions concerning hardware acquisition.

But of course this requires some form of internal cross-charging model, plus a sufficiently far-sighted and determined IT manager or CIO to make it happen. There are still companies, some of whom should perhaps know better, that cling to the one application, one server, one budget philosophy and cannot provision anything much inside a couple of months.
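
For illustration only, a cross-charging model can start very simply: split a host's monthly cost pro rata across the services it runs. The sketch below assumes allocated memory is the charging key; the service names and figures are invented.

    # A minimal cross-charging sketch: one host's monthly cost split
    # pro rata by allocated memory. All names and figures are made up.
    def chargeback(host_monthly_cost, allocations_mb):
        total = sum(allocations_mb.values())
        return {service: round(host_monthly_cost * mb / total, 2)
                for service, mb in allocations_mb.items()}

    print(chargeback(1200.0, {"crm": 8192, "intranet": 4096, "build": 4096}))
    # -> {'crm': 600.0, 'intranet': 300.0, 'build': 300.0}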

One area where good management is becoming more important concerns assessing whether the available tools allow physical resources to be powered down when they are not required to run a workload. Can disks be spun down? Can unused processors be powered off? Perhaps more importantly, are there monitoring tools available on the server that highlight underutilised resources, allowing administrators to actively manage the physical resources of large servers? These are challenges that will face more and more IT professionals as ever more powerful x86 servers are deployed inside computer rooms and data centres, especially as business and external pressures mount to control carbon footprints and electricity bills.
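
Even without a full management suite, spotting underutilised resources can start small. The sketch below samples per-CPU load using the third-party psutil library (an assumption: it must be installed separately) and flags anything quiet enough to be a candidate for consolidation or powering down; the ten per cent threshold is purely illustrative.

    # Flag underutilised CPUs and memory; requires psutil
    # (pip install psutil). The threshold is illustrative only.
    import psutil

    IDLE_THRESHOLD = 10.0  # per cent

    # Sample each logical CPU over a five-second window.
    for cpu, load in enumerate(psutil.cpu_percent(interval=5, percpu=True)):
        if load < IDLE_THRESHOLD:
            print(f"cpu{cpu}: {load:.1f}% busy - candidate for powering down")

    mem = psutil.virtual_memory()
    if mem.percent < IDLE_THRESHOLD:
        print(f"memory: only {mem.percent:.1f}% in use - host underutilised")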

Another approach is to buy smaller servers or blades capable of hosting moderate workloads without excess capacity, and to buy them as resource demands grow, assuming the supplier still delivers such kit. Clearly, if a workload requires more physical resources than the smaller servers can host on their own, some form of resource-pooling virtualisation technology will have to be deployed.
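
Working out how many such blades a set of workloads needs is, at heart, a packing problem. Here is a minimal first-fit-decreasing sketch, assuming identical blades and memory as the only constraint; real placement decisions would also weigh CPU, I/O and failover headroom.

    # First-fit-decreasing packing of workload memory demands (in GB)
    # onto identical blades. All demands and capacities are hypothetical.
    def blades_needed(demands_gb, blade_capacity_gb):
        free = []  # remaining capacity on each blade already in use
        for demand in sorted(demands_gb, reverse=True):
            for i, gap in enumerate(free):
                if demand <= gap:
                    free[i] -= demand  # place on the first blade that fits
                    break
            else:
                free.append(blade_capacity_gb - demand)  # start a new blade
        return len(free)

    print(blades_needed([12, 30, 8, 24, 16, 4], blade_capacity_gb=32))  # -> 3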

There is no doubt that the physical provisioning of servers is becoming more complex as the choices available expand. How are you managing things in a world where the virtualisation vendors have succeeded in building an expectation that workload provisioning is nearly instantaneous? Have you found a good way to keep expectations under control? Please let us know in the comment section below. ®
