Put your feet up and make the virtual data centre work for you

Control every layer - without upsetting your service provider

Many of the companies I have worked and consulted for over the years have rented server space from service providers.

You are only really likely to do that, though, if you are a relatively small company. Larger organisations are able to buy multi-tiered services so they control not just the configuration of the apps on the servers but every layer in the stack, right down to the storage.

So how does this work and how can you make it work for you?

On the face of it, the customer’s wish to manage every layer of the installation entirely contradicts the service provider’s desire to be able to control how the systems in its setup run.

The trick is for the service provider to offer, at every level of the infrastructure, a secure “sandbox” within which customers can do whatever they wish without affecting anything outside it.

The core concept that we will refer to a lot is a virtual data centre. If you are a VMware user you will recognise this as a formal term used by vCloud Director, but I am using it here as a generic concept.

It is rather a good name, after all, as it sums up the concept of providing a customer with a virtual bundle of storage, servers, networking and applications.

Choose your provisions

Once you have a virtual data centre, you will be given an administrator account that can create further user IDs, each with different privileges, within your own sandbox. So some users will be able just to open a console session, some to create and remove servers, some to stop and start servers, and so on.
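To make that concrete, here is a minimal sketch in Python of the sort of privilege split described above. The role names and actions are invented for illustration; your provider's portal will have its own vocabulary.

```python
# Illustrative only: a toy model of per-user privileges inside one
# customer's sandbox. Role names and actions are invented.
from dataclasses import dataclass

ROLES = {
    "console-only": {"open_console"},
    "operator":     {"open_console", "start_server", "stop_server"},
    "builder":      {"open_console", "start_server", "stop_server",
                     "create_server", "delete_server"},
}

@dataclass
class SandboxUser:
    name: str
    role: str

    def can(self, action: str) -> bool:
        return action in ROLES.get(self.role, set())

# The administrator account creates further IDs with narrower privileges.
users = [SandboxUser("ops-anna", "operator"), SandboxUser("dev-bob", "console-only")]
for user in users:
    print(user.name, "may create servers:", user.can("create_server"))
```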

With regard to provisioning the servers, let's start at the bottom with storage. Storage will always be presented via a SAN of some sort, and in the hypervisor layer of the virtual server the provider can define the amount of storage that is available to each customer.

While the most obvious configuration parameter is the amount of disk available, you also generally have the option of different classes of storage. It is common to see an expensive tier of high-speed storage, a modestly priced tier of mainstream storage and a cheap, very low-speed semi-offline tier.

This approach is enormously attractive to the customer because it means there is no need to over-provision servers. It is such a quick job to expand a virtual server disk that costs can be kept to a minimum and you can very easily assign different classes of storage to each application.
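As a rough sketch of how those tiers translate into cost, the snippet below models the three classes as plain data and sums a monthly bill. The per-gigabyte prices are invented, so substitute your provider's real rate card.

```python
# Illustrative only: three storage classes and a simple monthly-cost sum.
# The per-GB prices are invented, not anyone's real rate card.
STORAGE_TIERS = {
    "high-speed":   0.30,  # per GB per month, e.g. for databases
    "mainstream":   0.10,  # general-purpose workloads
    "semi-offline": 0.02,  # archives, backups, rarely touched data
}

def monthly_cost(allocations: dict[str, int]) -> float:
    """Sum the cost of the gigabytes allocated to each tier."""
    return sum(STORAGE_TIERS[tier] * gb for tier, gb in allocations.items())

# Each application gets the class it needs rather than the dearest one.
print(monthly_cost({"high-speed": 200, "mainstream": 1000, "semi-offline": 5000}))
```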

The only downside to using flexible storage is that it is not simple to reduce what you use. Most operating systems get very upset if you try to reduce the size of a volume because you can't guarantee that the bit you are throwing away doesn't hold data that you need.

So work around it. If you want to use some space temporarily, just define a new volume in your operating system and drop some storage on it. When you are finished with it, blow away the volume entirely. You can't shrink a volume but you can create and delete them at will.
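Here is a sketch of that create-use-delete pattern, assuming a hypothetical provider API: the VolumeClient class is a stand-in for whatever your provider actually exposes, not a real library.

```python
# Illustrative only: "create it, use it, blow it away" as a context manager.
# VolumeClient is a hypothetical stand-in for your provider's real API.
from contextlib import contextmanager

class VolumeClient:
    def create(self, name: str, size_gb: int, tier: str) -> str:
        print(f"created volume {name} ({size_gb} GB, {tier})")
        return name

    def delete(self, name: str) -> None:
        print(f"deleted volume {name}")

@contextmanager
def temporary_volume(client, name: str, size_gb: int, tier: str = "mainstream"):
    vol = client.create(name, size_gb, tier)
    try:
        yield vol               # mount it, stage the data, run the job...
    finally:
        client.delete(vol)      # no shrinking needed: the whole volume goes

with temporary_volume(VolumeClient(), "scratch-01", 500) as vol:
    print(f"doing temporary work on {vol}")
```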

Power boost

Moving up to memory, you have no such restriction. Again the service provider can provision a RAM quota and you can assign it to your servers as you see fit. If you want to move it around between servers it isn't a problem (though you will probably have to live with a reboot).

Similarly with virtual CPUs, you can generally drop extra power in and then take it out without upsetting the operating system, though as with memory you will have to turn it off and back on again to action the change.
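The sketch below, with invented server names and an invented quota, captures the point for both memory and vCPUs: the totals must stay inside what the provider has granted, and each resize only takes effect after a power cycle.

```python
# Illustrative only: moving RAM within a fixed quota between two servers.
# Names and figures are invented; resizes apply only after a reboot.
from dataclasses import dataclass

RAM_QUOTA_GB = 64  # total granted by the provider (hypothetical figure)

@dataclass
class Server:
    name: str
    ram_gb: int
    vcpus: int

def move_ram(src: Server, dst: Server, amount_gb: int) -> None:
    assert amount_gb < src.ram_gb, "source would be left with no memory"
    src.ram_gb -= amount_gb
    dst.ram_gb += amount_gb
    assert src.ram_gb + dst.ram_gb <= RAM_QUOTA_GB, "over the provider's quota"
    for server in (src, dst):
        # The change is queued here; it takes effect on the next power cycle.
        print(f"reboot {server.name} to apply {server.ram_gb} GB, {server.vcpus} vCPU")

web, db = Server("web-01", 16, 2), Server("db-01", 32, 4)
move_ram(web, db, 8)
```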

In the networking arena customers want freedom to do their own thing. The problem is that this could mean that one customer wants to use the same IP ranges as half a dozen others that live on the same platform, and eat more network bandwidth than the provider can shake a stick at.

Virtual firewalls are the answer. The provider configures a software-based firewall on the edge of the customer's virtual data centre, and any network traffic between that virtual data centre and anything outside it has to flow through that edge firewall.
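Conceptually, that edge firewall is just an ordered rule table evaluated for every flow crossing the boundary. The sketch below shows the idea with example addresses: first match wins, and the catch-all at the bottom drops everything else.

```python
# Illustrative only: an edge firewall rule table, evaluated first-match-wins.
# The networks and ports are examples, not a recommendation.
import ipaddress

EDGE_RULES = [
    # (action, source network,  destination network, port or None for any)
    ("allow", "203.0.113.0/24", "10.0.0.0/24", 443),   # office -> web tier, HTTPS only
    ("allow", "10.0.0.0/24",    "10.0.1.0/24", 5432),  # web tier -> database tier
    ("deny",  "0.0.0.0/0",      "0.0.0.0/0",   None),  # everything else is dropped
]

def decide(src: str, dst: str, port: int) -> str:
    """Return the action of the first rule that matches this flow."""
    for action, src_net, dst_net, rule_port in EDGE_RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and (rule_port is None or rule_port == port)):
            return action
    return "deny"

print(decide("203.0.113.7", "10.0.0.10", 443))   # allow
print(decide("198.51.100.9", "10.0.0.10", 443))  # deny
```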

If a customer chooses to have more than one virtual data centre within the hosted setup, rather than one socking big virtual data centre with all its stuff in it, it will have to permit traffic between them using firewall rules, or preferably set up a point-to-point VPN tunnel between the two.

That may sound unappealing but remember: if you are connecting into the hosted setup from your office you will want to set up a VPN at the edge of it anyway. It shouldn't be a great feat of technology to take the same approach between virtual data centres.
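Either option boils down to a small piece of edge configuration. The sketch below expresses both as plain data; the names, subnets and IKE proposal are invented for illustration.

```python
# Illustrative only: two ways of joining a pair of virtual data centres.
# Names, subnets and the IKE proposal are invented.
VDC_A = {"name": "vdc-prod", "subnet": "10.0.0.0/16", "edge": "edge-a"}
VDC_B = {"name": "vdc-test", "subnet": "10.1.0.0/16", "edge": "edge-b"}

# Option 1: punch holes in each edge firewall for the other side's subnet.
firewall_rules = [
    (VDC_A["edge"], "allow", VDC_B["subnet"], VDC_A["subnet"]),
    (VDC_B["edge"], "allow", VDC_A["subnet"], VDC_B["subnet"]),
]

# Option 2 (preferred above): a point-to-point VPN tunnel between the edges,
# the same shape of config you would use from the office to the hosted setup.
vpn_tunnel = {
    "local_endpoint":  VDC_A["edge"],
    "remote_endpoint": VDC_B["edge"],
    "local_subnets":   [VDC_A["subnet"]],
    "remote_subnets":  [VDC_B["subnet"]],
    "ike_proposal":    "aes256-sha256-dh14",  # example proposal only
}

print(firewall_rules)
print(vpn_tunnel)
```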

Standards, who cares?

When you are a customer of a virtualisation provider, you don't have to worry much about standardisation. You have no idea what the physical hardware is; you just know that your operating systems and virtual appliances need to support the virtual hardware presented to them by the hypervisor.

You don't really care about standardisation, then. What you do care about is abstraction – or, more accurately, consistency of abstraction.

Regardless of, say, the physical network cards in the hosts, what matters is that the particular virtual adaptor type configured into your servers is presented consistently by the infrastructure.

Similarly, you don't mind if the hardware has a dozen different CPU types as long as what is presented to you by the virtual hardware is the same across the board.

