Why build a cloud when you can get one ready made?

Microsoft is source and solution of sysadmin Trevor Pott's problems


We small business sysadmins don't get the luxury of doing as we are told. If I built all my networks according to all the whitepapers I am given and used the industry best-practice vendors and products, then none of my customers would be able to afford networks at all.

Not to put too fine a point on it, the simplest industry best-practice enterprise stack – including Cisco routers and switches, Microsoft software, HP servers and so forth – costs more than the annual revenue of my smaller customers. That is before we add to the mix the financials software they need or the (usually ruinously expensive and maddeningly fragile) industry-specific software.

It is my job to short-circuit these stacks of technology. I design, test and implement customised stacks of technology that end up looking shockingly similar to what some startup will come up with five years later and turn into a well supported commercial off-the-shelf (COTS) package.

From talking to many other sysadmins around the world, it seems this is fairly typical of a certain class of SMB. There are certainly those who have barely moved beyond the hammer and chisel, but there are also those of us who have massive competitive pressure to be more efficient and agile.

The mother of invention

I have been making "spam server" appliances for almost two decades, first as metal boxes and then as virtual appliances. They are simplistic but functional. They accept email for a given list of domains, perform email and spam filtering and then forward that email on to a destination server (usually Microsoft Exchange).
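The forwarding behaviour described above – accept mail for a list of domains, filter it, pass it on to Exchange – maps neatly onto a handful of MTA settings. As a sketch only, assuming Postfix as the appliance's MTA and amavis as the filter hook (the column doesn't name either), it might look like this fragment of main.cf:

```
# Accept mail for a given list of client domains
relay_domains = example.com, example.net

# Forward accepted mail on to the destination server (e.g. Exchange)
# /etc/postfix/transport would contain, say:
#   example.com  smtp:[exchange.example.com]
transport_maps = hash:/etc/postfix/transport

# Hand mail through the spam/virus filter before onward delivery
content_filter = smtp-amavis:[127.0.0.1]:10024
```

The grey lists and Bayesian filters mentioned later would live in the filter stack behind that content_filter hook, which is exactly the state that has to be ported when a new appliance is built.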

I have never charged for these virtual appliances and thus they have proved to be enormously popular. I have to make a new one on a regular basis to front-end my own mail server and it costs me an hour per customer to copy and install this for them. When I had five customers, this wasn't a problem.

At 25 clients, it is a problem. A new spam server requires about a week's worth of effort. It usually means catching up on a year's worth of evolution in all of the interesting new things that other mail administrators have agreed to do and learning some bizarre new tweak.

Then there is testing to make sure the packages I install work properly, figuring out how to port the grey lists and Bayesian filters, and so on.

As I advance in my career I am finding there is a certain pressure to use that week every year to do something that has a profit margin attached to it. Ten years ago my little spam server provided a competitive advantage in an age when anti-spam and anti-virus software was expensive and fiddly and everyone ran their own servers.

Today, this has been commoditised in the form of well-managed cloud-based email services that are so cheap I would save money by paying for my clients' cloudy email and using that week to do almost anything else.

Primitive man

Similarly, I have been doing what we now call hybrid cloud computing for almost a decade. We didn't really have a fancy name for it back then, but I ran cloudbursting setups on Microsoft Virtual Server (and many others over the years).

I remember working for weeks to get the scripts just right. I would shut down virtual machines on the client site, RAR them into a ball with some config info, FTP them up to my cloud, unrar them, inject them into the virtualisation application (this was pre-hypervisor, remember) and then light them up.

Virtual networking was primitive, at best. I had a script that would check for the existence of a text file to see if this was the first virtual machine active for the client or if there were others.

If the script found this was the first virtual machine for this client it would create the text file, read some config information from the RARball and light up a VPN server for that client. All virtual machines were configured with a minimum of two NICs.
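The "is this the first virtual machine for the client?" test described above is a classic marker-file pattern. A minimal sketch, assuming a state directory and marker naming of my own invention (the original scripts are long gone):

```shell
#!/bin/sh
# Hypothetical sketch of the first-VM check. STATE_DIR, the marker name
# and the VPN bring-up hook are illustrative, not the original scripts.
first_vm_check() {
    client="$1"
    marker="$STATE_DIR/$client.first-vm"
    mkdir -p "$STATE_DIR"
    if [ ! -f "$marker" ]; then
        # No marker yet: this is the first VM for this client. Record
        # that, then light up a VPN server using config from the RARball.
        touch "$marker"
        echo "first VM for $client: starting VPN server"
        # start_vpn_for "$client"   # placeholder for the real bring-up
    else
        echo "VPN already up for $client"
    fi
}
```

The second and subsequent virtual machines for a client simply find the marker and skip the VPN step.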

There was a subnet that was identical on all of my client sites and on my cloud location. On it was a file server that contained "site-specific configuration information". Virtual machines were designed to check this file server on this subnet at boot and grab network location-specific information such as network configuration.
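The boot-time pull from that well-known file server might look something like the following sketch. The key=value file format and the setting names are assumptions; the original stored its "site-specific configuration information" in an unspecified format:

```shell
#!/bin/sh
# Hypothetical sketch of the boot-time config pull: each VM reads
# location-specific settings from a file on the shared-subnet file server.
load_site_config() {
    conf="$1"    # e.g. a file fetched from the well-known file server
    # Pull network-location-specific settings such as gateway and DNS
    GATEWAY=$(sed -n 's/^gateway=//p' "$conf")
    DNS=$(sed -n 's/^dns=//p' "$conf")
    echo "configuring: gateway=$GATEWAY dns=$DNS"
}
```

Because the VM asks the file server where it is rather than remembering where it was, the same image boots correctly on either side of the WAN.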

This allowed a virtual machine to be moved from a client site to my cloud, and to reconfigure itself for its new home, in a completely automated fashion. There were no fancy site-spanning VLAN Cisco switches involved. No VPN servers had to be manually put in place before the machine moved. DHCP servers could fail, DNS could be completely on the blink and the whole system still just worked.

Of course, Trevor Pott's Cloudy Pre-Cloud Hybrid-Cloud Duct Tape Special had its constraints.

Last resort

That file server absolutely had to exist at the right IP address on every site or everything failed. Every customer site had to carry that shared subnet, every virtual machine had to be configured with the extra virtual NIC, and all virtual machines intended to be mobile had to be configured to pull configuration information from that file server.

It was slow. The design was rigid. The virtual machines in my cloud pulled their authentication information from the Active Directory servers located on the client site. (I hadn't figured out then how to successfully automate adding a domain controller to my cloud for each customer.)

Most of all, virtualisation on early pre-hypervisor platforms carried a massive performance penalty compared with metal systems and was only to be used when absolutely necessary.

Technology evolved. Virtual Server gave way to VMware Server, then to Hyper-V, ESXi and finally to KVM. VMware Server gave me stability and a massive performance increase over Virtual Server.

Hyper-V gave me a "free" hypervisor and near-metal performance. ESXi gave me stability that Hyper-V couldn't and KVM gave me management capabilities I couldn't get for free anywhere else.

There were solid, logical business reasons for moving from each of these platforms to the next, investing the time to change my scripts and templates with each migration.



