Why build a cloud when you can get one ready made?

Microsoft is source and solution of sysadmin Trevor Pott's problems

We small business sysadmins don't get the luxury of doing as we are told. If I built all my networks according to all the whitepapers I am given and used the industry best-practice vendors and products, then none of my customers would be able to afford networks at all.

Not to put too fine a point on it, the simplest industry best-practice enterprise stack – Cisco routers and switches, Microsoft software, HP servers and so forth – costs more than the annual revenue of my smaller customers. That is before we add to the mix the financials software they need or the (usually ruinously expensive and maddeningly fragile) industry-specific software.

It is my job to short-circuit these stacks of technology. I design, test and implement customised stacks of technology that end up looking shockingly similar to what some startup will come up with five years later and turn into a well-supported commercial off-the-shelf (COTS) package.

From talking to many other sysadmins around the world, it seems this is fairly typical of a certain class of SMB. There are certainly those who have barely moved beyond the hammer and chisel, but there are also those of us who have massive competitive pressure to be more efficient and agile.

The mother of invention

I have been making "spam server" appliances for almost two decades, first as metal boxes and then as virtual appliances. They are simplistic but functional. They accept email for a given list of domains, perform email and spam filtering and then forward that email on to a destination server (usually Microsoft Exchange).
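Under the hood there is no magic. As a minimal sketch of the relay role – take Postfix as a stand-in MTA and a made-up client domain, since the appliances' actual software is beside the point – it boils down to a handful of directives:

    # main.cf: accept mail for the client's domains, filter, then relay on
    relay_domains = client.example.com
    transport_maps = hash:/etc/postfix/transport
    smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination
    # hand each message to the spam/virus filter before final delivery
    content_filter = smtp-amavis:[127.0.0.1]:10024

    # /etc/postfix/transport: route filtered mail to the internal Exchange box
    client.example.com    smtp:[mail.client.example.com]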

I have never charged for these virtual appliances, and thus they have proved enormously popular. I make a new one regularly to front-end my own mail server, and it costs me an hour per customer to copy and install it for them. When I had five customers, this wasn't a problem.

At 25 clients, it is a problem. A new spam server requires about a week's worth of effort. It usually means catching up on a year's worth of evolution – all the interesting new things other mail administrators have agreed to do – and learning some bizarre new tweak.

Then there is testing to make sure the packages I install work properly, figuring out how to port the grey lists and Bayesian filters, and so on.
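The Bayes data, at least, moves mechanically. With SpamAssassin as a stand-in for the filter stack, the learned database exports and imports in two commands:

    sa-learn --backup > bayes-backup.txt    # on the old appliance
    sa-learn --restore bayes-backup.txt     # on the new one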

As I advance in my career I am finding there is a certain pressure to use that week every year to do something that has a profit margin attached to it. Ten years ago my little spam server provided a competitive advantage in an age when anti-spam and anti-virus software was expensive and fiddly and everyone ran their own servers.

Today, this has been commoditised in the form of well-managed cloud-based email services that are so cheap I would save money by paying for my clients' cloudy email and using that week to do almost anything else.

Primitive man

Similarly, I have been doing what we now call hybrid cloud computing for almost a decade. We didn't really have a fancy name for it back then, but I ran cloudbursting setups on Microsoft Virtual Server (and many others over the years).

I remember working for weeks to get the scripts just right. I would shut down virtual machines on the client site, RAR them into a ball with some config info, FTP them up to my cloud, unrar them, inject them into the virtualisation application (this was pre-hypervisor, remember) and then light them up.
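In outline, each run looked something like this. Rendered here as a Python sketch rather than the original scripts, with the VM name, paths, host and credentials all placeholders:

    import subprocess
    from ftplib import FTP
    from pathlib import Path

    VM_DIR = Path("D:/VMs/client-vm")      # placeholder path
    ARCHIVE = Path("client-vm.rar")
    CLOUD_FTP = "cloud.example.net"        # placeholder hostname

    # 1. Shut the VM down cleanly (placeholder command; Virtual Server was
    #    really driven through its own scripting interface).
    subprocess.run(["vmctl", "stop", "client-vm"], check=True)

    # 2. RAR the disks plus a small config manifest into one ball.
    subprocess.run(["rar", "a", str(ARCHIVE), str(VM_DIR), "config.ini"],
                   check=True)

    # 3. FTP the ball up to the cloud site.
    with FTP(CLOUD_FTP) as ftp:
        ftp.login("user", "password")      # placeholder credentials
        with open(ARCHIVE, "rb") as fh:
            ftp.storbinary("STOR " + ARCHIVE.name, fh)

    # 4. A matching script at the far end unrars the ball, injects the VM
    #    into the virtualisation application and lights it up.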

Virtual networking was primitive, at best. I had a script that would check for the existence of a text file to see if this was the first virtual machine active for the client or if there were others.

If the script found this was the first virtual machine for this client it would create the text file, read some config information from the RARball and light up a VPN server for that client. All virtual machines were configured with a minimum of two NICs.
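The "am I first?" logic was nothing more exotic than a marker file. A rough Python rendering, with the file locations and start command as placeholders:

    import subprocess
    from pathlib import Path

    MARKER = Path("/srv/state/client-acme.first")  # placeholder marker file
    RARBALL = Path("client-acme.rar")

    if not MARKER.exists():
        # First virtual machine up for this client: claim the marker, pull
        # the client's config out of the RARball, and light up a VPN server.
        MARKER.touch()
        subprocess.run(["unrar", "e", str(RARBALL), "config.ini"], check=True)
        subprocess.run(["vmctl", "start", "vpn-client-acme"], check=True)
    # Otherwise a VPN server is already running and this VM simply joins it.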

There was a subnet that was identical on all of my client sites and on my cloud location. On it was a file server that contained "site-specific configuration information". Virtual machines were designed to check this file server on this subnet at boot and grab network location-specific information such as network configuration.

This allowed a virtual machine to be moved, in a completely automated fashion, from a client site that used 10.0.10.0/24 internally to one that used 10.0.110.0/24. There were no fancy site-spanning Cisco VLAN switches involved. VPN servers did not have to be manually put in place before the network moved. DHCP servers could fail, DNS could be completely on the blink, and the whole system still just worked.
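The boot-time dance itself, sketched in Python with the share path, interface name and a netsh call standing in for whatever the originals were:

    import configparser
    import subprocess

    # The config share sat at the same address on every site.
    SITE_CONFIG = r"\\10.254.0.10\config\site.ini"  # placeholder well-known path

    cfg = configparser.ConfigParser()
    cfg.read(SITE_CONFIG)
    net = cfg["network"]

    # Rewrite the addressing on the second, site-facing NIC.
    subprocess.run([
        "netsh", "interface", "ip", "set", "address",
        "Local Area Connection 2", "static",
        net["address"], net["netmask"], net["gateway"],
    ], check=True)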

Of course, Trevor Pott's Cloudy Pre-Cloud Hybrid-Cloud Duct Tape Special had its constraints.

Last resort

That file server absolutely had to exist at the right IP address on every site or everything failed. Each customer site had to be configured with this extra virtual NIC. All virtual machines intended to be mobile had to have a NIC on that subnet, and they had to be configured to pull configuration information from that file server.

It was slow. The design was rigid. The virtual machines in my cloud pulled their authentication information from the Active Directory servers located on the client site. (I hadn't figured out then how to successfully automate adding a domain controller to my cloud for each customer.)

Most of all, virtualisation on early pre-hypervisor platforms carried a massive performance penalty compared with metal systems and was only to be used when absolutely necessary.

Technology evolved. Virtual Server gave way to VMware Server, then to Hyper-V, ESXi and finally to KVM. VMware Server gave me stability and a massive performance increase over Virtual Server.

Hyper-V gave me a "free" hypervisor and near-metal performance. ESXi gave me stability that Hyper-V couldn't match, and KVM gave me management capabilities I couldn't get for free anywhere else.

There were solid, logical business reasons for moving from each of these platforms to the next, investing the time to change my scripts and templates with each migration.
