
Penny wise and pound foolish: Server hoarders are energy wasters

How long has this been going on?

Sysadmin blog This summer was particularly bad for western Canada, where I live. Electricity costs soared and the datacenter air conditioners were going 24 hours a day. There has to be a way to be more efficient.

My power bill at home is $250 most months. This is with over half the equipment in my lab out on customer sites for testing, the critical corporate stuff running on my cluster and most of what remains turned off to conserve power.

Being me, I still have home network requirements equal to that of a small business. I need to keep about 15TB of storage online at all times, and I need a backup server to be able to back up that storage each night. There's an HTPC around as well as a personal VDI VM.

Somewhere in there is the house domain controller, VMware vSphere Appliance, the remote access VDI VM for external users of the lab, a WiFi access point/router and (at least) two switches. Throw in a desktop, some notebooks, a VoIP phone, tablets and some smartphones and my household IT infrastructure can consume a lot of power.

Every summer the power bill goes up and I start asking if maybe buying some new equipment is worth it in the long run. I'm generally not a fan of summer. Summer is warm and I prefer my environment to be exactly 18 degrees Celsius.

This year, I get the added bonus that the rain stopped falling and the mountains are on fire and the prairies are on fire and I'm on fire and everything's on fire and I'm in hell. On the price front it looks like the race to the bottom for public cloud computing has levelled out, so it's a great time to examine the relative costs of running old kit versus new and give the public cloud another look.

Pentium 4s really did suck, didn't they?

Home lab or tier 1 data centre, we all face the same problem: working out exactly what the optimal lifespan of our computer equipment is. Properly cared for, much of this stuff will give us two decades of service. In most cases, however, it's just not worth it.

Consider a former co-worker of mine who told me that he was still running first generation Pentium 4s as his company's servers. Pentium 4s positively chew through electricity. Let's do some maths.

My province is good enough to post the average rates for our electricity online. Picking the month of April (in an average year, it is reasonably representative of prices for eight months of the year), the rate for homeowners and small businesses is 5.832 cents/kWh.

At this rate it costs $0.14 per 100W of consumption per day. That's $51.10 a year for every 100W worth of device. Let's round that down to a nice $50 to make the maths easier and go from there.
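If you want to check that sum yourself, here's a minimal Python sketch; the rate is the one quoted above and the helper name is just mine:

```python
# A quick check of the electricity sums above, assuming the quoted 5.832 cents/kWh rate.
RATE_PER_KWH = 0.05832  # dollars per kWh

def yearly_cost(watts, rate=RATE_PER_KWH):
    """Dollars to run a device drawing `watts` around the clock for a year."""
    kwh_per_day = watts / 1000 * 24
    return kwh_per_day * rate * 365

print(round(yearly_cost(100), 2))  # ~51.09 -- about $0.14 a day, call it $50 a year per 100W
```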

Now, as my former co-worker's systems are outfitted, I know those old Pentium 4s pull about 200W at idle. In use they can go as high as 500W, but only at peak. These systems average about 250W each, for a running cost of around $150 per system per year.

He has twenty-seven of these 1U Pentium 4 pizza boxes running. That's $4050 a year.

The latest, greatest from Intel are the Broadwell Xeons. I've just received my first of these magical devices for testing. While I'm nowhere near done with all the tests required for a full review, I did consolidate all 27 of those servers into a single Broadwell system.

The Broadwell Xeons offer 8 cores and the ability to address 128GB of RAM. They have a peak power usage of 45W and idle at 7W. A 128GB RAM/4TB all-flash system will cost about $3500 and be about the size of your average home NAS.

This sample Broadwell system consumes about 30W idle and 75W fully loaded, averaging about 50W as a well-consolidated virtual server. I await the Broadwell-based HP Microserver with great anticipation.

At 50W average consumption a fully loaded Broadwell Xeon server has a yearly running cost of $25. By consolidating 27 Pentium 4 servers into one Broadwell system I've saved $4000 in electricity a year. Put another way, the yearly electricity savings are enough to buy a new Broadwell-based system, throw it away at the end of the year, and still come out ahead.
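For anyone who prefers code to napkins, here's the whole consolidation sum in a few lines of Python, using my running-cost estimates from above:

```python
# The 27-to-1 consolidation sum, using the running-cost estimates above.
P4_COST_PER_YEAR = 150        # ~250W average per Pentium 4 pizza box
P4_COUNT = 27
BROADWELL_COST_PER_YEAR = 25  # ~50W average for the consolidated host

fleet_cost = P4_COST_PER_YEAR * P4_COUNT        # $4050 a year
savings = fleet_cost - BROADWELL_COST_PER_YEAR  # call it $4000 a year saved
print(fleet_cost, savings)
```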

Being more realistic

Now maybe a 27-to-one consolidation example with arguably the worst x86 chip ever made is unrealistic. (Sadly, if you work with enough SMBs, it's not...but that's another story altogether.) Let's look at my home lab and see what I could consolidate using the Broadwell Xeons.

I have an HTPC that also runs my personal VDI VM and consumes an average of 100W. I have a dual-CPU server that averages about 300W. The primary NAS is 150W, the backup NAS is 200W (deduplication consumes a lot of power, apparently), and I'll leave the switches and router out of this.

So my home network eats 750W worth of server workloads. That's $375 a year. A Broadwell system then would save me $350 a year in power, requiring about 10 years of service to pay for itself.
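The home-lab version of the same sketch, using the wattages above (the figures are my estimates, nothing more):

```python
# Home-lab sum: 750W of server workloads versus one ~50W Broadwell box.
COST_PER_100W_YEAR = 50               # the rounded figure from earlier
home_watts = 100 + 300 + 150 + 200    # HTPC, dual-CPU server, primary NAS, backup NAS

current_cost = home_watts / 100 * COST_PER_100W_YEAR  # $375 a year
savings = current_cost - 25                           # $350 a year
payback_years = 3500 / savings                        # ~10 years for one $3500 server
print(current_cost, savings, payback_years)
```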

Considering the service life of the average x86 PC I work with, that's not an unreasonable lifespan assumption, but it isn't realistic for most businesses. Businesses are still on 3 year or 5 year refreshes. What's more, I would need at least two: the primary and the backup, meaning that I'm not going to justify new toys to my wife on the grounds of power consumption savings alone.

Cooling

At data centre scale there are additional benefits to consolidation. If you can consolidate two servers into one then you free up one server's worth of rack space, networking and so forth. Data centres are always running out of room, so that's a good thing.

Cooling is another consideration. Every watt of power put into the computer will eventually be emitted as heat. That heat has to go somewhere. Typically it's removed from the room by the air conditioner.

I'm already over my word limit for this article, so I'm not going to get into the really complicated maths of air conditioner efficiency; we'll stick with the basic rule of thumb that each watt used by a computer costs a watt to cool.

Suddenly the $4000 a year I saved consolidating 27 Pentium 4s into one Broadwell Xeon becomes $8000 a year, and I've just paid for a year's worth of my monthly retainer. The 10 years needed to pay back the cost of a single Broadwell server for the home now pays back the cost of both the primary and the backup units, and my wife starts considering it.
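Folding the cooling rule of thumb into the earlier sums is just a doubling; here's the sketch, with the 2x factor being the rule of thumb and nothing more scientific:

```python
# "Every watt of compute costs a watt to cool" means the power savings simply double.
COOLING_FACTOR = 2

p4_savings = 4000 * COOLING_FACTOR        # ~$8000 a year for the 27-to-1 case
home_savings = 350 * COOLING_FACTOR       # $700 a year for the home lab
payback_both = (2 * 3500) / home_savings  # ~10 years now covers primary AND backup
print(p4_savings, home_savings, payback_both)
```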

Air conditioners have their own costs beyond just the power of running them. They'll have to be maintained - and replaced - every now and again too. I'm going to sidestep that for now, but it's worth considering.

Public cloud costs (Linux)

The average workloads I'm working with here use 4GB of RAM and don't really chew through much CPU. This is enough information to make some broad guesses as to what it would cost to run these in the public cloud.

Given that Microsoft Azure's pricing seems to have bounced off the bottom, I'll use Azure's costing and assume all workloads are 24/7 legacy workloads that are not cloud-optimised.

Microsoft wants $69.05 per month for a 3.5GB Linux instance or $138.09 for a 7GB Linux instance. Microsoft doesn't offer 4GB instances. This does not include the cost of storage, networking or support.

To make my comparisons I need to look at what full costs would be for running those workloads locally. For a nice round number let's say that I can put 30 4GB workloads into a single server. I'm going to assume that the servers are replaced every 3 years.

At a $3500 acquisition cost, the raw hardware of the server works out to $1166.67 per year, and the server costs $50 a year to run (power + cooling). That gives us a monthly cost of $101.39 for the whole server, or $3.38 per 4GB workload per month.

Assuming you can squeeze your 4GB Linux workload into 3.5GB, you need to hope that Microsoft is delivering at least $65.67 per workload per month worth of added value; $134.71 per workload per month if you really need the full 4GB and thus go to the 7GB instance.
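Here's that Linux comparison as a sketch, using the Azure list prices quoted above and my own server assumptions:

```python
# On-prem cost per 4GB workload versus Azure's Linux list prices quoted above.
SERVER_PRICE = 3500
YEARS = 3
POWER_AND_COOLING = 50   # dollars per year
WORKLOADS = 30

monthly_server = (SERVER_PRICE / YEARS + POWER_AND_COOLING) / 12  # ~$101.39
per_workload = monthly_server / WORKLOADS                         # ~$3.38

print(round(69.05 - per_workload, 2))   # ~65.67 -- value Azure must add, 3.5GB instance
print(round(138.09 - per_workload, 2))  # ~134.71 -- if you need the 7GB instance
```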

Public cloud costs (Windows)

Because Windows Datacenter pricing only comes in 2 CPU packs, attempting to do Windows costs based on a single CPU is non-optimal. (Dear Microsoft, we can now do 128GB of RAM on a single socket. Please adapt your pricing accordingly. Thanks.)

I'm going to multiply the cost of the single socket server by 2.5 (because dual socket systems are always more pricey) and assume double the workload for double the price. Microsoft wants $3600 for a copy of Windows Server Datacenter, making our back-of-napkin 256GB Broadwell-based dual CPU system cost $12350.

At three-year replacement cycles this gives our server a yearly hardware cost of $4116.67. I'm going to double the running costs to $100 (power + cooling) per year, which gives us a monthly running cost for our hypothetical (all-flash, mind you) 256GB RAM dual-CPU Broadwell Xeon Windows server of $351.39. That's $5.86 per 4GB Windows workload.

Microsoft charges $116.12 per month for 3.5GB Windows instances or $232.23 per month for 7GB Windows instances.

So, if Azure is to be worth it, Microsoft needs to deliver $110.26 worth of additional value for the 3.5GB instance, or $226.37 for the 7GB instance.
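And the Windows version of the same sum, with the dual-socket fudge factors described above:

```python
# Windows comparison: dual-socket hardware at 2.5x the price, plus Datacenter licensing.
SERVER_PRICE = 3500 * 2.5 + 3600   # $12,350 for the back-of-napkin dual-CPU box
YEARS = 3
POWER_AND_COOLING = 100            # dollars per year, doubled for the bigger box
WORKLOADS = 60

monthly_server = (SERVER_PRICE / YEARS + POWER_AND_COOLING) / 12  # ~$351.39
per_workload = monthly_server / WORKLOADS                         # ~$5.86

print(round(116.12 - per_workload, 2))  # ~110.26 -- value Azure must add, 3.5GB instance
print(round(232.23 - per_workload, 2))  # ~226.37 -- for the 7GB instance
```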

It still makes sense to buy new servers

Even assuming that I can cram my 4GB workloads into Azure's 3.5GB instances, and using the Windows calculations, I could buy and operate 19 servers to run one server's worth of workloads and still come in at a lower cost per workload running it all on my own.

Being a bit more rational about it, if I have a server's worth of work to do – about 60 workloads – then I can easily afford to buy two considerably higher-end servers (primary and backup) than the one I've specced in this article for my data centre, along with racks, UPS, switching and even the cost of IT staff, and still come in cheaper than Azure. And Azure's costs don't look like they're going anywhere any time soon.

And again: that's without having factored in storage, networking or support on Azure. Mind you, it's also assuming a world where servers are pitched after 3 years, something that is happening less and less.

Consolidating servers absolutely can pay for itself on electricity alone. At 19x the base raw cost per workload, moving to the public cloud doesn't seem to make a whole lot of sense. This leaves us back with the original conundrum of when we should be looking at replacing servers.

27-to-1 consolidation with 10-year-old Pentium 4s was pretty simple. Each server ran one workload and gobbled power. The 8-to-2 consolidation of three-year-old hardware for my home lab was a stretch. Somewhere in between is the right balance for total cost of ownership (TCO).

Experience tells me that five years is a good server replacement cycle. Torturing maths seems to make it agree. Your thoughts on your replacement cycles in the comments, please. ®
