
The wonderful madness of metrics: Different things to different folk

Or, how I learned to stop worrying and verify

Managers and customers love statistics and metrics. Companies can live or die by how good their metrics are, and by the penalties for failing to meet the service levels defined in their agreements.

In practice it can be as simple as: “Have my team met their SLA?” or: “What is the uptime on the server farm?”

The dictionary defines the noun metric as: “A standard for measuring or evaluating something, especially one that uses figures or statistics.”

In an honest world, things would be that simple. But this isn’t an honest world. Metrics can be interpreted differently, stretched, used and abused if they are not laid out in black and white. Comparing one set of figures with another can be like comparing apples and oranges, to use an often-quoted phrase.

It goes without saying that any technology vendor has an interest in metrics. Metrics provide a way of measuring performance — remember that!

Obviously, the better the score for the product being tested, the better the vendor in question looks to its customers. To be kind, we could say that metrics (essentially test methodologies) are sometimes open to interpretation, and vendors tend to interpret a supposedly standard test in whatever way suits them.

As positive metrics mean good money, it should come as no surprise that some vendors have been caught tweaking metrics, software or hardware to make their product look good. Many vendors make claims using “optimised” metrics that are skewed to present their product in the best possible light.

There are plenty of recent examples in the smart device arena where manufacturers have actually come clean after being caught. Some vendors shipped a special “secret mode”: when a known benchmark was detected, the device went all out to post good scores, heat and battery usage be damned.

Yet another frequent example is storage vendors claiming massive IOPS (I/O Operations Per Second) from their array. Yes, a certain array may well perform that many IOPS but only under very controlled conditions and specific configurations.

Not all IOPS are created equal. Moving data in large sequential reads takes far fewer, and cheaper, operations than a stream of small random writes, so a headline IOPS number means little without the block size and access pattern behind it.

Real-world IOPS come in a whole lot lower because production workloads are messier and far more random than the carefully tuned test the manufacturer performed. And the machine evaluated won’t have been the entry-level unit, for sure! As a side note, storage testing methodology has improved, but there is definitely still room for improvement.
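
To see why the access pattern matters, here is a minimal sketch in Python, using made-up numbers rather than any real array’s figures, converting a headline IOPS claim into the throughput it actually implies at different block sizes:

    def iops_to_throughput_mb_s(iops, block_size_kb):
        """Throughput in MB/s implied by an IOPS figure at a given block size."""
        return iops * block_size_kb / 1024

    # A vendor quoting 1,000,000 IOPS of 4KB random I/O...
    print(iops_to_throughput_mb_s(1_000_000, 4))   # ~3906 MB/s
    # ...is moving no more data than a much humbler-sounding
    # 62,500 IOPS of 64KB sequential I/O.
    print(iops_to_throughput_mb_s(62_500, 64))     # ~3906 MB/s

Same array, same bandwidth, a sixteen-fold difference in the quotable number.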

Another form of metrics that often causes a lot of contention is website uptime and monitoring. A lot of websites and X-as-a-Service providers advertise an uptime of 99.999 per cent. For those who can’t be bothered doing the maths, five nines equates to 5.26 minutes of permissible downtime per year, or 25.9 seconds per month.
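
For anyone who wants to check the arithmetic, a minimal sketch assuming a 365-day year and a 30-day month:

    SECONDS_PER_YEAR = 365 * 24 * 3600
    SECONDS_PER_MONTH = 30 * 24 * 3600

    def allowed_downtime(availability_pct, period_seconds):
        """Permissible downtime, in seconds, over a given period."""
        return period_seconds * (1 - availability_pct / 100)

    for nines in (99.9, 99.99, 99.999):
        per_year = allowed_downtime(nines, SECONDS_PER_YEAR) / 60    # minutes
        per_month = allowed_downtime(nines, SECONDS_PER_MONTH)       # seconds
        print(f"{nines}%: {per_year:.2f} min/year, {per_month:.1f} s/month")

    # 99.999%: 5.26 min/year, 25.9 s/month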

That in itself is a tall order for even the biggest of vendors/data centres with bags of cash and staff.

If that’s the case, how are all these smaller vendors claiming to be anywhere near five nines? The truth is, the devil is in the detail (otherwise known as the small print). Contracts signed for leased network lines, cloud, websites and the like commonly include the following exclusions:

  1. Act of God — so a hurricane or a plane flying into your data centre doesn’t count
  2. Issues beyond vendor control — such as network lines getting torched, as happened recently in central London
  3. Patching and updates for security requirements
  4. Planned downtime
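
Those exclusions matter more than they look. Here is a minimal sketch, using entirely hypothetical incident numbers, of how the small print can turn a rough year into a five-nines report:

    SECONDS_PER_YEAR = 365 * 24 * 3600

    # Hypothetical outages over a year, in seconds, tagged with whether
    # the small print lets the vendor exclude them from the SLA figure.
    outages = [
        (4 * 3600, True),    # planned maintenance window: excluded
        (2 * 3600, True),    # emergency security patching: excluded
        (45 * 60,  True),    # carrier fibre cut, "beyond vendor control": excluded
        (300,      False),   # the vendor's own botched deployment: counted
    ]

    total_down = sum(secs for secs, _ in outages)
    counted_down = sum(secs for secs, excluded in outages if not excluded)

    measured = 100 * (1 - total_down / SECONDS_PER_YEAR)
    reported = 100 * (1 - counted_down / SECONDS_PER_YEAR)

    print(f"What your monitoring saw:  {measured:.3f}%")   # ~99.922%
    print(f"What the SLA report says: {reported:.4f}%")    # ~99.9990%

Which is why, when the contract matters, it pays to do your own monitoring and your own sums: trust the vendor’s report, but verify it.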
