DevOps is no excuse for cowboy devs. Right. Let's talk Composable Infrastructure

Trevor Pott blasts through buzzword bingo, reveals cool efficient thing

HPE, or Hewlett Packard Enterprise to you and me, has announced Synergy, rich with new buzzwords for existing concepts and the promise of brand new hardware to woo and wow.

With general availability in Q2 2016, Synergy is taking the lead in HPE's "composable infrastructure" push ... but what lies underneath all the marketing?

The official HPE press release is a tad obscure. I recognise all of the words, and even most of the marketing terms, but I don’t think they quite work when used together.

Things get a lot clearer, however, when I strip out HPE's attempt to rename existing concepts and replace its terms with more standard ones.

According to HPE, composable infrastructure follows three "key design principles":

Fluid Resource Pools

  • Compute, storage and fabric networking that can be composed and recomposed to the exact need of the application
  • Boots up ready to deploy workloads
  • Supports all workloads – Physical, Virtual and Containerized

Software Defined Intelligence

  • Self-discovers and self-assembles the infrastructure you need
  • Repeatable, frictionless updates

A Unified API

  • Single line of code to abstract every element of infrastructure
  • 100 per cent infrastructure programmability
  • Bare-metal interface for Infrastructure as a Service

Those bullet points basically say "hyperconverged cluster with decent management tools that supports physical systems, virtualisation and containers. It has an API capable of supporting the 'infrastructure as code' concept".
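
What that looks like in practice is declarative: you state what a workload needs and let the management layer compose it. Here is a minimal sketch in Python, using an entirely hypothetical SDK and method names of my own invention, since HPE has not published Synergy's API at time of writing:

    # Hypothetical SDK -- every name here is an assumption, not HPE's API.
    from composable import Composer

    composer = Composer(endpoint="https://composer.example.com", token="...")

    # Declare what the application needs; the management layer works out
    # which compute, storage and fabric resources to compose to match.
    profile = composer.compose(
        template="web-tier",
        compute={"cores": 16, "ram_gb": 64},
        storage={"capacity_gb": 500, "tier": "ssd"},
        network={"fabric": "prod", "bandwidth_gbps": 10},
    )

    profile.deploy()  # boots up ready to accept workloads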

Others use the term "Software Defined Data Centre" (SDDC), but that term has been co-opted to mean so many things at this point that it is even more useless than "cloud".

Alternatively, it is reasonable to say that Synergy is HPE's current enterprise-targeted attempt at building an Infrastructure Endgame Machine (IEM).

Details about Synergy remain a little thin on the ground at time of writing, so it is difficult to say how close HPE is to a proper IEM.

Infrastructure as code

When Synergy launches, it could be tied to a specific HPE blade chassis, at least at first. There aren't any specs to discuss at the moment and, oddly enough, that's not actually all that important.

The bit that matters is the concept of infrastructure as code. If you spend enough time with DevOps or Continuous Integration (CI) types, you'll already have heard about infrastructure as code. If you haven't met any DevOps/CI folks, wander around a tech campus and listen to the people who say "agile" a lot and spend their time writing scripts to automate putting their pencils into the right cups in exactly the right order.

That's the heart of infrastructure as code: integration, automation and an obsessive devotion to efficiency – all with the end goal of making businesses agile enough to cope with anything. I know that sounds like buzzword bingo, but stick with me.

Integration is the idea that everything can be reduced to an API. Every piece of infrastructure, from your data centre's temperature and humidity sensors up to storage, networking and virtual machine provisioning, can be addressed and controlled with an API.

Everything is quantifiable. Everything can be monitored. Every error can be trapped, every exception caught, and responses and reactions to every problem defined and scripted and – you guessed it – automated.
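
To make that less abstract, here is a minimal sketch of the idea in Python: poll a sensor's API, trap failures, and script the reaction. The endpoints are placeholders I have invented for illustration, not any particular vendor's API:

    import time
    import requests

    # Placeholder endpoints -- stand-ins for whatever your monitoring
    # and orchestration layers actually expose.
    SENSOR_URL = "https://dc.example.com/api/sensors/rack42/temperature"
    ACTION_URL = "https://dc.example.com/api/actions/migrate-workloads"

    MAX_TEMP_C = 35.0

    while True:
        try:
            reading = requests.get(SENSOR_URL, timeout=5).json()
            if reading["celsius"] > MAX_TEMP_C:
                # The response to the problem is defined and scripted:
                # drain workloads off the overheating rack automatically.
                requests.post(ACTION_URL, json={"rack": "rack42"}, timeout=5)
        except requests.RequestException as exc:
            # Every exception caught: a failed poll is logged, not fatal.
            print(f"sensor poll failed: {exc}")
        time.sleep(60)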

DevOps/CI rookies think about infrastructure as code solely as a way to deploy workloads without having to go through the bureaucracy of change management requests. When this is the attitude employed by DevOps evangelists you know that a DevOps attempt is about to fail. Miserably.

DevOps is not an excuse for developers to be cowboys

But when developers and operations teams both have an obsessive attitude regarding efficiency, DevOps really works and ultimately becomes CI. In a DevOps shop that actually works as it should, everything is automated. Far more time is spent automating testing, quality assurance, regression testing and error handling than is spent pushing out new code or automating deployments.
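
As a taste of where that time goes, here is the sort of automated smoke test that gates a release in a CI pipeline, written as pytest-style checks. The URL and latency threshold are placeholders of my own:

    import requests

    # Placeholder -- in a real pipeline the URL comes from the deployment
    # environment, and a failure here blocks the release automatically.
    SERVICE_URL = "https://staging.example.com/healthz"

    def test_service_is_healthy():
        response = requests.get(SERVICE_URL, timeout=5)
        assert response.status_code == 200

    def test_service_responds_quickly():
        response = requests.get(SERVICE_URL, timeout=5)
        # Regression guard: fail the build if latency creeps past 500 ms.
        assert response.elapsed.total_seconds() < 0.5

Run on every commit, checks like these are what let new code reach production without a human signing off each change.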

The result is a remarkably agile company, one with processes in place to handle a variety of crises, from security problems to physical disasters to massive spikes in demand for resources.

You'll know when your company has finished its transition to a proper DevOps house when you can – and do – unleash the Chaos Monkey on your infrastructure and don't have your infrastructure fall over.
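
Chaos Monkey is Netflix's tool for exactly that: kill production infrastructure at random and see whether anyone notices. The core idea fits in a few lines; the cloud client below is hypothetical, a stand-in for whatever API controls your instances:

    import random
    import requests

    # Hypothetical helpers -- substitute your own cloud provider's SDK.
    from cloud import list_instances, terminate_instance

    SERVICE_URL = "https://www.example.com/healthz"

    # Pick a victim at random and pull the trigger.
    victim = random.choice(list_instances(group="web-tier"))
    terminate_instance(victim)

    # If the automation is real, self-healing keeps the service answering
    # while a replacement instance boots.
    assert requests.get(SERVICE_URL, timeout=5).status_code == 200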

Those who champion Infrastructure as Code assume the pets-versus-cattle discussion was had a long time ago and that everyone understands why cattle are better. Infrastructure as Code is about moving on from cattle and getting on towards ants.

Yes, this really is the future

While the comments section of this article will no doubt fill up with innumerable snippets about how the old-timers are perfectly happy with their pet servers running on an abacus they built out of rocks and crushed flower stems, companies around the world are working on implementing infrastructure as code today.

Over the past six months I have been researching the infrastructure as code movement and I have had the chance to interview individuals from organisations of all sizes representing hundreds of verticals in dozens of countries.

The short version of this research is that small businesses have no idea what infrastructure as code is, mid-market organisations want to know if it comes as an appliance, while a significant chunk of enterprises and government agencies are gearing up for the major culture change required to implement it.

Meanwhile, service providers are wondering what the fuss is, because if they hadn't gone down this route ages ago Amazon would have driven them all out of business.

There are innumerable startups dedicated to infrastructure as code. Storage startups that realised they won't get traction in a saturated market are pivoting to show off their shiny new APIs and convince the DevOps crowd they are the future. Networking companies that want to still exist after the war between VMware and Cisco is over are doing the same thing.

Enterprises have discovered how easy it is to automate and orchestrate complicated workflows in Amazon's AWS, and hardware vendors are terrified of bleeding customers to the public cloud.
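
For a sense of what "easy" means here: in AWS, a provisioning request that might take an enterprise change-control board a week is a few lines of boto3, Amazon's Python SDK. The AMI ID and key name below are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Placeholder AMI and key pair -- substitute your own.
    response = ec2.run_instances(
        ImageId="ami-12345678",
        InstanceType="t2.micro",
        KeyName="my-key",
        MinCount=1,
        MaxCount=1,
    )

    print("launched", response["Instances"][0]["InstanceId"])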

Hyperconverged companies fail

Interestingly, hyperconverged vendors have been slow on the uptake here. Collectively they are only now starting to figure out that hyperconvergence is a feature, not a product, and they're all starting to offer up APIs in order to be part of the future too. This last point is important, because hyperconverged companies have, on the whole, not really understood their own strengths and weaknesses.

At its core, hyperconvergence is a way of consuming storage and compute resources in an appliance format. Nothing more, nothing less. What was a novel product seven years ago is a tick-box feature today, and it never proved an important enough concept to get enterprises all hot and bothered between introduction and commoditisation.

You don't make huge strides in the enterprise merely by offering a slightly simpler way to consume resources enterprises are already consuming. Enterprises are risk averse, change averse and they take forever to add new vendors to the procurement list. This leaves hyperconverged companies with two options to succeed.

Without doing anything particularly new, they could market their wares at the mid-market – which is addicted to IT appliances – with relative ease and a modest price drop. In order to get the attention of enterprises, however, hyperconverged companies need to sweeten the deal.

Very few of the existing vendors have done much beyond create a neat storage platform and add some basic monitoring tools. Full self-service platforms backed by top-quality management interfaces, including automation and orchestration software, are thin on the ground.

This is where the APIs come in. Enterprises are not really interested in investing in infrastructure that isn't future-proofed. They know how slow they are to respond to change, and as such they don't want to be taking on new vendors or getting deep into any infrastructure or software that doesn't look like it has much of a future. The APIs give these hyperconverged companies a quick and easy way of looking like they care while they scramble to figure out how to differentiate themselves in a crowded market.

Synergy just might be what it says on the tin

It's hard for me to gauge how much HPE understands about the market. On the one hand, HPE has launched the Hyper Converged 250 for Microsoft CPS Standard, which directly addresses the failure of the existing hyperconverged players to meet the mid-market's need for a proper private cloud appliance. Whether through coincidence or design, the company is striking while the iron is hot.

Synergy is aimed at the enterprise. The company talks a lot about the new custom-built and highly configurable blade chassis that will run largely as-yet-unidentified Intel-based x64 servers and drives, interconnected with an as-yet-unidentified interconnect.

Synergy relies on a revamped HPE OneView Composer and HPE Image Streamer and looks to be offering enterprises the ability to build their own template library for physical systems, virtual machines and containers. The end result is something very similar to the Azure Marketplace integrated into the Hyper Converged 250 for Microsoft CPS Standard product.
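
Since HPE hasn't published the API details yet, the following is a guess at the shape of the thing rather than the thing itself: a REST sketch of stamping out a server from a template, with every URL and field name a placeholder of mine:

    import requests

    # Every URL and field below is invented -- HPE had not published
    # Synergy's actual API at time of writing.
    BASE = "https://composer.example.com/rest"
    HEADERS = {"Auth": "session-token"}

    # Pick a template from the library (physical, VM or container).
    templates = requests.get(f"{BASE}/templates", headers=HEADERS).json()
    web_tier = next(t for t in templates if t["name"] == "web-tier")

    # Compose a new server from it; Image Streamer would lay down the OS
    # image so the node boots ready to take workloads.
    requests.post(
        f"{BASE}/server-profiles",
        headers=HEADERS,
        json={"templateUri": web_tier["uri"], "name": "web-07"},
    )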

The announcement of Synergy at HPE Discover London in December 2015 was made at the same time as the Hyper Converged 250 for Microsoft CPS Standard product and it appears both products are being marketed in a similar fashion, only to different target sectors.

This leads me to believe that the goals of both products are ultimately similar: to make the physical infrastructure of the data centre functionally invisible and to enable IT teams to focus on interacting with APIs and automated provisioning tools instead.

Put simply, Synergy looks like HPE's attempt to build an Azure Pack for grown-ups; one that isn't locked into any given hypervisor, containerisation platform or operating system. Instead, HPE will get you addicted to its APIs and its management tools, and provide you with the means to more easily control everything else.

It is exactly what enterprises and DevOps teams have told me they are interested in. It looks like it could be the right move at the right time. After all the missteps and outright failures, maybe HP is back on track after all. ®
