No time for downtime

Resilience comes from within

Sponsored Servers are always on, except that one time when you need to do some crucial personal banking and you’re informed your bank’s servers are down for “scheduled maintenance.” While the days of scheduled maintenance are going away, they aren’t gone entirely – yet.

We live in a world of Netflix-style levels of service – of no downtime. Netflix delivers new services without ever going offline for scheduled maintenance. Behind the scenes, meanwhile, is Chaos Monkey – a rampaging software bot that conducts digital fire drills to test the resilience of Netflix's IT infrastructure. Yet we on the outside notice none of it.
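
The principle behind such fire drills is simple enough to sketch. The snippet below is a minimal illustration in Go rather than Netflix's actual tooling: it terminates one instance at random from a hypothetical fleet, then checks that the public endpoint still answers. The fleet names, the terminate helper and the health-check URL are assumptions for illustration only.

```go
// A minimal sketch of a Chaos Monkey-style "fire drill": pick one instance
// at random from a fleet, terminate it, then check the service still answers.
// The fleet list and the terminate/health helpers are hypothetical stand-ins.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// fleet is a hypothetical list of redundant instances behind one service.
var fleet = []string{"app-01", "app-02", "app-03"}

// terminate stands in for whatever call your platform uses to kill an instance.
func terminate(instance string) {
	fmt.Printf("terminating %s to simulate failure\n", instance)
	// e.g. a cloud API call or container stop would go here
}

// healthy checks whether the public endpoint still responds after the drill.
func healthy(url string) bool {
	client := http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	victim := fleet[rand.Intn(len(fleet))]
	terminate(victim)

	time.Sleep(10 * time.Second) // give the platform time to reroute traffic
	if healthy("https://example.com/health") {
		fmt.Println("drill passed: service stayed up without", victim)
	} else {
		fmt.Println("drill failed: users would have seen downtime")
	}
}
```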

How do you achieve similar levels of never-downtime?

Netflix is a cloud-native service, so some might argue the answer lies there, but the idea that all enterprise IT is suitable for hosting in the public cloud is a non-starter. The question is how to develop a hybrid computing strategy that matches the requirements of the business.

What’s called for is an architecture and a strategy where each of your workloads is provided as cost-effectively as possible, with the criticality of the infrastructure matching the criticality of the application.

The question, therefore, is: how do you get an architecture that guarantees such a seamless level of uptime – even if, behind the scenes, applications and services are being launched, updated or retired?

The new way of doing old reliability

Enterprise-scale IT isn’t what it once was: predictable jobs running behind a firewall – for example, supporting your organisation’s core Enterprise Resource Planning (ERP) back end.

As software eats the world, enterprise-scale importance is accorded to so many more aspects of our daily personal and business lives outside that enterprise firewall of old.

It could be a smart-city idea for controlling street lighting and transport. It could be the autonomous vehicle itself or it could be the traffic light control system. It could be the sensors inside a vital piece of the energy distribution network. It could be sensor equipment monitoring hospital equipment or a full building management system. It could be the secure control of any piece of critical national infrastructure. Or it could be a 5G mobile millimetre wave spectrum cell enabling last metre content download and upload of vast amounts of high definition video content.

Downtime is simply not an option

As the demands evolve, so does the topology of the IT infrastructure. Today we have the growth of edge computing as a complement to centralised systems. Edge sees computer processing taking place in the field – on devices, for example – to serve the customer in real time and reduce the latency of delivery.

Illustrating this, in November 2017 HPE signed a strategic deal with Swiss-headquartered engineering giant ABB to combine expertise in operations technology and information technology and create the intelligent edge within industrial manufacturing, assembly and processing environments.
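
In code terms, the edge pattern often comes down to acting on data where it is produced and sending only summaries back to the centre. The short Go sketch below illustrates that shape with a simulated sensor; the readings, the threshold and what happens upstream are assumptions for illustration, not any specific product's behaviour.

```go
// A minimal sketch of the edge pattern: process sensor readings locally,
// act on them immediately, and send only a compact summary upstream.
// The sensor simulation, threshold and upstream step are illustrative.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// readSensor stands in for a real device reading (e.g. temperature).
func readSensor() float64 {
	return 20 + rand.Float64()*10
}

func main() {
	const window = 10
	var sum float64

	for i := 0; i < window; i++ {
		v := readSensor()
		sum += v

		// Local, low-latency decision: no round trip to a central data centre.
		if v > 28 {
			fmt.Printf("edge action: reading %.1f over threshold, adjusting locally\n", v)
		}
		time.Sleep(100 * time.Millisecond)
	}

	// Only the aggregate leaves the edge, cutting bandwidth and latency.
	avg := sum / window
	fmt.Printf("sending summary upstream: average %.1f over %d samples\n", avg, window)
	// e.g. an HTTP POST to a central analytics endpoint would go here
}
```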

Resilience comes from within

So how does one build an IT infrastructure that eliminates downtime for such a demanding world?

Historically, data centres owned and operated by enterprise customers were known as mission-critical facilities because so much engineering design effort was focused on ensuring maximum uptime. Workloads, numbers of users, demand and growth were stable and predictable. Building this kind of infrastructure, however, was very expensive.

Today, enterprise computing must up its game and accommodate new levels of performance, latency and resilience to provide a seamless user experience, with workloads hosted as close to the consumer as possible. This isn’t just expensive; it’s increasingly complex to provide and to manage.

This means building an IT infrastructure strategy around a hybrid model that provides a stack built on elements that are not just responsive to demand but whose management tools let you ensure continuous service.

This is potentially complicated by the unpredictable and shifting nature of workloads; companies are therefore seeking simplified, easy-to-manage flexible IT stacks comprised of highly available components.

The technology stacks used to build that reliable enterprise data centre of old were monolithic and comprised of sets of stacked boxes.

That won’t work today. The no-downtime infrastructure is built using software, so it’s now a case of choosing the right tools and components and assembling them around the best architecture. What does that infrastructure look like? The physical hardware foundation components of old have shifted to become integrated, converged, hyperconverged and composable.

Servers scale from micro models – such as the HPE Gen10 MicroServer line, refreshed in 2017 with new models offering greater performance and security – through ProLiant tower servers such as the ML350, to rack-based HPE ProLiant BL460c blade form factors.

In storage, it is simplicity, manageability, security, scale and performance that are driving the innovation. Back in 2010 HPE bought flash storage and software specialist 3PAR, and the brand continues today as HPE 3PAR StoreServ.

In early 2017 HPE bought Nimble Storage, a standalone all-flash array brand acquired to complement the 3PAR product line and flesh out the infrastructure options available to customers. HPE Synergy, first announced back in 2015, is the software-defined composable solution comprising server, storage and networking technologies integrated into a single system. Taken together with the acquisition of hyperconverged infrastructure supplier SimpliVity, announced in January 2017, a server, storage, converged and hyperconverged stack has taken shape.

This lets customers focus on their own data centre-based storage, deploy all-flash and converged systems, and build hybrid models right up to cloud-scale hyperconverged systems that are fully software-defined. The software has evolved, too. In HPE’s world, OneView 3.1 now sits at the heart of the effort to expose storage, server and networking hardware as manageable resource pools rather than fixed physical entities.

The software heart of everything, however, is the combination of microservices and containers – used increasingly to build the new generation of services. The reason is simple: organisations can’t afford to wait for traditional development cycles when they expect to be able to deploy new features at cloud speed in response to customer and user demand.

Microservices are discrete pieces of functionality that, added together, build a bigger, overall service or piece of software. They are delivered via containers – code-level boxes of functionality that run in a virtualised environment. The container provides a conduit through which to manage the microservice, while running in a virtualised environment brings benefits in scale, performance and cost: you can pack far more microservices into a server using virtualisation than is physically possible if you build the software the traditional way, writing to the server’s operating system.
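
To make the idea concrete, here is a minimal sketch of a microservice in Go – a single HTTP endpoint doing one discrete job. The service name, endpoint path and port are illustrative assumptions; in practice the binary would be packaged into a container image and handed to an orchestrator to run.

```go
// A minimal microservice sketch: one discrete piece of functionality
// (a status endpoint) exposed over HTTP. The service name, path and port
// are illustrative assumptions; in practice this binary would be built
// into a container image and managed by an orchestrator.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func statusHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	// Report the service name and version so an operator (or load balancer)
	// can see which build is currently serving traffic.
	json.NewEncoder(w).Encode(map[string]string{
		"service": "catalogue", // hypothetical service name
		"version": "1.0.3",     // hypothetical version
		"status":  "ok",
	})
}

func main() {
	http.HandleFunc("/status", statusHandler)
	log.Println("microservice listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```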

Microservices and containers mean a no-downtime IT infrastructure. You can update a feature of your service simply by updating one microservice, without taking down the entire code base and making the software unavailable. Containerisation lets you slide that new microservice in and out of position and manage it from deployment to retirement – without the customer noticing.
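
One building block of that slide-in, slide-out behaviour is graceful shutdown: when the platform replaces an old container with a new one, the old process finishes its in-flight requests before exiting, so users never see an error. The sketch below shows a common way of doing this in Go; the port, timeout and version label are illustrative assumptions, not a specific product's mechanism.

```go
// A sketch of graceful shutdown, one building block of zero-downtime
// rolling updates: when the platform asks the old version to stop
// (SIGTERM), it finishes in-flight requests before exiting, while the
// new version takes over the traffic. Port and timeout are illustrative.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello from version 2\n")) // hypothetical new version
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// Serve in the background so we can wait for the stop signal.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Wait for the orchestrator's stop signal (SIGTERM) or Ctrl-C.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Drain: stop accepting new connections, let in-flight requests finish.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
	log.Println("old version drained; new version now carries the traffic")
}
```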

This is done using DevOps, a culture of tools and work practices, and agile – the concept of building and failing fast, on such a small scale that those on the outside get the benefits of a new feature update or a new service without the annoyance or inconvenience of that scheduled downtime.

No place for 'quaint'

As software eats the world, as more services are written as software, the notion of scheduled downtime for maintenance has become a laughably quaint concept.

And yet the IT infrastructure of this new world must be delivered to the same standard of reliability found in the data centre of old, but without the cost or the inflexibility. It must allow changes – but changes introduced in a way that is seamless to the user. All that while serving a world where the processing, storage and transport of data must flex both up and down in response to demand.

The answer is an infrastructure built with and managed through software. Further, it’s an answer that employs hybrid – encompassing robust server, storage, converged and hyperconverged components that can provide platforms to match changing workload requirements.

By taking a software-defined and hybrid approach to this new world of enterprise IT-class requirements, your organisation can keep changing – minus the downtime.

Sponsored by HPE
