What's the point of data centre orchestration?
Baton twirlers start here
Data centre orchestration aims to flip the script on the 80/20 rule.
It promises to take 80 per cent of the time IT departments spend fire-fighting and reallocate it to service delivery – putting big smiles on board members' faces and increasing job satisfaction in the IT department.
Good luck with that, you might say. But this is what data centre professionals do for a living, although they might not know it.
Think service providers, with whopping great data centres receiving thousands of requests from hundreds of clients, acres of servers, battery rooms as big as your house and enough storage to memorise, well, just about everything.
You may find orchestration being practised in such environments, where staff movement can be high and there are lots of repetitive tasks to perform.
Steam into one of those data centres with a problem and it is handled with clockwork precision and (relatively) few manual interventions.
In a highly orchestrated environment, the data centre team knows there is a problem only because it wrote the rule book and is told when the rules fire.
Rules rule in the world of orchestration
In many respects that is what orchestration is all about: rules. It is a system that takes the technology or business logic out of your head and uses it to monitor and make decisions. It requires some significant automation work upfront too, of course, for the rules to apply. But rules rule in the world of orchestration.
Take a call centre as an example: 24-hour operations with kids up and down the country hammering the phones. The business and its customers have a requirement for a certain response time.
So you add a rule to your orchestration system that says, this system does not slow down. And then you work through the conflicts that decision throws up, ticking boxes as appropriate.
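What such a rule might look like in practice depends entirely on the tooling, but here is a minimal, hypothetical sketch of a home-grown rules engine in Python. All the names, thresholds and actions are invented for illustration, not taken from any real orchestration product:

```python
# Minimal hypothetical rules-engine sketch: each rule pairs a condition
# (read from monitoring data) with an action the orchestrator may take.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # inspects current metrics
    action: Callable[[], str]           # remediation to apply

# "This system does not slow down": act when response time breaches the target.
rules = [
    Rule(
        name="call-centre-response-time",
        condition=lambda m: m["response_ms"] > 200,
        action=lambda: "provision extra capacity",
    ),
]

def evaluate(metrics: dict) -> list[str]:
    """Return the actions whose conditions fire for these metrics."""
    return [r.action() for r in rules if r.condition(metrics)]

print(evaluate({"response_ms": 350}))  # → ['provision extra capacity']
```

The conflict-resolution box-ticking the article mentions would live in how rules are ordered and which actions are allowed to override which.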
All together now
The theory is that when the system starts to grind – or in fact just before it starts to grind – the orchestration service steams through, clearing the way, applying roadblocks, hitting off buttons, provisioning computational wallop, and generally doing whatever you have told it to do to make that call centre deliver to requirements.
Naturally, an orchestration service needs to cover a lot of ground. Data centres are often highly heterogeneous, so it needs to be platform-agnostic and to understand business policies so it can respond to business needs as expressed by service level agreements and objectives.
There’s no point in delivering a service seamlessly if you can’t impress everyone with your performance charts, hopefully heading upwards, at the end of the month.
An orchestration system needs to be able to manage the hardware and software in the data centre and its virtualisation systems, including servers, storage and networks.
It also needs to automate the process of creating, decommissioning and moving computational power, virtual machines and storage around the facility – and much, much more besides.
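That lifecycle automation boils down to a single programmatic interface for the create, move and retire operations. A toy sketch – the class, methods and host names are all invented for illustration, not any real orchestration API:

```python
# Hypothetical lifecycle-automation sketch: one interface for creating,
# moving and decommissioning virtual machines across the facility.
class DataCentre:
    def __init__(self):
        self.vms = {}  # VM name -> host name

    def provision(self, vm: str, host: str) -> None:
        """Create a VM on the given host."""
        self.vms[vm] = host

    def migrate(self, vm: str, new_host: str) -> None:
        """Move a running VM to another host."""
        self.vms[vm] = new_host

    def decommission(self, vm: str) -> None:
        """Tear the VM down and reclaim its resources."""
        del self.vms[vm]

dc = DataCentre()
dc.provision("web-01", "rack-a")
dc.migrate("web-01", "rack-b")   # e.g. ahead of maintenance on rack-a
print(dc.vms)                    # → {'web-01': 'rack-b'}
dc.decommission("web-01")
```

An orchestration service layers the rules on top, calling operations like these without a human at the keyboard.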
Nice work if you can get it. But there are a few sores on the underbelly of the orchestration theory.
The "orchestra" part of the moniker isn’t there by mistake. IT departments work along functional lines – networks team, storage experts and so on – just like an orchestra with its wind, strings and other sections.
Orchestration tries to cut across that technology-driven structure and deliver what you might call a multi-functional, or system-driven, process that harnesses the power of each section towards a common goal. Team bonding, anyone?
Throw some cultural challenges into this mix, too. Do you like the idea of a system making critical decisions for you? Understandably, probably not. Come to think of it, what is the IT department for if a piece of software is making all the decisions?
We are back to the 80/20 rule, or nirvana. The thinking goes that if you automate and orchestrate everything, the IT department can spend 20 per cent of its time monitoring and fire fighting and the rest on those gleaming new services everyone wants.
The reality is that few, if any, are near this goal. More likely they have point solutions that are stepping stones to orchestration. Perhaps the daily backups are automated and given a set of rules to resolve any conflicts. Or maybe some elements of network security are automated.
In other cases you are likely to find what is now being called 'assisted orchestration', which means you still do the groundwork of rules but you hit decision switches and pull levers based on the options that the system presents.
So it runs off and does the grunt work – checking monitors and so on – and you make the decision based on what it tells you. This can be beneficial. Rules is rules.
But what happens when they are broken? What happens when the orchestration system wants to commission a batch of virtual machines when you know that some of them reside in a cabinet that is going to be assaulted by an engineer that afternoon?
That’s where assisted orchestration comes into play.
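That human-in-the-loop pattern could be sketched like this – the checks, figures and option text are all invented for illustration:

```python
# Assisted-orchestration sketch: the system does the grunt work of checking
# monitors and presents options; the human pulls the lever.
def gather_facts() -> dict:
    # Stand-in for real monitoring checks.
    return {"host": "rack-a", "cpu_pct": 92, "maintenance_window": True}

def propose_options(facts: dict) -> list[str]:
    options = []
    if facts["cpu_pct"] > 85:
        options.append(f"migrate VMs off {facts['host']}")
        options.append("provision an extra node")
    if facts["maintenance_window"]:
        options.append(f"defer changes on {facts['host']} until after maintenance")
    return options

# The operator, not the software, makes the final call -- here deferring
# action because an engineer will be in that cabinet this afternoon.
options = propose_options(gather_facts())
decision = options[-1]
print(decision)  # → 'defer changes on rack-a until after maintenance'
```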
Admittedly, orchestration isn’t exactly top of everyone’s agenda. The software and theory are still evolving, and most people can’t stop fire fighting to realign an entire department to a new practice.
But the benefits seem well defined and there are lots of good candidates for orchestration:
- Undocumented, manual, error-strewn processes
- Scripts whose maintenance has become a real drag
- Complex processes that interact with multiple applications
You could make an argument for following this list top to bottom, getting the skills and operations down to a fine art at the lower level and building from there. OK, so it may not take you all the way to orchestration heaven but it builds a solid foundation for success in the future.
It will be good practice too, with trends such as cloud services, service assurance and automation sweeping through businesses at present.
And now at least you have a rough idea of where to start. ®