SDI wars: WTF is software defined infrastructure?

This time we play for ALL the marbles


Sysadmin blog The Software Defined Infrastructure (SDI) war is coming, and it will reshape the information technology landscape like nothing has since the invention of the PC itself.

It consists of sub-wars, each important in its own right, but the game is bigger than any of them.

We have just been through the worst of the storage wars. The networking wars are almost in full swing. The orchestration and automation wars are just beginning and the predictive analytics wars can be seen on the horizon.

Each of these wars would be a major event unto itself. Billions upon billions of dollars will change hands. Empires will rise and startups will fall. Yet despite all of that, each of those wars is a tactical skirmish compared to the strategic – and tactical – war that is only just beginning.

The SDI war will be the net result of all of the sub-wars listed above, as well as several smaller ones that are mostly irrelevant. It is the final commoditisation of servers – and entire datacentres – in one last gasp to counter the ease of use of public cloud computing and the inflated expectations brought about by the proliferation of walled-garden smartphone and tablet technology.

What's in an SDI block?

The SDI wars will not focus on storage, networking or compute, but on radically changing the atomic element of computing consumed. Instead of buying "a server" or "an array", loading it with a hypervisor, then backups, monitoring, WAN acceleration and so forth, we will buy an "omni-converged" compute unit. I shall dub this an SDI block until someone comes up with a better marketing buzzword.

When the dust settles, an SDI block will contain – but by no means be limited to – the following key elements:

  1. A server that will provide compute resources (CPU, RAM, GPU, etc).
  2. Distributed storage resources. Fully inline deduplication and compression are no longer optional (think server SANs).
  3. Fully automated and integrated backups – application aware, auto-configuring, auto-testing. This new generation will be as close to "zero-touch" as is possible.
  4. Fully automated and integrated disaster recovery. Application aware, auto-configuring, auto-testing. This new generation will be as close to "zero-touch" as is possible.
  5. Fully integrated hybrid cloud computing, with resources in the public cloud consumed as easily as local ones, and the ability to move between multiple cloud providers based on cost, data sovereignty requirements or latency/locality needs. The providers who want to win the hybrid cloud portion of the exercise will build in awareness of privacy and security, allowing administrators to easily select not only geo-local providers but those known to have zero foreign legal attack surface, and they will clearly differentiate between them.
  6. WAN optimisation technology.
  7. A hypervisor or hypervisor/container hybrid running on the metal.
  8. Management software to allow us to manage the hardware (via IPMI) and the hypervisor.
  9. Adaptive monitoring software that will detect new applications and operating systems and automatically monitor them properly. This means only alerting systems administrators when something actually needs attention, not flooding their inboxes with so much crap that they stop paying attention. Adaptive monitoring will emphatically not require manual configuration.
  10. Predictive analytics software that will determine when resources will exceed capacity, when hardware is likely to fail, or when licensing can no longer be worked around.
  11. Automation and load maximisation software that will make sure the hardware and software components are used to their maximum capacity, given the existing hardware and existing licensing bounds.
  12. Orchestration software that will not only spin up groups of applications on demand or as needed, but will provide an "app-store" like (or Docker-like, or public cloud-like) experience for selecting new workloads and getting them up and running on your local infrastructure in just a couple of clicks.
  13. Autobursting, as an adjunct of orchestration, will intelligently decide between hot-adding capacity to legacy workloads (CPU, RAM, etc) or spinning up new instances of modern burstable applications to handle load. It would, of course, then scale them back down when possible.
  14. Hybrid identity services that work across private infrastructure and public cloud spaces. They will not only manage identity but provide complete user experience management solutions that work anywhere.
  15. A complete software defined networking stack, including layer 2 extension between datacentres as well as between public and private clouds. This means that spinning up a workload will automatically configure networking, firewalls, intrusion detection, application layer gateways, mirroring, load balancing, content distribution network registration, certificates and so forth.
  16. Chaos creation in the form of randomised automated testing for failure of all non-legacy workloads and infrastructure elements to ensure that the network still meets requirements.
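To make the predictive analytics item (10) concrete, here is a minimal sketch – all names and numbers are illustrative, not any vendor's API – of forecasting when a resource will exceed capacity by fitting a linear trend to utilisation samples:

```python
# Minimal sketch of item 10 (predictive analytics), assuming daily
# utilisation samples. Fits a least-squares line to the samples and
# estimates how many days remain until the trend crosses capacity.

def days_until_capacity(samples, capacity):
    """Return estimated days until the linear trend in `samples`
    crosses `capacity`, or None if usage is flat or shrinking."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None  # no exhaustion on the current trend
    intercept = mean_y - slope * mean_x
    crossing = (capacity - intercept) / slope  # day index at capacity
    return max(0.0, crossing - (n - 1))       # days from the last sample

# 10 GB/day growth against a 100 GB ceiling: six days of headroom left.
print(days_until_capacity([10, 20, 30, 40], 100))  # 6.0
```

A real product would use far richer models, but the point stands: the analytics layer turns raw telemetry into "you have N days before this breaks" rather than a wall of graphs.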

What's the point?

The ultimate goal is true stateless provisioning: the "golden master" concept familiar to anyone running Virtual Desktop Infrastructure (VDI), brought to all workloads.

So you want a MySQL database tuned for the SDI block you are running? The orchestration software will deploy a golden master pre-configured and pre-tested to run optimally on that hardware. Your data and customisations are kept separate from the OS and the application itself. When the OS and app are updated, the vendor alters the image; you simply restart the VM and you're good to go.
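The separation described above can be modelled in a few lines. This is a toy illustration – `Workload` and its fields are invented for the example, not a real product API:

```python
# Toy model of stateless provisioning: the vendor-maintained golden
# master image is swappable, while your data and customisations live
# in a separate volume that survives every image update.

class Workload:
    def __init__(self, image, data_volume):
        self.image = image              # vendor's golden master
        self.data_volume = data_volume  # your data/config, kept apart

    def update_and_restart(self, new_image):
        """Swap in the updated image; the data volume is untouched."""
        self.image = new_image

db = Workload("mysql-golden-1.0", {"datadir": "/data/mysql"})
db.update_and_restart("mysql-golden-1.1")
print(db.image, db.data_volume)  # mysql-golden-1.1 {'datadir': '/data/mysql'}
```

The design choice worth noticing is that state lives in exactly one place; everything else is disposable and replaceable by the vendor.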

All monitoring, backups, networking, storage configuration and so forth will simply take care of themselves. Resources will be allocated dynamically based on the hardware available and the constraints placed by systems administrators on what can be sent to which clouds and when.
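The kind of constraint just mentioned – which clouds a workload may be sent to – reduces to a filter-then-choose step. A sketch with entirely made-up providers and costs:

```python
# Sketch of constraint-driven placement: filter providers by the
# jurisdictions an administrator allows for a workload, then pick the
# cheapest survivor. Provider data here is entirely hypothetical.

providers = [
    {"name": "local-dc", "jurisdiction": "CA", "cost": 1.00},
    {"name": "cloud-a",  "jurisdiction": "US", "cost": 0.40},
    {"name": "cloud-b",  "jurisdiction": "CA", "cost": 0.55},
]

def place(allowed_jurisdictions, providers):
    """Return the cheapest provider in an allowed jurisdiction."""
    candidates = [p for p in providers
                  if p["jurisdiction"] in allowed_jurisdictions]
    if not candidates:
        raise ValueError("no provider satisfies the constraints")
    return min(candidates, key=lambda p: p["cost"])

# Data-sovereignty-bound workload stays in Canada, on the cheaper host:
print(place({"CA"}, providers)["name"])  # cloud-b
```

Real placement engines weigh latency, capacity and legal attack surface too, but the shape is the same: administrators declare constraints once and the software does the choosing.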

Unlike the public cloud, this won't be available only to new workloads coded from the ground up. Legacy workloads are here to stay, and SDI blocks are all about instrumenting them as fully as possible and giving them as much cloud-like simplicity as their aged designs allow.
