We're going for optimised workload delivery...
Are we nearly there yet?
Workshop Who wouldn’t want IT to be delivered in a more dynamic, flexible, agile, [insert your least detested buzzword here] way? Optimal configurations for server and desktop workload delivery have been discussed, and indeed, attempted, for many years. So how close are we to really nailing this?
The answer to this question may depend on what your view of ‘optimal’ is, and this will surely depend on the workloads in question. Availability and stability may be the key requirements for a software application, whereas throughput may be the most important factor for a transaction-processing engine. But application workloads are also dependent on the kit you have to run them on, and the management tools at your disposal. There are other factors as well, not least of which are users’ expectations.
So the first question becomes: how do you establish a baseline picture that draws together the expectations of the business on the one hand, and the requirements and dependencies of the underlying infrastructure on the other? This can be tough, as we know from numerous research studies. The ‘fun’ doesn’t end there, however, as the gaps between user expectations and the actual environment raise questions about exactly what needs to be resolved, and how.
Much of this boils down to deciding which elements of service delivery matter the most. Response times are the ‘old faithful’ of SLAs, but they mask criteria such as availability, scalability and accessibility wherever a user happens to be. These are important factors, of course – but from an infrastructure perspective they are as much symptoms of a well-engineered server environment, managed properly, as anything.
Considering the former, we know that organisations are often prepared to take a gamble with the resilience features they build into the infrastructure. And as for the latter, i.e. good management, we have learned from a number of studies that while things are not broken, they could always be improved.
However, could the question of ‘optimisation’ be fundamentally unanswerable because it is a moving target? Future demands for scalability and performance, for example, will raise new questions over and above those posed by delivering current workloads. All you might care about today is keeping pace with the demands of the business. This, of course, begs the question: just how rapidly do the demands for service vary in your organisation?
If things are static, then keeping things running without too many user complaints may be all you need to do in terms of optimising your infrastructure, at least until someone brings in some new challenge from on high. Perhaps a desire to save money is driving you to review your infrastructure at the moment, for example.
If, on the other hand, the demands of your users vary frequently, are you looking for a better way to optimise how your IT is managed and delivered? A number of options purport to help in this area. For example, are you actively seeking to make use of virtualisation technologies to enable you to manage service delivery as dynamically as the business requires? Maybe you are even considering using cloud-based IT resources to make up for any short-term in-house constraints?
From a practical point of view, does your IT shop lack the wherewithal to make ‘optimisation’ an ongoing thing, or are you fully tooled up and ready for dynamic IT service delivery? Come to that, do you or your business users either want or need dynamically managed IT services? If you believe we’re on the brink of what could really be called optimal IT delivery, or indeed if you think all such ideas are just pie in the sky, we’d be interested to hear. ®
We have never had it so good...
Along with many others, we are working on retrofitting all the good bits of the mainframe back into our AIX and Windows estate, and we seem to have a much better chance of delivering "Optimised Workload Delivery".
Virtualisation of systems from a CPU and memory perspective has allowed us to drastically reduce the time it takes to commission a system (no extra cabling for SAN and network, for instance). Virtualisation has also made it possible to increase hardware utilisation, as there are now no constraints from a hardware card perspective, i.e. running out of network cards, or of internal SCSI disks for root disks, etc. This has enabled us to push up CPU utilisation and drop the cost of each system from both a manpower and a hardware spend perspective.
Advances in the Power6 hardware have also allowed us to segregate partitions from a licensing perspective and to over-commit CPU within a sub-pool, meaning that although the frame may have 64 CPUs, we can run Oracle partitions within a sub-pool of 10 CPUs, cram in as many LPARs as we can, and only pay for 10 Oracle licenses. Again, this brings flexibility and cost savings which can be passed on to the customer.
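To make the saving concrete, here is a minimal back-of-envelope sketch in Python. It is illustrative only: the 64-CPU frame and 10-CPU sub-pool are the figures from the paragraph above, the per-LPAR core counts are invented, and the assumption that a capped sub-pool limits the licensable core count to the pool maximum is exactly that, an assumption to be checked against the actual licence terms.

    # Rough back-of-envelope sketch (not vendor guidance). Assumes Oracle is licensed
    # per processor core and that a capped shared-processor sub-pool limits the
    # licensable cores to the pool maximum rather than the sum of LPAR entitlements.
    # Frame and pool sizes come from the text above; per-LPAR figures are made up.

    frame_cores = 64                        # total cores in the frame (CEC)
    sub_pool_cores = 10                     # cap on the Oracle shared-processor sub-pool
    oracle_lpar_cores = [4, 4, 2, 2, 2, 2]  # hypothetical desired cores per Oracle LPAR

    licenses_per_lpar = sum(oracle_lpar_cores)  # licensing every LPAR's own allocation
    licenses_with_pool = sub_pool_cores         # licensing only the capped pool

    print(f"Licenses if each LPAR is licensed separately: {licenses_per_lpar}")
    print(f"Licenses with a {sub_pool_cores}-core capped sub-pool: {licenses_with_pool}")
    print(f"Over-commit ratio inside the pool: {licenses_per_lpar / sub_pool_cores:.1f}x")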
The dynamic operations side of things allows us to respond to workload peaks without requiring an outage. This helps us meet SLAs and keeps the customer happy, and as long as the software licenses are flexible enough, we can save money and remain legal!
However, to do this properly the customer must be charged for a service and not (as seems to have been the case for many years) for x number of CPUs and x amount of memory. To fully exploit new features for optimised workload delivery, the whole enterprise and way of working, both from an IT perspective and from a customer/business perspective, has to change. To achieve flexibility the customer has to let go of the comfort blanket of knowing that he has bought 4 CPUs and 4 he will get (even if for 90% of the time they only run 30% busy).
So, as with mainframes, one of the headaches around this flexible use of shared resources is working out who pays for what, how you measure it, and how it is charged back and accounted for at the end of the day.
System management also needs to widen its scope and focus on CEC performance and utilisation rather than concentrating on the individual LPAR. Keeping an eye on this enables you to confidently over-commit CPU without flattening the box.
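By way of illustration (and it is only a sketch, with made-up LPAR names, readings and threshold), the frame-level check we mean boils down to something like the following: sum the physical CPU each LPAR is actually consuming and compare it with the cores in the box, rather than watching each LPAR's own utilisation in isolation.

    # Minimal sketch of the frame-level (CEC) view. LPAR names, the source of the
    # readings and the 85% warning threshold are all hypothetical illustrations.

    frame_physical_cores = 64

    # Physical cores consumed per LPAR, e.g. gathered from your monitoring tooling
    lpar_physical_consumed = {
        "db01": 6.2, "db02": 3.8, "app01": 12.5, "app02": 9.1, "batch01": 18.4,
    }

    cec_utilisation = sum(lpar_physical_consumed.values()) / frame_physical_cores
    print(f"CEC utilisation: {cec_utilisation:.0%}")

    if cec_utilisation > 0.85:
        print("Frame nearing saturation - over-committed LPARs may start to be squeezed")
    else:
        print("Headroom available - CPU over-commitment is still safe")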
The other major handbrake on fully exploiting these "new" features is software licensing. Although the big players seem to be more on the ball regarding how customers wish to use the new hardware, there are still many smaller vendors who insist on rigid contractual terms that stand in the way of making full use of this flexibility. Getting these smaller vendors onside can be a time-consuming process; however, I believe that over time nearly all will have to go with the tide.
Anyway, I definitely believe that as these features are taken up more readily, IT has its best chance so far of optimising workload and service provision... if the bean counters, change management and contract management allow!