Prolong the working life of your cloud applications
Plan for the future
No matter how many layers of abstraction we try to build between the code and the hardware that runs the workload, application lifecycle management (ALM) is still a very real consideration.
That software you are writing has to execute on an operating system. It is going to present an interface to the user somehow, store its data somewhere and probably require third-party libraries and data sources to make it all work.
Migrate a PHP application off an old server onto a brand new one and you will find that short open tags are now deprecated and disabled by default. There is nothing wrong with your application, but because you didn't get the memo, it won't run until you tweak a setting.
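On servers where you control the configuration, the stopgap is a one-line change in php.ini (the longer-term fix is to rewrite `<?` as `<?php` in the source). A minimal sketch of the setting in question:

```ini
; php.ini — re-enable short open tags so legacy <? ... ?> blocks still parse
short_open_tag = On
```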
Cloud computing won't make software dependency problems go away
Migrate from MySQL 5.1 to 5.6 and changes in the optimiser can turn a previously fast application into a slow-motion nightmare, complete with end-user riots, pitchforks and torches.
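A cheap safety net before a migration like this is to capture EXPLAIN output for your hottest queries on the old server and diff it against the new one, looking for changed join order, index choice or row estimates. The schema below is purely illustrative:

```sql
-- Run on both MySQL 5.1 and 5.6 and compare the plans.
-- Table and column names here are hypothetical.
EXPLAIN
SELECT o.id, c.name
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
WHERE o.status = 'open'
ORDER BY o.created_at DESC
LIMIT 50;
```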
Cloud computing – or more accurately, the self-service model it stands for – is a machete for a certain type of red tape, but it won't make these sorts of software dependency problems go away. What it does do is change the focus of your ALM.
ALM will become synonymous with change management. The vision of project managers who deal with IT will move from the tactical to the strategic.
Instead of developing with an eye only on the immediate problem, or on the budgets and worries of the upcoming quarterly review, design and development will encompass years. What is the useful lifetime of this application? One year? Two years, perhaps, or even 10?
Pick a vendor
Change entails risk. Who do you build on? What underpins your application? That is the most important question in ALM. Your choices at the beginning of the project determine how it will all play out over time.
Vendor selection is a tricky thing. Many of the applications used by businesses today haven't seen major overhauls in 10 or even 20 years.
If you dedicate yourself to a cloud provider, will it still be there decades from now?
The goliaths probably will be. Amazon, Microsoft and Google are no Nirvanix. They are not going to up and close shop with two weeks' notice; if they did, the economies of entire nations would probably implode.
That doesn't prevent a slower, more lingering death. Nor does it prevent one from pulling so far ahead of the others that you are at a significant competitive disadvantage because you chose the lesser weevil.
Amazon's relentless drive to evaporate margins can only work for so long. It is a great tool to drive market share, but you can't achieve growth through market share alone and eventually you have to go back and turn the knobs on your (hopefully) captive audience.
When that day comes, the knob turning will become an addiction – one that many of the businesses that have chosen Amazon won't survive.
Google is currently rudderless. There is a lot of "follow the leader" but very little banner-waving differentiation. Selling spare capacity and monetising in-house developed platforms worked well for Amazon, but what is Google's hook?
If all it has is price – and that's how it seems today – that is worrisome. You can't compete with Amazon on price unless you are heavily subsidising with ad revenues – an issue with a populace recently reawakened to privacy concerns.
There are reservations to be had about Microsoft. With CEO Steve Ballmer on his way out, who takes up the baton? If it is Satya Nadella, vice-president of Microsoft’s cloud and enterprise group, then there is a good chance that all the good done by the server and Azure teams over the next few years will continue for a decade or more.
Microsoft's server folks have the right approach to things. You use the same infrastructure and management tools – even the same development tools – whether you run your own on-premises infrastructure, talk to a local service provider or deal with Microsoft's Azure public cloud.
You can move a workload from A to B to C and back again with a minimum of fuss. You can even automate it if you like.
The technology is good and Microsoft right now is really the only player that can offer it in an easy-to-use and tested fashion. The differentiator is that it is not just about the underlying infrastructure, but about deep integration between the infrastructure tools, the operating systems that live on that infrastructure and the infrastructure-like applications (SQL, IIS and so on) that run on top of them.
Ideally Microsoft software should offer similar tight integration with other operating systems and infrastructure-level applications. There is still a lot of ground to cover, but the company is getting there.
Unless something radical happens Microsoft looks set to continue its drive to make outsiders first-class citizens on its infrastructure.
Like it or not, ALM's strategic thinking is becoming a basic requirement of development. "Buy exactly the same computers our developers use and set up your whole network just like their test lab" simply won't fly, any more than "IE6 only" is a viable option for websites today.
The tools available to developers are making development easier; along with this comes the ability to rein in recalcitrant devs who are still trying to control the end-user environment.
While this is a concern for those offering packaged software for consumption by the mass market, it is also a real consideration for in-house developers.
The tools today have changed the landscape so thoroughly that it may well be easier for departments to fund a guerrilla IT version of their own skunkworks project rather than try to cut through the red tape of internal IT.
When that starts happening people get fired. Whether you are Dev or Ops, internal IT or a packaged vendor, that makes it worth paying attention to ALM. ®