Original URL: https://www.theregister.co.uk/2013/10/04/cloud_alm/
Prolong the working life of your cloud applications
Plan for the future
Application lifecycle management (ALM) is a critical foundational concept. We haven't always called it this – and no doubt there will be new names as marketing requires them – but the idea has been there all along.
In the past few years we have started to formalise processes around this concept, just in time for the cloud to come along and change everything.
ALM is simple to understand and fiendishly complex to implement. The concept is that all applications have a lifespan: they are conceived, created, maintained and then retired.
Maintenance includes both development and operational issues such as patching, feature additions, versioning and scaling.
From one viewpoint, this is the definition of our industry. People don't buy computers just for the sake of it; they buy them because the applications that run on them serve a definable need.
It is the details that matter. ALM treats many elements as black boxes. This is where the operations or development guys wave their magic wands and something occurs.
With the rapid uptake of cloud computing and the rise of the DevOps movement, the shape of the pretty charts and the responsibility arrows are changing.
Not so long ago, most developers had things pretty easy. They had enormous power to dictate the environments their applications ran in, and they didn’t have that many environments to worry about.
If you were writing end-user applications you had a choice of Windows or Apple – and it was considered acceptable simply to ignore Apple.
Server-side stuff was much more difficult. There were a dozen different varieties of Unix and Linux that mattered, and each did things just differently enough to make installing and running your application a bit of a pig.
Masters of the universe
But in this era of developer supremacy, it was also considered acceptable simply to ignore petty details such as ease of installation, and point the finger of blame at operations. Developers defined the environment the application was to run in, and the job of operations was to make it do so.
This generated situations that proved to be painful whenever upgrades or platform changes were called for. As the internet became a tool of business and leisure, the problems only intensified.
The emergence of the web roughly corresponded with some big changes in platforms. Windows became a force within the IT industry. The main Unix platforms started to consolidate while Linux and BSD took off. Apple entered a dark place and virtually every other platform went deeply niche or evaporated altogether.
The web emerged as an application delivery mechanism. Standards were created, abandoned, extended, abused and ignored. Browser development stalled for years while new security bugs threatened everything on the internet – which turned out to be a lot more than should ever have been allowed near it.
Hardware changes hit the industry like a whirlwind. Application developers saw their user base at the beginning of their application's lifespan drinking the internet through a 14.4kbps dial-up straw; by the end they were connected to it 24/7 through a 1.5Mbps broadband link and an 11Mbps Wi-Fi connection.
Operations departments responded by establishing rigid refresh cycles in the hope of providing some form of stability to developers. Management kept cutting back the budgets and developers kept pushing back the refresh cycle dates.
Stop the world
As we entered the new century, something had to give. Developers couldn't possibly address such rapid diversity, and operations teams were stretched to the limit as they tried to find a balance between securing systems, updating them to meet user demand and keeping badly maintained applications running.
In the early 2000s Microsoft merged its consumer and business Windows lines into a single offering. Hardware changes became less radical and more incremental.
The standards wars of the late 90s were mostly a bitter memory. Even the software used for common tasks was being supplied by at most three key players per category. By now even the most bureaucratic management machine was becoming aware of how vital IT was to the successful functioning of a business.
Unfortunately this came about because the preceding 10 years had seen a shocking string of blunders and calamities which meant that the nerds had to be brought to heel.
It is here that ALM started to enter the lexicon of our industry. Bickering operations and development fiefdoms were no longer tolerated. Finger pointing wasn't solving problems and management frameworks emerged to deal with the rapid pace of change in IT.
Peace and harmony
Now we have cloud services. You can roll your own, rent resources from a local service provider or go with one of the heavy-hitter cloud providers.
Developers don't need to fight with operations to get a new environment spun up to work in. They can fill out a form on a web page and have a virtual machine configured to their specifications in minutes.
The operations team does not need to fight with developers to scale applications. New virtual machines can be spun up as demands increase and spun down when they are not needed.
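That spin-up/spin-down loop boils down to a simple control decision. A minimal sketch in Python, with purely illustrative thresholds and VM limits (nothing here is tied to any real provider's API):

```python
def scaling_decision(cpu_percents, current_vms, min_vms=2, max_vms=20,
                     scale_up_at=75.0, scale_down_at=25.0):
    """Decide how many VMs to run next, given recent per-VM CPU load.

    Thresholds and limits are hypothetical; a real autoscaler would also
    apply cooldown periods and smooth readings over a longer window.
    """
    avg = sum(cpu_percents) / len(cpu_percents)
    if avg > scale_up_at and current_vms < max_vms:
        return current_vms + 1   # demand is up: spin up one more VM
    if avg < scale_down_at and current_vms > min_vms:
        return current_vms - 1   # demand is down: spin one down
    return current_vms           # within the band: hold steady
```

So a fleet of four VMs averaging around 88 per cent CPU would grow to five, while the same fleet idling at 15 per cent would shrink to three.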
Patches and operating system changes can be tested easily and well in advance of mainstream deployments. Workloads can be moved from on-premises to service provider to public cloud and back again. A nirvana of chaos avoidance is waiting for all, if only we'd embrace it!
Such, at least, is the marketing hullabaloo. Some of it – in fact a lot of it – is actually pretty accurate. If you want to see a bunch of people who can do amazing things with cloud services go to PuppetConf. These people have this stuff in the bag.
No matter how many layers of abstraction we try to build between the code and the hardware that runs the workload, ALM is still a very real consideration.
That software you are writing has to execute on an operating system. It is going to present an interface to the user somehow, store its data somewhere and probably require third-party libraries and data sources to make it all work.
Migrate a PHP application off an old server onto a brand new one and you will find that short tags are deprecated and off by default. There is nothing wrong with your application, but because you didn't get the memo, it won't run unless you tweak a setting.
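The setting in question lives in php.ini. A minimal illustration, assuming the default configuration shipped with the new server:

```ini
; php.ini — the directive that bites legacy code after a migration.
; Templates using <? ... ?> short tags stop being parsed as PHP
; when this is Off (the modern default); <?php ... ?> always works.
short_open_tag = On   ; re-enable short tags, or rewrite templates to full tags
</imports>
```

Flipping the directive is the quick fix; rewriting the templates to use full `<?php ... ?>` tags is the one that survives the next migration.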
Cloud computing won't make software dependency problems go away
Migrate from MySQL 5.1 to 5.6 and changes in the optimiser can turn a previously fast application into a slow-motion nightmare, complete with end-user riots, pitchforks and torches.
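When that happens, the usual first-aid is to look at the plan the new optimiser has chosen and, if need be, pin the old one while you rework the query. A hypothetical sketch (the table and index names are invented for illustration):

```sql
-- See which index MySQL 5.6 now picks for the slow query.
EXPLAIN SELECT * FROM orders
WHERE customer_id = 42 ORDER BY order_date DESC LIMIT 20;

-- Stopgap: force the index 5.1 used, restoring the old plan
-- while you refresh statistics (ANALYZE TABLE) or rewrite the query.
SELECT * FROM orders FORCE INDEX (idx_customer_date)
WHERE customer_id = 42 ORDER BY order_date DESC LIMIT 20;
```

`FORCE INDEX` is a crutch, not a cure: it papers over the regression until the schema or the query catches up with the new optimiser.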
Cloud computing – or more accurately, the self-service model it stands for – is a machete for a certain type of red tape, but it won't make these sorts of software dependency problems go away. What it does do is change the focus of your ALM.
ALM will become synonymous with change management. The vision of project managers who deal with IT will move from the tactical to the strategic.
Instead of developing with an eye only on the immediate problem, or on the budget worries of the upcoming quarterly review, design and development will encompass years. What is the useful lifetime of this application? One year? Two years, perhaps, or even 10?
Pick a vendor
Change entails risk. Who do you build on? What underpins your application? Those are really the most important questions in ALM. Your choices at the beginning of the project determine how it will all play out over time.
Vendor selection is a tricky thing. Many of the applications used by businesses today haven't seen major overhauls in 10 or even 20 years.
If you dedicate yourself to a cloud provider, will it still be there decades from now?
The goliaths probably will be. Amazon, Microsoft and Google are no Nirvanix. They are not going to up and close shop with two weeks' notice; if they did, the economies of entire nations would probably implode.
That doesn't prevent a slower, more lingering death. Nor does it prevent one provider from pulling so far ahead of the others that you find yourself at a significant competitive disadvantage for having chosen the lesser weevil.
Amazon's relentless drive to evaporate margins can only work for so long. It is a great tool to drive market share, but you can't achieve growth through market share alone and eventually you have to go back and turn the knobs on your (hopefully) captive audience.
When that day comes, the knob turning will become an addiction – one that many of the businesses that have chosen Amazon won't survive.
Google is currently rudderless. There is a lot of "follow the leader" but very little banner-waving differentiation. Selling spare capacity and monetising in-house developed platforms worked well for Amazon, but what is Google's hook?
If all it has is price – and that's how it seems today – that is worrisome. You can't compete with Amazon on price unless you are heavily subsidising with ad revenues – a problem now that the public has been reawakened to privacy concerns.
There are reservations to be had about Microsoft. With CEO Steve Ballmer on his way out, who takes up the baton? If it is Satya Nadella, vice-president of Microsoft’s cloud and enterprise group, then there is a good chance that all the good done by the server and Azure teams over the next few years will continue for a decade or more.
Microsoft's server folks have the right approach to things. You use the same infrastructure and management tools – even the same development tools – to run your own on-premises infrastructure, talk to a local service provider or deal with Microsoft's Azure public cloud.
You can move a workload from A to B to C and back again with a minimum of fuss. You can even automate it if you like.
The technology is good and Microsoft right now is really the only player that can offer it in an easy-to-use and tested fashion. The differentiator is that it is not just about the underlying infrastructure, but about deep integration between the infrastructure tools, the operating systems that live on that infrastructure and the infrastructure-like applications (SQL, IIS and so on) that run on top of them.
Ideally Microsoft software should offer similar tight integration with other operating systems and infrastructure-level applications. There is still a lot of ground to cover, but the company is getting there.
Unless something radical happens Microsoft looks set to continue its drive to make outsiders first-class citizens on its infrastructure.
Like it or not, ALM's strategic thinking is becoming a basic requirement of development. "Buy exactly the same computers our developers use and set up your whole network just like their test lab" simply won't fly, any more than "IE6 only" is a viable option for websites today.
The tools available to developers are making development easier; along with this comes the means to bring into line recalcitrant devs who are still trying to control the end-user environment.
While this is a concern for those offering packaged software for consumption by the mass market, it is also a real consideration for in-house developers.
The tools today have changed the landscape so thoroughly that it may well be easier for departments to fund a guerrilla IT version of their own skunkworks project rather than try to cut through the red tape of internal IT.
When that starts happening people get fired. Whether you are Dev or Ops, internal IT or a packaged vendor, that makes it worth paying attention to ALM. ®