Of opposable thumbs and software engineering
Where will evolution take us
It’s been almost 200 years since Charles Babbage first started work on his difference engine, and programmable computation is fast approaching 100 years old.
Over this time there has been a lot of change in software development, and in this article we look at the evolutionary pressures that have shaped it.
Can we predict the next big thing in software development? Will nascent concurrency paradigms be the subject of the next ambush from a passing boss, or will the talk by the water cooler centre more on how to bridge that ever-elusive communication gap with the customer? Read on or risk being unprepared!
If we go back to when the first computers were being programmed, it was with punch cards and state machines, and the development tools were at a pretty basic level. At the time an integrated development environment was a room containing both the computer and the paper tape. In the classier shops there was even a lockable cupboard that served as a source repository and, for the very lucky, a fly swat that could be used for debugging.
Later, as digital computers replaced mechanical ones, computer languages as we recognise them today began to appear, and the hole punches were returned to their rightful owners. A plethora of languages emerged during the following decades, addressing many different purposes in many different paradigms. One in particular holds a special place in the heart of computer science, and for many of us it was our first experience of computer programming.
While never in danger of being adopted by right-thinking development departments, BASIC is important in any discussion about the evolution of software development. BASIC programs are, after all, noticeably primal, uncorrupted by notions of modularity or type safety. The language is diametrically opposed to intelligent design and exhibits many of the problems that the prevailing paradigm of today, Object Orientation, was specifically conceived to avoid.
Since these heady days there have of course been many changes in the tools available to the software developer. As the industry entered the Gold Rush era, the tool vendors got serious about providing ever improving picks and shovels.
Care in the Community
These days it’s bigger business than ever, and we’re seeing unprecedented investment in compilers, development environments, platforms and testing software. How do the tool vendors choose which direction to take the technology in, I wonder? Are they guided by what their own developers and researchers feel would be a good idea, are the decisions made on more commercial than technical grounds, or is there some other process at play?
The management of the Java language and platform by Sun is a marvellous example of developer-driven evolution. Since 1998 the company has engaged with the community through the Java Community Process, and earlier this year we saw the release of OpenJDK. Developers can now perform research or try alternative implementations of language features by downloading the platform source and changing the code.
An older standardisation effort that is perhaps more formal, but certainly involves the community and evolves to meet changing developer needs, is that of C++.
C++, for all its expressive power, its natural syntax, its cross-paradigm support, its interoperability with C, its two-phase lookup of dependent names in templates ... OK, I’ll stop, my apologies - but I’m a fan. For all that, its virtual machine-targeting friends have an important competitive advantage. A native language has to wait for reasonable cross-platform support before it can incorporate new features or support for new paradigms. The abstraction provided by a virtual machine, however, is not bound by this restriction. This capacity to evolve quickly to support new paradigms and frameworks in the languages that target it is extremely significant.
For example, exploiting concurrency is the next big challenge in software development. We’ve all seen that clock speeds on processors no longer double every two years, and yet the complexity of software continues to increase (I’m not looking at the latest version of any family of operating systems in particular). The only way to handle increasing complexity without increasing clock speed is to use the CPU more efficiently. That means the second core on that multi-core CPU is going to have to start pulling its weight, and that means application developers have to start writing multi-threaded code.
However, writing multithreaded code is difficult even with the concurrency support in modern languages. The problem with today’s synchronisation paradigm is that locks proliferate, become global resources, and eventually deadlock can only be prevented by coordinating lock acquisition across modules.
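That cross-module coordination can be sketched in Java. This is a minimal illustration, not a recipe: the `Account` class and its global ordering key are hypothetical, but the pattern - every caller must agree to take locks in the same global order, or two opposing transfers will deadlock - is the kind of whole-system discipline the paragraph describes.

```java
// Sketch: two threads transferring between accounts in opposite
// directions. Naively locking "first argument, then second" can
// deadlock; here acquisition is ordered by a global id instead,
// a convention every module in the system would have to honour.
public class Account {
    private final int id;   // global ordering key for lock acquisition
    private int balance;

    public Account(int id, int balance) {
        this.id = id;
        this.balance = balance;
    }

    public int balance() { return balance; }

    public static void transfer(Account from, Account to, int amount) {
        // Always lock the lower id first, regardless of direction.
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 100);
        Account b = new Account(2, 100);
        Thread t1 = new Thread(() -> transfer(a, b, 30));
        Thread t2 = new Thread(() -> transfer(b, a, 10));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(a.balance() + " " + b.balance()); // 80 120
    }
}
```

The fragility is plain to see: the ordering rule lives in convention, not in the type system, and one module that locks in the wrong order breaks every other.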
This is complicated: a single error is hard to find and can be enough to break the whole system. However, there is light at the end of the tunnel. Researchers at Microsoft and elsewhere are finding new ways to handle the challenges of concurrent programming. Transactional memory looks particularly promising and could potentially be implemented as a feature in a virtual machine, enabling the languages that target it to offer the new concurrency paradigm to developers. This and other aspects of the concurrency problem will be looked at in an upcoming article.
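Transactional memory isn't available in today's mainstream runtimes, but the optimistic read-compute-commit-retry loop at its heart can be illustrated with the compare-and-swap primitives the JVM already exposes. A sketch only - the `Counter` class is invented for illustration and this is not the researchers' actual design:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Optimistic update in the spirit of transactional memory: read the
// current state, compute a tentative result, then commit only if no
// other thread has changed the value in the meantime; otherwise retry.
// No locks are ever held, so there is nothing to deadlock on.
public class Counter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int addAndDouble(int n) {
        while (true) {
            int current = value.get();       // "begin transaction"
            int next = (current + n) * 2;    // compute tentatively
            if (value.compareAndSet(current, next)) {
                return next;                 // "commit" succeeded
            }
            // another thread won the race: retry with fresh state
        }
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        System.out.println(c.addAndDouble(5)); // (0 + 5) * 2 = 10
        System.out.println(c.addAndDouble(1)); // (10 + 1) * 2 = 22
    }
}
```

Notice that correctness no longer depends on every module obeying a locking convention - the commit either happens atomically or the loop retries.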
There’s evolution in other areas too: we’re also seeing the resurgence of domain-specific languages. Whenever there’s been explosive growth in a technology in the past, such as object-oriented programming, it has been preceded by long, drawn-out proofs of concept.
Domain-specific languages certainly fit that profile, with big success stories in applications such as parsing, and with the inclusion of a DSL workbench in Visual Studio 2008. Could they bridge the communication gap between domain experts and software developers? Stay tuned to Register Developer for analysis.
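To give a flavour of the idea, here is a toy internal DSL - a hypothetical fluent query builder, nothing to do with the Visual Studio workbench - where method chaining lets the code read close to the domain language a customer might actually speak:

```java
// A toy internal DSL: chained calls read almost like the domain
// language itself ("select name from users where age > 30"), which
// is the communication-gap-bridging property DSLs promise.
public class Query {
    private final StringBuilder sql = new StringBuilder();

    public static Query select(String columns) {
        Query q = new Query();
        q.sql.append("SELECT ").append(columns);
        return q;
    }

    public Query from(String table) {
        sql.append(" FROM ").append(table);
        return this;
    }

    public Query where(String condition) {
        sql.append(" WHERE ").append(condition);
        return this;
    }

    @Override
    public String toString() { return sql.toString(); }

    public static void main(String[] args) {
        String q = Query.select("name").from("users").where("age > 30").toString();
        System.out.println(q); // SELECT name FROM users WHERE age > 30
    }
}
```

A domain expert who has never seen Java could still read `select("name").from("users").where("age > 30")` and check it against their intent - that readability is the whole point.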
Users aren't losers
Going briefly back to the earlier discussion about evolutionary drivers, you may be surprised to hear there is another agenda that we haven’t yet talked about. Recently I spoke to an architect who had selected .NET for a new project despite the prevailing tide in his organisation towards Java and C++.
When I asked the reasons for this technology choice, I expected an answer along the lines of the development environment being better integrated with the Microsoft platform, or some framework that could be leveraged to reduce the estimate for the project.
However, these were not the motivating factors: the choice was actually made because it was a desktop application, and it was felt that a better user experience could be offered with .NET and Windows Presentation Foundation. End user experience is becoming more and more important. Could the day arrive when it outweighs a technical factor? Sun doesn’t seem to want to take the risk: it's also working on the UI side of Java, with a very impressive JavaFX toolkit in the pipeline.
There’s a famous Dijkstra quotation: "The tools we use have a profound influence on our thinking". In other words, when the only available tool is a hammer, all the problems start to look like nails. As developers we must make the distinction between the solution we’d like to offer and the solution we’re able to implement with available technology; when there’s a difference, we apply selection pressure to the tool vendors and make the evolutionary process work in our interests! ®