Standards and interoperability: Are you backing the right horse?
Good and bad, costs and risks
IT can sometimes seem like one long, drawn-out process of making things work with each other. Whether it’s getting back-end systems to exchange information, or trying to open a file that has arrived in an unexpected format, most people who work with technology will be familiar with the challenge. But surely standards are supposed to help here, right?
Nobody could doubt their importance: certain facets of computing have settled on particular protocols and formats, for example, without which we would not have the Internet or, by extension, the Web. For every standard that succeeds, however, many fall by the wayside, and there seems to be no link between the amount of effort put into a standard’s development and its likelihood of success.
Backing the wrong standard is fraught with hazard for technology vendors and end-users alike. We might as well get the VHS vs Betamax analogy out of the way; in IT we have equivalents in such areas as local area networking (Ethernet vs token ring) and messaging systems (SMTP vs X.400). It would be good if we could simply use what was already standardised, but the trouble with standards is that they never seem to keep up with what either the industry or end-user organisations want to do.
Some standards simply have to exist before adoption can happen – 10 Gigabit Ethernet for example, which brings together storage networking and server networking protocols, and which has to be agreed across manufacturers to make any sense at all. But think of virtualisation right now: wouldn’t it be preferable to have a single standard for an x86 virtual machine? Perhaps, but we don’t, and no organisation is going to wait around for ISO, ECMA or whichever august body might be chosen to decide which virtual machine format should prevail.
What should be at the centre of any such decision-making is not always whether a standard is completely ‘open’. Alongside all the examples of internationally agreed standards, organisations are quite capable of accepting a proprietary standard (now there’s an apparent oxymoron), and content to do so, if it suits their needs. Java became a de facto standard without ever being formally ratified, for example, and the Portable Document Format (PDF) was ubiquitous long before ISO standardised it. We can argue the rights and wrongs of vendor involvement in standards negotiations until the cows come home, but ultimately, de facto standards are decided by the market.
Against this background, what matters most is the desired level of interoperability between the systems an organisation uses. ‘Interoperability’ is a nebulous term, so we won’t try to define it here. Rather, let’s consider the differences between good and poor interoperability, in terms of benefits, costs and risks.
Good interoperability translates into flexibility, for both existing and new systems. Requirements change, and IT needs to follow suit – but all too often we find ourselves locked into existing systems, applications or interfaces because they don’t let us do what we want to do. With flexibility comes choice – that is, more options as we look to take our IT environments forward. Both have a financial impact, whether in reducing the cost of IT products or the time we spend writing and maintaining interfaces and building work-arounds.
Poor interoperability, by contrast, breeds inefficiency: the costs of keeping things up and running, or of adapting systems to meet new requirements, can quickly escalate when the necessary interfaces or standards are absent. It’s worth pointing out that interoperability can easily be broken by a few bad decisions at the design stage.
For example, I remember working with a company that ran into problems with a certain open source application server after a contractor took it upon himself to extend the code to add a few custom enhancements. You can imagine the effort wasted retrofitting those ‘enhancements’ into each subsequent release, particularly once he no longer worked for the organisation.
Interoperability is desirable but it is not an absolute, just as it is not possible to design a single device that can meet every need. There will always be a place for proprietary solutions, particularly if they work extremely well, and the value derived from them is significantly greater than any ‘interoperable’ equivalent. Equally, commercial vendors answerable to their shareholders (that’s just about all of them) will always be looking after number one – interoperability and standards are a means to a largely financial end.
On this note, it is important for end-user organisations to be discerning, not to mention a little suspicious, when it comes to matters of interoperability. Vendors across the board will claim their own products work ‘better together’ (with the implication that running a multi-vendor, best-of-breed environment will never be as good), but you should keep an eye on both the short- and long-term costs of any lock-in this implies. This is as true of traditional application stacks (“our app works better on our database”) as of newer online models, many of which still pay scant regard to matters of data accessibility or portability.
Due diligence is key when buying and deploying IT systems and services, in terms of both what you need a system to do now, and what you might need it to do in the future. A few questions asked early on around interoperability can go a long way; otherwise, by the time you find you have backed the wrong horse, it may be too late to do much about it. ®
Over a third of a century of un*x ...
... and so far I have seen no interoperability issues, from the IBM 3151 dumb terminal to the small cluster of VAXen to the three-year-old Sun to the laptop with Slackware 13.1 Beta 1 ... Interoperability issues are always caused by (mis)management of resources, usually driven by marketing forces.
Only part of the story.
"Due diligence is key when buying and deploying IT systems and services, in terms of both what you need a system to do now, and what you might need it to do in the future. A few questions asked early on around interoperability can go a long way; otherwise, by the time you find you have backed the wrong horse, it may be too late to do much about it."
This is really only part of the story. There are many facets to ensuring compatibility between products. The first part is indeed “due diligence,” but what exactly constitutes due diligence? Vendor promises? Here’s a shocker for you: salesmen LIE.
No, despite what many people in “the industry” would tell you, and certainly despite what Intel will tell you (over and over and over again), one of the worst things in the world you can do is buy brand-new technology. Now, I don’t mean buying new computers with a warranty, etc. is a bad thing. I mean buying version 1 of anything is completely ****ing stupid. You don’t buy Vista, you wait for Windows 7. You don’t build a smartphone on Moorestown, you wait for Medfield. Etc.
Let’s stick with the smartphone analogy for a second, because it gives us a good opportunity to look at an up-and-coming technology trying to break into an extant market. Right now, if a dozen phones came out with Moorestown/MeeGo, I wouldn’t even bother testing a single one of them. I would stick with RIM, or consider Android on ARM because it’s proven. One refresh cycle later (three years on), MeeGo 2.0 on Medfield would be out, both the hardware and the software having had a generation of early adopters walk face-first into the current minefield of lies, damned lies and statistics for me. I would have a reasonable idea of how Android on ARM stacked up against RIM and against MeeGo on x86, and what sort of patent catfights or standards lock-in **** swinging was on the horizon. (El Reg and Ars serve their purpose by keeping me informed of such things.)
Moving that over to the latest and greatest server hibbery-jibbery: let us say that Intel comes out with the super-deluxe 16-core HAHAHA processor with added IOMMU pi.5 and some awkward decision to do something strange like migrate vPro directly into the processor. Fantastic; it’s a new processor requiring a whole new type of motherboard with eleventeen squillion pins, and it’s fundamentally incompatible with AMD’s approach to the exact same thing. This means that in order to even begin to compare one to the other, I have to wait for VMware to get samples, write code for both manufacturers, and run through a couple of generations (to deal with patching bugs, etc.) before I would have a real idea of what benefits (if any) this “new hotness” can bring. Not only that, but further questions arise: would the inevitable incompatibilities and attempted lock-ins prevent me from migrating my VMs across architectures? Would it be fully backwards compatible? So many questions.
The truth is I don’t trust vendors. Not a single one of them. They all play their games, preaching “openness” out of one side of their mouth while telling you how their lock-in is the greatest thing since the LED out of the other. Maybe in the world of high-performance computing the need to squeeze a few more gflops per square foot matters so much that interoperability, reliability and avoiding technological dead ends are simply not relevant concerns. Maybe some places can replace all their gear at once every four years. The rest of the world, though, deals with the realities of ageing systems that absolutely must talk to each other and can’t easily be replaced. (For example, I maintain several very large and expensive digital photo printers, somewhere in the quarter-million range each, each of which runs on Windows 2000, will only ever run on Windows 2000, and won’t even take the newest service pack at that. They have a service life that will extend for at least another five years.)
“Due diligence” is only part of it. Experience and a very healthy dose of cynicism are absolutely required for cutting through the FUD and the layers of “but NEWER IS BETTER” that you will receive not only from vendors, but from rabid geeks and management types as well.
Newer may sometimes be better. Newer is, however, always a set of lies, damned lies, statistics, bugs, patches, incompatibilities, yet more lies, and regression and /progression/ testing nightmares waiting to happen.