
Binary interfaces in component development

Use shared objects but don't get binary interfaces?


Part 1: Compatibility is a huge problem in software development. It's often cited as an argument against Linux; there's no guarantee of forward compatibility to ensure the applications of today will work in the Linux of 2010.

Multinational corporations such as Microsoft and Sony spend millions trying to ensure that new versions of platforms work with the hardware and software of their predecessors.

Bizarrely, the issue comes from our strongest mental asset: the capacity to divide a problem into smaller pieces of manageable size. As increasingly complex problem spaces are modelled, the number of these smaller, manageable components grows and itself becomes unmanageable. Releases and updates mean each component has different versions, and testing each component with all possible combinations of other components would be like trying to brute-force the lottery result.

Unfortunately, the evolutionary process that gave us the wonderful ability to decompose a problem neglected to furnish us with a corresponding ability to configuration manage the results.

Here at Reg Developer, we're not content to suffer such evolutionary oversights. However, we're going to fight our limitations by taking a good look at where problems of binary compatibility come from. Our examples are taken from the world of C & C++, but analogous issues exist in any language that supports component development.

C++ is simply an easy target because it provides so many areas where care must be taken. By the end of the article we'll have identified how even a minor change can break binary compatibility and we will see the techniques needed to investigate such incompatibilities.

Compatibility of components is a question of the compiled code more than the source code. When a library uses a class, function, or data item that is defined in another library, we have a binary dependency. In each application there is an implicit binary interface between each pair of binary components. If the library containing the definition changes, and the dependent library is not recompiled, then we risk undefined behaviour at runtime because, depending on the nature of the change, the two components have potentially conflicting views of a single construct.

Furthermore, the view in question is the object code view, and some things you might not expect can lead to inconsistencies. As developers, we tend to think of this interface in the abstract modelling domain: as calls to member functions on an object defined in another library, for example. This philosophy, however, doesn't account for subtleties arising from language features that blur the boundary between a class and its containing library.

These issues are illustrated in the following examples, where we look at the havoc caused when the binary interface is inadvertently altered by seemingly innocuous changes. We imagine a scenario where there's pressure to minimise releases of binaries. We may not expect the proposed changes to require rebuilding any binary other than the one that contains the principal definitions, but in practice they will. There are two components; one contains the implementation of a class used by the other. For something so simple, in how many ways can we break the binary interface?

//
//// Component libSomelib.a
//// SomeClass.h
//
class SomeClass
{
public:
    // lib functions
    int getAInLib() const;
    void setAInLib(int newVal);

    // inline functions
    int getAInline() { return _a; }
    void setAInline(int newVal) { _a = newVal; }

private:
    int _a;
};

//
//// SomeClass.cpp
//
#include "SomeClass.h"

int SomeClass::getAInLib() const { return _a; }
void SomeClass::setAInLib(int newVal) { _a = newVal; }

//
// Commands to build the library
g++ -c SomeClass.cpp && ar r libSomelib.a SomeClass.o
//
//// Component main
//// main.cpp
//
#include "SomeClass.h"
#include <iostream>
using namespace std;

int main()
{
    SomeClass x, y;
    x.setAInline(10);
    cout << " getAInline a=" << x.getAInline() << endl;
    cout << " getAInlib a=" << x.getAInLib() << endl;

    y.setAInLib(13);
    cout << " getAInLine a=" << y.getAInline() << endl;
    cout << " getAInLib a=" << y.getAInLib() << endl;
}

//
// Commands to compile and execute
g++ -I../somelib main.cpp -L../somelib -lSomelib && ./a.out

//
// Output when it all works as expected
getAInline a=10
getAInlib a=10
getAInLine a=13
getAInLib a=13

So, what would happen if we add a member variable to SomeClass? We're adding something new, not changing something that exists, so we might be tempted to assume this shouldn't break any binary interface. After all, if code doesn't use the new variable, does it need to be relinked against the library containing it? Let's modify the data declarations as follows and recompile the executable without recompiling the library.

//
// We add member variable _b to class SomeClass
private:
    int _b, _a;

//
// We recompile the executable, without recompiling the library, and 
// execute
g++ -I../somelib main.cpp -L../somelib -lSomelib && ./a.out
getAInline a=10
getAInlib a=4197488
getAInLine a=0
getAInLib a=13

When we recompile we notice that there are no linker errors, but the output is not correct.

We have probably all heard at one point or another that when code compiles and links you can be sure there are no underlying problems. This example shows that this isn't always the case; code in the library has a different memory representation for the object than code in the executable and, consequently, we get garbage output.

This occurs because member data is accessed relative to the base location of an object in memory; each member variable is located at a different offset from this base location. The code in the library was compiled with one set of member declarations, and hence one set of offset values. The code in the executable was compiled with different member declarations, i.e. different offsets, and consequently the inline functions give different results than their library defined counterparts.
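The offset shift can be seen directly with offsetof. Here is a minimal sketch, using hypothetical struct names to stand in for the two views of SomeClass rather than the real headers:

```cpp
#include <cstddef>   // offsetof

// Hypothetical stand-ins for the two views of SomeClass: the library
// was compiled against the old layout, the executable against the new
// one after _b was added.
struct OldView { int _a; };
struct NewView { int _b, _a; };

// Member data is addressed by offset from the object's base, so the
// same logical member _a lives at a different offset in each view.
static_assert(offsetof(OldView, _a) == 0, "library's offset for _a");

// With two consecutive ints there is no padding, so _a moves to
// offset sizeof(int) in the executable's view.
static_assert(offsetof(NewView, _a) == sizeof(int), "executable's offset for _a");
```

If the asserts hold, the two translation units genuinely disagree about where _a lives, which is exactly what turned the output above into garbage.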

We can extrapolate from this that any changes to inline function definitions, member data, and template function definitions are going to wreak havoc on the binary interface. It's safe to say that any change to a header file visible to dependent components will probably necessitate recompilation and a re-release of dependent components. Even if the rebuild isn't strictly necessary it's probably faster to do it than to determine that it wasn't necessary.

However, rebuilding isn't always cheap and easy. What do you do if the dependent components are managed by different project teams, different departments, or even different companies? One way (there are others) is the department-wide email: "I'm checking in my change, everyone update quickly and recompile."

The issue occurs because the logical and physical definitions of the class are not in the same place. Logically, the class and its member functions are defined in the component Somelib and each function has only one definition. However, some members of SomeClass are not physically defined in this library; the inline functions are absent from the library but are present in the executable.

Furthermore, because of language features such as generated constructors there are inline functions that aren't obvious. That's not to mention the template instantiations that generate code for specialisations of templates defined in any number of different components. This disparity between the binary location of a definition and its logical counterpart is a source of problems in C & C++ that take expertise to avoid and time to resolve.
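Templates show the same split between logical and physical definition: the object code for an instantiation is emitted in whichever translation unit uses it, not where the template is written. A minimal sketch with a hypothetical function template:

```cpp
// clampTo is logically defined here, but the object code for
// clampTo<int> is generated in every translation unit that calls it,
// just like an inline member function. Change this definition and every
// dependent binary holds a stale copy until it is recompiled.
template <typename T>
T clampTo(T value, T low, T high)
{
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}
```

With GCC, for example, nm on the resulting objects shows clampTo<int> as a weak symbol in each user rather than in any one "owning" library.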

Here, we've seen a simple example, but who hasn't encountered "unresolved external symbol", both in development and with third-party components? Similarly, it would be nice to say that runtime crashes on Windows are rare, but I've seen enough "pure virtual function call" critical errors to recognise an issue.

We need componentised development, not least because it allows us to divide and conquer. Most would agree that avoiding the problem by going back to monolithic executables isn't really an option. However, it's important to recognise that a real problem arises in the shift from the logical, conceptual domain of source code to componentised binary code that must support interoperability and modern language features.
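One widely used C++ mitigation, sketched here with our own hypothetical names rather than the article's SomeClass, is the pimpl idiom: the public header exposes only a pointer, so the real data layout never enters a dependent component's object code:

```cpp
// --- widget.h: all that dependent components ever compile against ---
class Widget
{
public:
    Widget();
    ~Widget();
    int  getA() const;
    void setA(int newVal);
private:
    struct Impl;   // layout deliberately hidden from clients
    Impl* _impl;   // clients only ever see one pointer
    // (copying omitted for brevity; a real class would define or
    // disable the copy constructor and assignment operator)
};

// --- widget.cpp: private to the library ---
struct Widget::Impl { int a; };   // members can be added freely here

Widget::Widget() : _impl(new Impl) { _impl->a = 0; }
Widget::~Widget() { delete _impl; }
int  Widget::getA() const     { return _impl->a; }
void Widget::setA(int newVal) { _impl->a = newVal; }
```

Adding a member to Impl now changes only widget.cpp; sizeof(Widget) and every member offset seen by clients stay fixed, at the cost of an extra allocation and indirection per object.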

In the next part of this series, we look at how to manage the issues that come from this domain shift, how to minimise incidents of broken binary interfaces, and the tools needed to investigate binary problems when they occur. We'll take a critical look at Microsoft's COM and the sonames (special shared library names) of Linux; and at what can be done in the development environment to allow components to evolve without giving developers sleepless nights. ®
