Binary interfaces in component development

Use shared objects but don't get binary interfaces?

Part 1: Compatibility is a huge problem in software development. It's often cited as an argument against Linux: there's no guarantee of forward compatibility to ensure that the applications of today will work in the Linux of 2010.

Multinational corporations such as Microsoft and Sony spend millions trying to ensure that new versions of their platforms work with the hardware and software of their predecessors.

Bizarrely, the issue stems from our strongest mental asset: the capacity to divide a problem into smaller pieces of manageable size. As increasingly complex problem spaces are modelled, the number of these smaller, manageable components grows and itself becomes unmanageable. Releases and updates mean that each component exists in several versions, and testing each component against all possible combinations of the others would be like trying to brute-force the lottery result.

Unfortunately, the evolutionary process that gave us the wonderful ability to decompose a problem neglected to furnish us with a corresponding ability to configuration manage the results.

Here at Reg Developer, we're not content to suffer such evolutionary oversights. However, we're going to fight our limitations by taking a good look at where problems of binary compatibility come from. Our examples are taken from the world of C & C++, but analogous issues exist in any language that supports component development.

C++ is simply an easy target because it provides so many areas where care must be taken. By the end of the article we'll have identified how even a minor change can break binary compatibility and we will see the techniques needed to investigate such incompatibilities.

Compatibility of components is a question that relates more to the compiled code than to the source code. When a library uses a class, function, or data item that is defined in another library, we have a binary dependency. In every application there is an implicit binary interface between each pair of binary components. If the library containing the definition changes, and the dependent library is not recompiled, then we risk undefined behaviour at runtime because, depending on the nature of the change, the two components have potentially conflicting views of a single construct.

Furthermore, the view in question is the object code view, and things that you might not expect can lead to incoherence. As developers, we tend to think of this interface in the abstract modelling domain - as calls to member functions on an object defined in another library, for example. This view, however, doesn't consider subtleties arising from language features that blur the boundary between a class and its containing library.

These issues are illustrated in the following examples, where we look at the havoc caused when the binary interface is inadvertently altered by seemingly innocuous changes. In our example we imagine a scenario where there's pressure to minimise releases of binaries. We may not expect the proposed changes to require rebuilding of binaries other than the one that contains the principal definitions, but in practice they will. There are two components; one contains the implementation of a class used by the other. For something so simple, in how many ways can we break the binary interface?

//// Component libSomelib.a
//// SomeClass.hpp
class SomeClass
{
public:
    // lib functions, compiled into the library
    int getAInLib() const;
    void setAInLib(int newVal);

    // inline functions, compiled into each client that uses the header
    int getAInline() const { return _a; }
    void setAInline(int newVal) { _a = newVal; }

private:
    int _a;
};

//// SomeClass.cpp
#include "SomeClass.hpp"

int SomeClass::getAInLib() const { return _a; }
void SomeClass::setAInLib(int newVal) { _a = newVal; }

// Commands to build the library
g++ -c SomeClass.cpp && ar r libSomelib.a SomeClass.o
//// Component main
//// main.cpp
#include "SomeClass.hpp"
#include <iostream>
using namespace std;

int main()
{
    SomeClass x, y;
    x.setAInline(10);  // values chosen to match the sample output below
    y.setAInLib(13);

    cout << " getAInline a=" << x.getAInline() << endl;
    cout << " getAInLib  a=" << x.getAInLib() << endl;

    cout << " getAInline a=" << y.getAInline() << endl;
    cout << " getAInLib  a=" << y.getAInLib() << endl;

    return 0;
}

// Commands to compile and execute
g++ -I../somelib main.cpp -L../somelib -lSomelib && ./a.out

// Output when it all works as expected
 getAInline a=10
 getAInLib  a=10
 getAInline a=13
 getAInLib  a=13

So, what would happen if we add a member variable to SomeClass? We're adding something new, not changing something that exists, so we might be tempted to think that this shouldn't break any binary interface. And if code doesn't use the new variable, does it really need to be relinked against the library containing the variable? Let's modify the data declarations as follows and recompile the executable without recompiling the library.

// We add member variable _b to class SomeClass
    int _b, _a;

// We recompile the executable, without recompiling the library, and 
// execute
g++ -I../somelib main.cpp -L../somelib -lSomelib && ./a.out
 getAInline a=10
 getAInLib  a=4197488
 getAInline a=0
 getAInLib  a=13

When we recompile, we notice that there are no linker errors, yet the output is incorrect.

We have probably all heard at one point or another that when code compiles and links cleanly you can be sure there are no underlying problems. This example shows that this isn't always the case: code in the library has a different memory representation of the object than code in the executable and, consequently, we get garbage output.

This occurs because member data is accessed relative to the base location of an object in memory; each member variable is located at a different offset from this base location. The code in the library was compiled with one set of member declarations, and hence one set of offset values. The code in the executable was compiled with different member declarations, i.e. different offsets, and consequently the inline functions give different results than their library defined counterparts.

We can extrapolate from this that any changes to inline function definitions, member data, and template function definitions are going to wreak havoc on the binary interface. It's safe to say that any change to a header file visible to dependent components will probably necessitate recompilation and a re-release of dependent components. Even if the rebuild isn't strictly necessary it's probably faster to do it than to determine that it wasn't necessary.

However, rebuilding isn't always cheap and easy. What do you do if the dependent components are managed by different project teams, different departments, or even different companies? One way (there are others) is the department-wide email: "I'm checking in my change, everyone update quickly and recompile."

The issue occurs because the logical and physical definitions of the class are not in the same place. Logically, the class and its member functions are defined in the component Somelib and each function has only one definition. However, some members of SomeClass are not physically defined in this library; the inline functions are absent from the library but are present in the executable.

Furthermore, because of language features such as generated constructors there are inline functions that aren't obvious. That's not to mention the template instantiations that generate code for specialisations of templates defined in any number of different components. This disparity between the binary location of a definition and its logical counterpart is a source of problems in C & C++ that take expertise to avoid and time to resolve.

Here, we've seen a simple example, but who hasn't encountered "unresolved external symbol", both in their own development and with third-party components? Similarly, it would be nice to say that runtime crashes on Windows are rare, but I've seen enough "pure virtual function call" critical errors to recognise an issue.

We need componentised development, not least because it allows us to divide and conquer. Most would agree that avoiding the problem by going back to monolithic executables isn't really an option. However, it's important to recognise that there is a problem arising from the shift between the logical, conceptual domain of source code and the domain of componentised binary code that must support interoperability and modern language features.

In the next part of this series, we'll look at how to manage the issues that come from this domain shift, how to minimise incidents of broken binary interfaces, and the tools needed to investigate binary problems when they occur. We'll take a critical look at Microsoft's COM and the sonames (special shared library names) of Linux, and at what can be done in the development environment to allow components to evolve without giving developers sleepless nights. ®
