Grand Unification Theory
Data integration, Einstein style
For those of you not up to speed with your physics, the grand unification theory is the idea that the four forces of nature - the weak and strong nuclear forces, electromagnetism and gravity - can all be explained by a single theory. To date, the set of equations that combines all of these (if it exists) has eluded scientists. However, partial unifications have been achieved: for example, electromagnetism and the weak nuclear force are now explained as different aspects of a single electroweak interaction.
An analogous position exists within the realms of data management.
As regular readers will know, around the turn of the year I was writing about the synthesis of ETL (extract, transform and load), data profiling and data cleansing, with a clear trend established: fewer and fewer companies are left doing any single one of these activities in isolation. The recent acquisition of Avellino by Trillium only confirms the direction of this market.
At the same time, the ETL vendors are re-positioning themselves as data integration providers rather than just as ETL tools. This is, in part, because of the inclusion of additional profiling and cleansing facilities, but it is also a recognition of the fact that there are lots of times when you want to move data that has nothing to do with data warehousing.
More recently, I have been writing about data federation and EII (enterprise information integration), noting that the former (because it allows updates) is a superset of the latter. Further, I have suggested that data federation and EAI (enterprise application integration) will also merge, with data federation again being the subsuming technology (though, no doubt, our collective mania for new buzzwords will mean that we call it something different).
Now, the question arises as to whether it is feasible to consider the merging of data integration and data federation. Let's start by considering the synergies: both technologies need access to diverse data sources, both legacy and modern, relational and non-relational; increasingly, they both need access to real-time data, which is typically achieved by capturing changes from database logs; they both need to be able to perform transformations against the data they are retrieving; and they both need the facilities of data profiling and data cleansing to ensure that the data itself is in a fit state to process.
In other words, data federation and data integration products use the same connectors and the same ancillary tools, and they access the same data sources in the same ways. The real differences between them come down to two things. First, data federation always operates in real time (even if the data itself may not be that up-to-date), whereas that is only sometimes the case for data integration. Second, data federation typically delivers its results to a consuming application, while data integration usually writes to physical targets or, possibly, message queues. On the source side, both retrieve information from data sources, but data integration products tend to have strengths in understanding ERP and similar packaged applications that data federation products lack.
In other words, data integration is potentially a superset of data federation. Moreover, the ERP connectivity built up by the ETL vendors could prove particularly useful if data federation is to provide EAI capability.
You will note that I have left out one major point: ETL tools move the data and data federation products don't. That is, ostensibly, the big difference between them. However, it's not true. There are plenty of ETL vendors (SAS, GoldenGate and Hummingbird, to name just three) that allow you to process data on the source before "extracting" it. If that processing includes the joins you need for your query, how does it differ from data federation?
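To make that pushdown point concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a source system. All table and column names are hypothetical; the point is simply that the join runs on the source, and only the already-joined result crosses the wire - which is exactly what a federated query does.

```python
import sqlite3

# An in-memory SQLite database stands in for the source system.
source = sqlite3.connect(":memory:")
source.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 75.0), (12, 2, 300.0);
""")

# The join and aggregation are processed on the source before "extracting":
# only the final, joined rows are pulled back, as in data federation.
rows = source.execute("""
    SELECT c.name, SUM(o.total) AS total_spend
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()

print(rows)  # [('Acme', 325.0), ('Globex', 300.0)]
```

Whether the engine issuing that SQL calls itself an ETL tool or a federation server is, from the source database's point of view, invisible.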
Grand unification? Yes. It's called data integration.