Microsoft vs. Teradata
Data Warehousing – there really isn't just one answer
Microsoft and Teradata are both significant players in the BI market, but they have wildly different approaches to the challenge of extracting information from data. The reason is that the two companies elected to solve two very different, but equally intractable, computational problems in order to make their BI systems perform well.
Two different approaches
Business Intelligence is a complex area and generalisations are notoriously imprecise, but without them the discussion would run to book length, so let’s generalise a little.
BI is about extracting information from data. The data in most enterprises is distributed across multiple transactional systems (Finance, Sales, HR etc.) so we have to pull it into one place before we can analyse it.
A wish list for an efficient BI system looks something like this:
- Rapid movement of data from source systems to analytical system
- Easy auditing of data
- Minimum number of copies of the data
- Rapid analytical queries (2-3 seconds)
- Users presented with an ‘analytical view’ of data
As a general rule, the more copies of the data there are, the harder it is to audit, so points 2 and 3 are somewhat linked. Despite its position at number 4, rapid analytical querying is very important: a query may present information aggregated from five billion rows in a source system, yet it must return its answer in two to three seconds. Point 5 covers the requirements of the end users: they certainly don’t want to see tables in a relational database; they want to work with dimensions and measures, or some near equivalent.
Given this common wish list, how did Microsoft and Teradata end up with such different strategies?
In Teradata’s world (shown on the left of Figure 1, above), the extracted and cleaned data is placed in a central store, known as an Enterprise Data Store or (these days) Enterprise Data Warehouse (EDW). There it is held as a relational structure and all the analytical queries are run directly against the data in the EDW.
In the Microsoft world (the right-hand side of Figure 1), data is placed in a central store or data warehouse which is also typically structured as relational tables. However, subsets of the data are then moved from the warehouse into data marts and restructured as multi-dimensional data, and it is against these data marts that queries are run.
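As a rough sketch of what that restructuring means in practice (all table, column, and value names here are invented for illustration, not taken from any real warehouse), relational fact rows can be pre-aggregated along chosen dimensions so that analytical queries hit a small summary rather than the raw table:

```python
from collections import defaultdict

# Relational-style rows as they might sit in the warehouse
# (invented sample data for illustration only).
sales = [
    {"region": "EMEA", "year": 2006, "amount": 120.0},
    {"region": "EMEA", "year": 2007, "amount": 150.0},
    {"region": "APAC", "year": 2006, "amount": 80.0},
    {"region": "APAC", "year": 2007, "amount": 95.0},
]

def build_mart(rows, dimensions, measure):
    """Pre-aggregate rows by the chosen dimensions, as a data mart would."""
    mart = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in dimensions)
        mart[key] += row[measure]
    return dict(mart)

# Queries now read the small aggregate, not the raw fact rows.
mart = build_mart(sales, ["region", "year"], "amount")
print(mart[("EMEA", 2007)])  # 150.0
```

The trade-off the article describes follows directly: the mart answers its queries quickly, but it is another copy of the data that must be kept in step with the warehouse.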
These two approaches are radically different because the two companies have chosen to solve the overall problem of BI by solving two different computational problems – both of which have been serious thorns in the side of commercial computing since the mid-1980s.
The age-old problem Teradata addressed is simple to express – it is very difficult to run fast analytical queries against a relational structure.
Teradata solved this problem using a mix of parallel hardware and innovative software, not only solving the problem for small data sets but providing a solution that scales to truly massive data sets.
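A toy illustration of the shared-nothing idea behind that scaling – not Teradata's actual implementation, and with invented sample data – is to hash-distribute rows across parallel units, let each unit aggregate its own slice, and then merge the partial results:

```python
from collections import defaultdict

# Hypothetical (region, amount) rows; in a real system, billions of them.
rows = [("EMEA", 10), ("APAC", 5), ("EMEA", 7), ("AMER", 3), ("APAC", 4)]

N_UNITS = 4  # number of parallel units (Teradata calls these AMPs)

# Step 1: hash-distribute rows so each unit owns a disjoint slice.
slices = [[] for _ in range(N_UNITS)]
for region, amount in rows:
    slices[hash(region) % N_UNITS].append((region, amount))

# Step 2: each unit aggregates its own slice independently
# (in parallel on real hardware; sequentially here for clarity).
partials = []
for s in slices:
    agg = defaultdict(int)
    for region, amount in s:
        agg[region] += amount
    partials.append(agg)

# Step 3: merge the small partial aggregates into the final answer.
total = defaultdict(int)
for agg in partials:
    for region, amount in agg.items():
        total[region] += amount
```

Because each unit scans only its own slice, adding units lets the same query run over proportionally more data in roughly the same time – which is the essence of the scalability claim.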
Once this problem is solved, a side effect is that the BI structure can stay very simple. In turn, that means most of the wish list is automatically satisfied; indeed, points 1 to 4 are natural side effects of the solution.
The data only moves once, so the delays are minimised. Only two copies of the data are held, one in the originating source systems and one in the EDW, so auditing is about as easy as it is going to get.
And the final wish list point? In order to hide the complexity of the relational store, Teradata has placed a logical layer between the user and the EDW or EDS data structure (see Figure 2, below). This translates the relational views of the data into analytical views so the users never have to see the relational structure.
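The idea of such a logical layer can be sketched as a thin mapping from analytical names (dimensions and measures) to the underlying relational columns. Everything below – the model, the table, and the column names – is invented for illustration; it is not Teradata's actual layer:

```python
# A minimal semantic layer: users ask for dimensions and measures;
# the layer translates the request into SQL against the relational EDW.
# All table, column, and measure names are hypothetical.

SEMANTIC_MODEL = {
    "dimensions": {"Region": "sales_fact.region_code",
                   "Year": "sales_fact.fiscal_year"},
    "measures":   {"Revenue": "SUM(sales_fact.amount)"},
}

def analytical_query(dimensions, measures):
    """Build SQL from an 'analytical view' request."""
    dim_cols = [SEMANTIC_MODEL["dimensions"][d] for d in dimensions]
    msr_cols = [SEMANTIC_MODEL["measures"][m] for m in measures]
    return ("SELECT " + ", ".join(dim_cols + msr_cols)
            + " FROM sales_fact GROUP BY " + ", ".join(dim_cols))

print(analytical_query(["Region"], ["Revenue"]))
# SELECT sales_fact.region_code, SUM(sales_fact.amount)
#   FROM sales_fact GROUP BY sales_fact.region_code
```

The user only ever sees "Region" and "Revenue"; the relational plumbing stays behind the layer, which is exactly the point of the final wish-list item.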