Microsoft vs. Teradata

Data Warehousing – there really isn't just one answer

Microsoft’s approach

The problem that Microsoft elected to solve was that of producing a multi-dimensional database engine that was fast and that also cured the OLAP data explosion problem. This is another non-trivial problem, but solving it, and using the resulting technology in the data marts, automatically solves Points 4 & 5 in our wish list. The data can be pre-aggregated, and that gives the blistering speed that’s required. In addition, multi-dimensional data means that users automatically get a hierarchical, dimensional and measured view of the data.
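As a purely illustrative sketch (the fact table, dimension names and figures below are invented, and this is emphatically not Microsoft's engine), the following Python shows both halves of that argument: once a cube has been pre-aggregated, answering a query is a simple lookup rather than a scan, but cells are materialised for every combination of dimension members, which is exactly where the OLAP data explosion comes from.

    from collections import defaultdict
    from itertools import product

    # Hypothetical fact table: (product, region, month, sales)
    facts = [
        ("bike",   "emea", "2004-01", 120.0),
        ("bike",   "apac", "2004-01",  80.0),
        ("helmet", "emea", "2004-02",  15.0),
        ("bike",   "emea", "2004-02", 200.0),
    ]

    # The 'relational' route: scan every fact row at query time.
    def scan_total(product_name, region):
        return sum(s for p, r, _, s in facts if p == product_name and r == region)

    # The 'multi-dimensional' route: aggregate once, up front.
    cube = defaultdict(float)
    for p, r, m, s in facts:
        # A cell is stored for every combination of member and 'ALL' level, so
        # the number of cells multiplies across dimensions even though the raw
        # data has not grown: the data explosion in miniature.
        for cell in product([p, "ALL"], [r, "ALL"], [m, "ALL"]):
            cube[cell] += s

    def cube_total(product_name, region):
        return cube[(product_name, region, "ALL")]   # a single lookup

    assert scan_total("bike", "emea") == cube_total("bike", "emea") == 320.0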

On the other hand, Microsoft’s approach means that you essentially accept that load times will be slower and auditing more of a challenge because of the proliferation of extra copies of data in the data marts. You also accept that the process will burn up more disk space.

However, supporters of this approach argue that the first three wish list points are not, in practice, much of an issue: disk space and CPU cycles are cheap, auditing can be automated, and Microsoft is developing techniques such as proactive caching that essentially compensate for the delays in organising the data, bringing real-time analysis ever closer.
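To illustrate the kind of compromise proactive caching represents, here is a toy sketch of the general idea only; the class name, the latency figure and the data are all invented, and this is not a description of how Analysis Services actually implements the feature.

    import time

    class ProactiveCache:
        """Serve queries from a pre-built aggregate while it is acceptably
        fresh; rebuild it when the source data changes; tolerate a bounded
        amount of staleness in exchange for speed."""

        def __init__(self, source_rows, max_latency_seconds=30):
            self.source_rows = source_rows
            self.max_latency = max_latency_seconds
            self.stale = False
            self.rebuild()

        def rebuild(self):
            # Re-aggregate from the source; a real engine would do this in the
            # background rather than on the querying thread.
            self.total = sum(amount for _, amount in self.source_rows)
            self.built_at = time.time()

        def notify_change(self, new_row):
            # The source data has changed: record it and mark the cache stale.
            self.source_rows.append(new_row)
            self.stale = True

        def query_total(self):
            age = time.time() - self.built_at
            if self.stale and age > self.max_latency:
                # The aggregate is older than we are prepared to tolerate, so
                # rebuild before answering (the slower path).
                self.rebuild()
                self.stale = False
            return self.total

    cache = ProactiveCache([("bike", 120.0), ("helmet", 15.0)])
    print(cache.query_total())           # 135.0, answered from the aggregate
    cache.notify_change(("glove", 10.0))
    print(cache.query_total())           # still 135.0 until the latency bound expires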

So, which is better?

One point is reasonably clear. If you have a need for a BI system that holds an awesomely large set of data, you will certainly be talking to Teradata. The company can field an impressive list of customers in the ‘monstrously, overwhelmingly, huge’ category. So, if we are simply going to rate the two strategies on ‘My BI system can be bigger than yours’ then Teradata wins.

But such a rating is nonsense for most enterprises. By definition, the average enterprise has an average BI requirement, and both Microsoft and Teradata can provide a solution here. (Actually, assuming the skewed distribution that probably exists, we could even say that the modal company has a below-average requirement, but let’s not get picky.) So both of these BI vendors have an appropriate technical solution for most companies and, in practice, there seems genuinely to be very little overlap. Hermann Wimmer (Teradata’s Vice President of EMEA) told me that Teradata tends to focus only on the largest companies. Microsoft’s mantra has, for years, been “BI for the masses”.

In terms of the technologies, it is tempting to extrapolate that Microsoft couldn’t solve the problem of analytical access to relational data and therefore chose to ‘work around’ it. This is doubtless an oversimplification because, whilst it is true that this particular problem is known to be difficult to solve, it was also known to be soluble by the time Microsoft took a serious commercial interest in BI (Teradata had already solved it). So, given its huge resources, Microsoft could have cracked the problem. In the same way, I have no doubt that Teradata could ‘do’ a multi-dimensional database engine if it elected to address the problem.

In addition, Teradata’s systems have always been ‘reassuringly expensive’. So Microsoft may well have rejected the highly specialised solution (that works for all conceivable sizes of data) and elected to pursue a line that offers a much more cost-effective solution for the majority of potential customers.

The bottom line is that while Teradata’s solution fits all sizes, and may sometimes be the only feasible one, Microsoft’s is likely to be much more cost-effective for the majority.

PS

I am quite well aware that the relational model is a logical model and that it is therefore nonsense to imply that relational structures are inherently slow for the simple reason that the model says nothing about implementation on disk. The reason for the poor analytical performance of relational systems lies in the way that most RDBMS engine designers have elected to store their data structures on disk; it doesn’t lie with the relational model itself. Nevertheless, it remains true that on comparable hardware, analytical access to multi-dimensional data is usually orders of magnitude faster than the same access to data stored in the current crop of mainstream relational engines.
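A toy example of that distinction (the layouts and figures here are mine, and a column store is only one of several possible physical designs): the same logical relation can be stored row-wise or column-wise, and it is the physical layout, not the relational model, that determines how much data an aggregate query has to touch.

    # One logical relation: (product, region, sales)
    rows = [("bike", "emea", 120.0), ("bike", "apac", 80.0), ("helmet", "emea", 15.0)]

    # Row-wise physical layout: whole tuples are stored together, so summing
    # one attribute still drags every field of every row past the CPU.
    def sum_sales_rowstore(rows):
        total = 0.0
        for product, region, sales in rows:
            total += sales
        return total

    # Column-wise physical layout: each attribute is stored contiguously, so
    # the same query reads only the column it needs.
    columns = {
        "product": [r[0] for r in rows],
        "region":  [r[1] for r in rows],
        "sales":   [r[2] for r in rows],
    }

    def sum_sales_columnstore(cols):
        return sum(cols["sales"])

    # Same relational answer either way; only the physical access cost differs.
    assert sum_sales_rowstore(rows) == sum_sales_columnstore(columns) == 215.0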
