
Aggregates: the not-so-forgotten DBA issue

Reducing aggregates can relieve the headache

Aggregates are probably the second biggest headache for database administrators in data warehouses after indexes and the tuning thereof.

However, while there has been lots of discussion about indexes, there has been very little about aggregates. For example, the data warehouse appliance vendors remove indexes more or less completely, while the traditional suppliers have incrementally added features to help the DBA with indexes and tuning, so that advisors will now suggest new indexes where appropriate and then build them automatically.

Of course, the removal of indexes makes the database's storage requirements significantly smaller, which is a major additional benefit. Nevertheless, apart from materialised views (which are now the best part of a decade old), not much has been done to help DBAs with aggregates.

Perhaps I had better explain what aggregates are. Put simply, they are what it says on the tin: a typical aggregate is the total of all sales in a particular store over a particular period. In other words, it is exactly what you might store in an OLAP cube. However, for a variety of reasons (which we need not go into now), it is often advantageous to store these aggregates directly within the data warehouse, without using OLAP technology such as Analysis Services or Essbase.
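To make that concrete, here is a minimal sketch (in Python, using pandas) of the kind of summary involved: total sales per store per month, pre-computed from a detail-level fact table. The table and column names (store_id, sale_date, amount) are invented for illustration and are not taken from any particular product or schema.

```python
# Minimal sketch: pre-computing a sales aggregate from a detail table.
# Column names are illustrative assumptions, not a real warehouse schema.
import pandas as pd

# A tiny stand-in for a sales fact table.
sales = pd.DataFrame({
    "store_id":  [1, 1, 2, 2, 2],
    "sale_date": pd.to_datetime(
        ["2006-01-03", "2006-01-17", "2006-01-05", "2006-02-02", "2006-02-09"]),
    "amount":    [120.0, 80.0, 200.0, 150.0, 60.0],
})

# The aggregate: total sales per store per month, exactly the kind of
# pre-computed summary that might otherwise live in an OLAP cube.
monthly_totals = (
    sales
    .assign(month=sales["sale_date"].dt.to_period("M"))
    .groupby(["store_id", "month"], as_index=False)["amount"]
    .sum()
)
print(monthly_totals)
```

Storing a table like monthly_totals in the warehouse means a query for monthly store sales reads a handful of rows instead of scanning the full detail table.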

What this means for the DBA is that he or she has to define and maintain all the hierarchies and dimensions along which aggregates have to be calculated. This is not only complex to establish in the first place, it is also a major ongoing headache. For example, every time a new product is launched or a new store is opened, or there is a company re-organisation, all the relevant aggregates have to be re-defined and amended.
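Here is a small, purely illustrative sketch of that maintenance burden: a store-to-region hierarchy used to roll sales up to regional totals, which has to be re-applied whenever a store opens or moves region. The mapping and figures are made up for the example.

```python
# Illustrative sketch of the maintenance problem: a store-to-region
# hierarchy used to roll store sales up to regional aggregates.
store_region = {1: "North", 2: "North", 3: "South"}
store_sales  = {1: 200.0, 2: 410.0, 3: 330.0}

def regional_totals(hierarchy, sales):
    """Roll store-level sales up to region-level aggregates."""
    totals = {}
    for store, amount in sales.items():
        region = hierarchy[store]
        totals[region] = totals.get(region, 0.0) + amount
    return totals

print(regional_totals(store_region, store_sales))  # {'North': 610.0, 'South': 330.0}

# A re-organisation moves store 2 to the South region and a new store 4
# opens: every regional aggregate touching those stores is now stale and
# has to be re-computed from the detail data.
store_region.update({2: "South", 4: "South"})
store_sales[4] = 90.0
print(regional_totals(store_region, store_sales))  # {'North': 200.0, 'South': 830.0}
```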

What got me thinking about this was Netezza's recent user conference. Now, I was already aware that a number of companies have implemented a data warehouse appliance purely for the purposes of calculating aggregates as a front-end to a Teradata warehouse, but it came as more of a surprise to hear that a number of companies, of which Catalina Marketing and Carphone Warehouse are examples, have stopped using aggregates altogether as a result of implementing a Netezza solution.

Why? Simply because Netezza performed so well on the relevant queries that the companies no longer felt it necessary to pre-calculate the aggregates.

It turned out that there was another advantage as well. More than one company at the conference reported a longer-than-expected testing cycle when it first implemented Netezza. Why? Because the results it was getting differed from those it had previously obtained. I spoke to one of these companies about the reasons. After investigation, it turned out that the aggregates it had previously used were sometimes incorrect.

And if you think about this, it is hardly surprising: with a highly complex set of dimensions and hierarchies, not to mention all that maintenance, it is not unlikely that errors will creep in at some point. Interestingly, the company also remarked that although the database it was previously using was supposed to be aggregate-aware, in practice it found that the optimiser did not always use the aggregates that were in place.

To conclude: data warehouse appliances in general, and Netezza in particular, allow you to reduce or eliminate the use of aggregates if you want to. I would not go so far as to recommend their elimination (it will depend on circumstances), but the ability to minimise them should be a boon for DBAs.

Copyright © 2006, IT-Analysis.com
