
Next generation BI - Part two

Technical issues

Comment In the previous article in this series, I outlined some of the major features that you would want to see in a next generation BI solution.

Most of this is not complex: new visualisation capabilities, integration with search and so on are not, at least in principle, difficult, though vendors of sophisticated visualisation capabilities may need to offer fallback functionality for users who do not have high-powered graphics cards.

However, there is a major issue in being able to support any ad hoc query against any data at any time (and in context).

There are two reasons for this. The first is that conventional OLAP and ROLAP implementations do not understand the relationships that exist between the data. Or rather, they do, but only implicitly.

What I mean is that in a relational database, for example, relationships are defined through primary and foreign keys in the database schema. In other words, relationship data is built into the structure. The same is true of OLAP cubes, in which dimensions and hierarchies are fundamental to the definition of those cubes. So, relationships are hard-wired by the developer: if relationships have not been defined (and many are not) then you cannot enquire against them, and this is precisely the limiting factor in today's generation of BI products.
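To make the point concrete, here is a minimal sketch (the tables and column names are invented purely for illustration) using Python's built-in sqlite3 module; the sale-to-product join works only because the developer declared that link in the schema:

import sqlite3

# The relationship between sale and product exists only because it was
# declared in the schema; a question that needs a relationship nobody
# declared has nothing to join on.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY,
        name       TEXT
    );
    CREATE TABLE sale (
        sale_id    INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES product(product_id),  -- hard-wired link
        store      TEXT,
        amount     REAL
    );
""")

# This query works only because the relationship was built into the structure.
rows = conn.execute("""
    SELECT p.name, SUM(s.amount)
    FROM sale s JOIN product p ON p.product_id = s.product_id
    GROUP BY p.name
""").fetchall()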

To resolve this issue, relationship information needs to be abstracted from the data to which it relates and stored separately (though it may be, perhaps partly, replicated in the database structure) so that relationships can be determined dynamically. In practice, this means you must be able to discover relationships as the data is loaded into the database or data warehouse or, where the data has already been loaded, after the event.
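As a sketch of how such discovery might work (the function and data below are hypothetical, not a description of any particular product), one crude approach is to look for columns whose values are largely contained in another table's key column, and to record the match in a catalogue held apart from the data itself:

# Hypothetical sketch: infer candidate relationships after the event by
# checking how much of one column's values fall within another table's
# column, and keep the result in a separate relationship catalogue.
def discover_relationships(tables, threshold=0.95):
    """tables: dict of table name -> dict of column name -> list of values."""
    catalogue = []  # relationship metadata held apart from the schema
    for t1, cols1 in tables.items():
        for c1, vals1 in cols1.items():
            for t2, cols2 in tables.items():
                if t1 == t2:
                    continue
                for c2, vals2 in cols2.items():
                    keys = set(vals2)
                    if not vals1 or not keys:
                        continue
                    overlap = sum(v in keys for v in vals1) / len(vals1)
                    if overlap >= threshold:
                        # both directions of a match will be reported
                        catalogue.append((t1, c1, t2, c2, overlap))
    return catalogue

# The link between sale.product_id and product.product_id is found from the
# data itself, not from a declared foreign key.
tables = {
    "sale":    {"product_id": [1, 2, 2, 3], "amount": [9.0, 5.0, 5.0, 7.0]},
    "product": {"product_id": [1, 2, 3], "name": ["ale", "stout", "porter"]},
}
print(discover_relationships(tables))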

There is another step that is required. Determining that there is a relationship between two pieces of data that you can exploit is one thing - actually getting that data may be another. The reason for this is that conventional BI tools work against aggregated data. That is, sales are summarised by product by store, say, and it is this information that is stored in your OLAP cubes.

However, if you want to report on something the developer was not expecting, the relevant aggregates may not be available. Therefore, any next generation solution must have one of two features: either it must store all possible aggregates, which in turn implies that all data must be stored within a single cube as opposed to the multiple cubes that are typically used today; or it must be able to calculate new aggregates on the fly which, in order to ensure decent performance, means in memory.
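To illustrate the second option (the figures are made up, and this is only a toy roll-up, not a real engine), the detail rows are held in memory and whatever grouping the user happens to ask for is computed when the question arrives:

from collections import defaultdict

# Hypothetical sketch: the cube only holds sales by product by store, but the
# user asks for sales by store by weekday. With the detail held in memory,
# the missing aggregate can be rolled up on the fly rather than precomputed.
detail = [
    # (store, product, weekday, amount)
    ("Leeds", "ale",    "Mon", 90.0),
    ("Leeds", "stout",  "Tue", 40.0),
    ("York",  "ale",    "Mon", 55.0),
    ("York",  "porter", "Tue", 25.0),
]

def aggregate(rows, keys, measure):
    """Roll up 'measure' over arbitrary grouping keys chosen at query time."""
    index = {"store": 0, "product": 1, "weekday": 2, "amount": 3}
    totals = defaultdict(float)
    for row in rows:
        group = tuple(row[index[k]] for k in keys)
        totals[group] += row[index[measure]]
    return dict(totals)

# An aggregate nobody anticipated when the cube was designed:
print(aggregate(detail, keys=("store", "weekday"), measure="amount"))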

In practice, since you may wish to compare data that comes from different data sources (which implies a requirement for enterprise information integration), you will always need the ability to calculate some aggregates on the fly and, therefore, in memory.
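A trivial illustration (again with invented numbers): actuals from the warehouse and targets from, say, a planning spreadsheet only meet in memory, so the comparison has to be calculated there:

# Hypothetical sketch: figures pulled from two different sources are combined
# and compared in memory, since no single system holds the joined aggregate.
actuals = {"Leeds": 130.0, "York": 80.0}   # from source A (warehouse)
targets = {"Leeds": 120.0, "York": 100.0}  # from source B (planning sheet)

variance = {
    store: actuals.get(store, 0.0) - targets.get(store, 0.0)
    for store in set(actuals) | set(targets)
}
print(variance)  # e.g. {'Leeds': 10.0, 'York': -20.0}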

On a slightly different topic: we are increasingly seeing the need for real-time operational BI. Currently, this is often deployed using a separate platform from the traditional query environment. This is because traditional environments have static data and dynamic queries whereas real-time BI has static queries and dynamic data (events). Needless to say, the underlying platform should be invisible to the user, who should be able to use the same set of query capabilities regardless of which platform is involved. Indeed, you should be able to combine information from the two environments.
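The inversion is easy to caricature in code (a hypothetical sketch, not any vendor's API): in the real-time case the query is fixed in advance and every arriving event is evaluated against it, rather than an ad hoc query being run against stored data:

# Hypothetical sketch of a standing (static) query over dynamic data:
# the question is fixed up front and each event updates its answer.
class StandingQuery:
    """A fixed query ('total sales per store so far') over a live event stream."""
    def __init__(self):
        self.totals = {}

    def on_event(self, event):
        store, amount = event["store"], event["amount"]
        self.totals[store] = self.totals.get(store, 0.0) + amount
        return dict(self.totals)  # continuously updated result

query = StandingQuery()
for event in ({"store": "Leeds", "amount": 30.0},
              {"store": "York",  "amount": 20.0},
              {"store": "Leeds", "amount": 15.0}):
    latest = query.on_event(event)
print(latest)  # {'Leeds': 45.0, 'York': 20.0}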

Finally, there is the question of migration from, or reuse of, existing environments. As we shall see in the last article in this series, some of the technology needed to support this is precisely the same software needed to extract relationship information from OLAP cubes and relational databases.

Copyright © 2006, IT-Analysis.com
