
DB2: the Viper is coming

More of a King Cobra, really


Comment The next release of IBM's DB2 (for both zSeries and distributed systems), code-named ‘Viper’, will be generally available in the not too distant future: “mid-summer” for distributed systems, according to IBM. It is therefore appropriate to consider some of the new features it will introduce, and its likely impact on the market.

Of course, the biggest feature of Viper is that it includes an XML storage engine as well as a relational one. I have gone into some depth discussing the technology underpinning this on previous occasions and I will not repeat myself.

However, it is worth pointing out that this doesn't just mean you can use either XQuery or SQL to address the database, or that you can combine SQL and XML data within the same query: it also has a direct impact on performance, both in the database itself and in developing applications that use XML storage. For example, early adopters of Viper report query performance gains of 100 times or more, development productivity improvements of between four and 16 times (depending on whether the comparison was with character large objects or shredded data), and the ability to add fields to a schema in minutes rather than days.
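To make that concrete, here is a minimal sketch (table and column names are invented for the example) of the sort of hybrid access Viper allows: plain SQL that filters and extracts from an XML column, and the same data reached directly from XQuery.

    -- Hypothetical table mixing relational and XML columns
    CREATE TABLE customers (
      id     INTEGER NOT NULL PRIMARY KEY,
      region VARCHAR(32),
      info   XML
    );

    -- SQL query that filters on the XML column via XMLEXISTS
    -- and extracts a fragment with XMLQUERY
    SELECT id,
           XMLQUERY('$p/customer/email' PASSING info AS "p")
    FROM   customers
    WHERE  region = 'EMEA'
      AND  XMLEXISTS('$p/customer[status = "gold"]' PASSING info AS "p");

    -- Or start from XQuery and read the column straight from the store
    XQUERY
      for $c in db2-fn:xmlcolumn('CUSTOMERS.INFO')/customer
      where $c/status = "gold"
      return $c/email;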

However, XML support is by no means the only significant feature of Viper. For general-purpose use, perhaps the next most significant capability is the compression that will be provided. Null and default value compression, index compression for multi-dimensional clustering and back-up compression are all available pre-Viper, but Viper adds row compression as well.

Effectively, this works by having multiple algorithms for different datatypes (applied by column) and by looking for patterns that can be tokenised, stored once and accessed via a dictionary. According to IBM this results in typical savings of between 35 and 80 per cent, depending on the data being compressed. In particular, there are special facilities for SAP built into the release, so the savings in SAP environments should be at the higher end of that range.
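By way of illustration, enabling it should look roughly like the following (table name and columns are invented): compression is declared on the table, and an offline reorganisation builds the dictionary from the repeating patterns it finds in the existing rows.

    -- Hypothetical wide table created with row compression enabled
    CREATE TABLE orders (
      order_id INTEGER NOT NULL,
      status   CHAR(10),
      sold_to  VARCHAR(64),
      ship_to  VARCHAR(64)
    ) COMPRESS YES;

    -- An offline reorg builds (or rebuilds) the compression dictionary
    REORG TABLE orders RESETDICTIONARY;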

You might ask what the overhead of using compression is. After all, the act of compressing and de-compressing the data takes time. However, buffer pools are also compressed, which means that more data can be held in memory, so there is less need for I/O. As a result, applications will often actually be speeded up, because the reduction in I/O more than offsets the compression overhead. Neat.

Note, however, that compression only applies to relational data in this release.

The next big deal is the introduction of range partitioning. Now, you wouldn't think that range partitioning was of major significance. Indeed, you might think that IBM was late in delivering it, since many other vendors have had it for years. However, it is not just the range partitioning that is important, nor even that you can use it for sub-partitions along with the existing hash capabilities. No, it is the combination of both of these with multi-dimensional clustering that matters: you can distribute your data using hashing, sub-partition it by range and then organise those sub-partitions by dimension, with related data stored contiguously in order to minimise I/O.
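A sketch of what that combination might look like in DDL, with invented table and column names, follows; the exact partitioning clauses should be checked against the final documentation when Viper ships.

    -- Hypothetical fact table: hashed across database partitions,
    -- range-partitioned by month, and clustered by dimension within
    -- each range so that related rows sit together on disk
    CREATE TABLE sales (
      cust_id   INTEGER NOT NULL,
      sale_date DATE    NOT NULL,
      region    CHAR(8) NOT NULL,
      amount    DECIMAL(12,2)
    )
    DISTRIBUTE BY HASH (cust_id)
    PARTITION BY RANGE (sale_date)
      (STARTING FROM ('2006-01-01') ENDING ('2006-12-31') EVERY 1 MONTH)
    ORGANIZE BY DIMENSIONS (region);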

And talking about distributing data, in this release IBM has extended its data managed storage, though the story, which started with the current release, is not yet complete. Basically, the idea here is that the database will support different storage types (for example, disk drives of different speeds) and you can define policies to assign particular data elements to different storage types. In other words, IBM is building ILM (information lifecycle management) directly into the database. While IBM has not formally said as much, this is clearly the direction in which the company is headed.
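The mechanics that exist today are essentially storage paths plus automatic storage, along the lines of the (invented) example below; the policy layer that would assign data to different tiers is the part of the story still to come.

    -- Database created over several storage paths of assumed speeds;
    -- table spaces managed by automatic storage let DB2 place and
    -- grow the containers itself
    CREATE DATABASE salesdb AUTOMATIC STORAGE YES
      ON /fastdisk/db, /slowdisk/db;

    CREATE TABLESPACE hot_data MANAGED BY AUTOMATIC STORAGE;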

Since we are on the topic of different hardware configurations, another new feature is that the database will automatically recognise the hardware configuration during installation and set defaults accordingly (for self-tuning memory, the configuration advisor and so on). The software will similarly recognise an SAP system and adjust its defaults for it.
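The same advisor can also be invoked after installation; something like the following (the parameter values are purely illustrative) asks it to size the database and instance configuration for a given memory share and workload type.

    -- Configuration Advisor run from the command line: propose and
    -- apply database and instance parameters for this workload
    AUTOCONFIGURE USING
      mem_percent   80
      workload_type mixed
    APPLY DB AND DBM;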

Along with this, as you might expect, there are a number of enhanced and extended autonomic features. One I particularly like is that utilities such as table re-organisation, backup or runstats (all of which can be automated after the input of initial parameters) can be throttled. That is, you can set them to run according to how much priority they have relative to user workloads. Thus you could insist that re-organisation is really important or, at the other end of the scale, state that it must have no impact on live performance, or anywhere in between.
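In command terms it looks something like the sketch below (the database name and utility ID are invented): a backup started at a low impact priority, a running utility throttled back further, and the instance-wide ceiling set as a configuration parameter.

    -- Start a backup in throttled mode with a low impact priority
    BACKUP DATABASE salesdb UTIL_IMPACT_PRIORITY 10;

    -- Find a running utility's ID, then change its priority on the fly
    LIST UTILITIES SHOW DETAIL;
    SET UTIL_IMPACT_PRIORITY FOR 2 TO 5;

    -- A priority of 0 removes throttling; the overall limit on how much
    -- impact throttled utilities may have is util_impact_lim
    UPDATE DBM CFG USING util_impact_lim 10;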

Other features include the removal of previous table space size limits; label-based access control, which allows you to implement hierarchical security at row level; a new Eclipse-based DB2 Developer Workbench (replacing the previous DB2 Development Center) with full XML support; and a Visual XQuery Builder, amongst others.
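For the label-based access control in particular, a minimal sketch (all names invented) runs along these lines: define a label component and a policy, mint a label, and attach labels to rows through a DB2SECURITYLABEL column.

    -- Ordered label component, highest classification first
    CREATE SECURITY LABEL COMPONENT level
      ARRAY ['CONFIDENTIAL', 'INTERNAL', 'PUBLIC'];

    -- Policy built from that component using the standard LBAC rules
    CREATE SECURITY POLICY acct_policy
      COMPONENTS level WITH DB2LBACRULES;

    -- A concrete label within the policy
    CREATE SECURITY LABEL acct_policy.internal COMPONENT level 'INTERNAL';

    -- Protected table: each row carries a label in the tag column
    CREATE TABLE accounts (
      acct_id INTEGER,
      balance DECIMAL(12,2),
      tag     DB2SECURITYLABEL
    ) SECURITY POLICY acct_policy;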

How much impact will Viper have? There are a lot of applications (more than many companies realise) that need to combine XML and SQL data, and IBM is about to have a clear lead in the market in this area. Then add Viper's SAP-specific characteristics: even with the previous release, DB2 was increasing its share of the SAP market, picking up not just new customers but also migrations from other platforms, and this trend is likely to continue. On top of that, compression will reduce the total cost of ownership, as will, in their own ways, the new automated management features and the automatic storage support. Finally, consider the performance benefits of adding range partitioning to multi-dimensional clustering for query environments.

To answer my question: how much impact will Viper have? A lot. Less a viper, more of a King Cobra.

Copyright © 2006, IT-Analysis.com
