DB2: the Viper is coming

More of a King Cobra, really

Comment The next release of IBM's DB2 (for both zSeries and distributed systems), which is code-named ‘Viper’, will be generally available in the not-too-distant future: “mid-summer” for distributed systems, according to IBM. It is therefore appropriate to consider some of the new features it will introduce, and its likely impact on the market.

Of course, the biggest feature of Viper is that it includes an XML storage engine as well as a relational one. I have gone into some depth discussing the technology underpinning this on previous occasions, so I will not repeat myself here.

However, it is worth pointing out that this means more than being able to address the database with either XQuery or SQL, or to combine SQL and XML data within the same query; it also has a direct impact on performance, both in the database itself and in the development of applications that use XML storage. For example, early adopters of Viper report query performance gains of 100 times or more; development-time savings of between four and 16 times (depending on whether the comparison was with character large objects or with shredded data); the ability to add fields to a schema in minutes rather than days; and so on.
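As a rough illustration of what a hybrid query means in practice, the sketch below combines a relational predicate with navigation into an XML document column. Everything here is invented for illustration: sqlite3 stands in for the database, and parsing each document in the application stands in for what DB2's native XML engine does internally (and far faster).

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical "orders" table: a relational price column alongside an
# XML document column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, price REAL, doc TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        (1, 50.0, "<order><city>Berlin</city></order>"),
        (2, 120.0, "<order><city>London</city></order>"),
        (3, 200.0, "<order><city>Berlin</city></order>"),
    ],
)

# SQL answers the relational half of the question ...
rows = conn.execute("SELECT id, doc FROM orders WHERE price > 100").fetchall()

# ... while the XML half is answered by navigating each document.
berlin_ids = [
    oid for oid, doc in rows
    if ET.fromstring(doc).findtext("city") == "Berlin"
]
print(berlin_ids)  # [3]
```

With shredded or CLOB storage, the application-side parsing above is roughly what every query pays; a native XML engine evaluates that navigation inside the database instead.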

However, XML support is by no means the only significant feature of Viper. For general-purpose use, perhaps the next most significant capability is compression. Null and default value compression, index compression for multi-dimensional clustering and backup compression are all available pre-Viper, but Viper adds row compression.

Effectively, this works by applying multiple algorithms to different datatypes (by column) and by looking for patterns that can be tokenised, stored once and accessed via a dictionary. According to IBM this yields typical savings of between 35 and 80 per cent, depending on the data being compressed. In particular, there are special facilities for SAP built into the release, so savings in SAP environments should be at the higher end of that range.
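The tokenise-and-dictionary idea can be sketched in a few lines. The sketch below is purely illustrative and far simpler than DB2's per-datatype implementation: repeated values are collected into a dictionary, stored once, and rows keep only small token references.

```python
from collections import Counter

def build_dictionary(rows, min_count=2):
    """Collect values that repeat often enough to be worth tokenising."""
    counts = Counter(v for row in rows for v in row)
    repeated = [v for v, c in counts.items() if c >= min_count]
    return {v: i for i, v in enumerate(repeated)}

def compress(rows, dictionary):
    """Replace repeated values with ('T', token) references."""
    return [
        tuple(("T", dictionary[v]) if v in dictionary else v for v in row)
        for row in rows
    ]

def decompress(rows, dictionary):
    """Expand token references back to the original values."""
    reverse = {i: v for v, i in dictionary.items()}
    return [
        tuple(reverse[t[1]] if isinstance(t, tuple) and t[0] == "T" else t
              for t in row)
        for row in rows
    ]

rows = [("DE", "Berlin", 10), ("DE", "Berlin", 20), ("UK", "London", 30)]
d = build_dictionary(rows)
packed = compress(rows, d)
assert decompress(packed, d) == rows   # lossless round trip
```

The space win comes from "DE" and "Berlin" being stored once in the dictionary instead of once per row; the unique numeric values are left alone.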

You might ask what the overhead of using compression is. After all, compressing and de-compressing data takes time. However, buffer pools are also compressed, which means that more data can be held in memory, so there is less need for I/O. As a result, applications will often actually speed up, because the reduction in I/O more than offsets the compression overhead. Neat.

Note, however, that compression only applies to relational data in this release.

The next big deal is the introduction of range partitioning. Now, you wouldn't think that range partitioning was of major significance. Indeed, you might think that IBM was late in delivering it, since many other vendors have had it for years. However, it is not just the range partitioning that is important, nor even that you can use it for sub-partitions along with the existing hash capabilities. No, it is the combination of both of these along with multi-dimensional clustering that is important: in other words you can distribute your data using hashing, sub-partition it by range and then organise those sub-partitions by dimension, while contiguously storing relevant data in order to minimise I/O.
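A toy sketch of that three-level layout, with invented names, a simple modulo standing in for the hash function and in-memory lists standing in for contiguous storage:

```python
from collections import defaultdict

N_PARTITIONS = 4   # invented: number of database partitions

def place(cust_id, year, region):
    node = cust_id % N_PARTITIONS   # 1. hash distribution across partitions
    part = f"p{year}"               # 2. range sub-partition (here, by year)
    return node, part, region       # 3. MDC cell keyed by the region dimension

# cell -> rows stored together; a query on (year, region) touches few cells
storage = defaultdict(list)
for cust, year, region in [(1, 2005, "EU"), (5, 2005, "EU"), (3, 2006, "US")]:
    storage[place(cust, year, region)].append(cust)

print(dict(storage))
# {(1, 'p2005', 'EU'): [1, 5], (3, 'p2006', 'US'): [3]}
```

The point of the combination is visible even in the toy: customers 1 and 5 land in the same cell, so a query restricted to 2005 European data reads one contiguous block rather than scanning everything.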

And talking of distributing data, in this release IBM has extended its data managed storage, though the story, which started with the current release, is not yet complete. Basically, the idea here is that the database will support different storage types (for example, disk drives of different speeds) and you can define policies to assign particular data elements to different storage types. In other words, IBM is building ILM (information lifecycle management) directly into the database. While IBM has not formally stated as much, this is clearly the direction in which the company is headed.
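The policy idea reduces to a simple rule table mapping data attributes to storage types. The tier names and age thresholds below are invented for illustration; they are not DB2 terminology.

```python
# Ordered policies: (age limit in days, storage tier). First match wins.
POLICIES = [
    (30, "fast-disk"),    # hot data: younger than 30 days
    (365, "slow-disk"),   # warm data: younger than a year
]
DEFAULT_TIER = "archive"  # everything older

def assign_tier(age_days):
    """Pick the storage tier for a data element based on its age."""
    for limit, tier in POLICIES:
        if age_days < limit:
            return tier
    return DEFAULT_TIER

print(assign_tier(7), assign_tier(90), assign_tier(800))
# fast-disk slow-disk archive
```

ILM built into the database means such rules are evaluated and acted on by the engine itself, rather than by external archiving scripts.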

Since we are on the topic of different hardware configurations, another new feature is that the database will automatically recognise the hardware configuration during installation and set defaults (for example, for self-tuning memory, the configuration advisor and so on) accordingly. The software will similarly recognise an SAP system and set defaults appropriately.

Along with this, as you might expect, there are a number of enhanced and extended autonomic features. One I particularly like is that utilities such as table re-organisation, backup and runstats (all of which can be automated after initial parameters are supplied) can be throttled. That is, you can set them to run according to their priority relative to user workloads. Thus you could insist that a re-organisation is really important or, at the other end of the scale, state that it must have no impact on live performance, or anywhere in between.
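One simple way to picture throttling is as a share of work slices granted to the utility, spread out so user work is never starved in bursts. The scheme below is an invented illustration of the idea, not DB2's actual mechanism.

```python
def schedule(total_slices, priority):
    """Return which work slices (0..total-1) the utility may use.

    priority 1.0 = run flat out; 0.0 = no impact on live performance.
    """
    granted = round(total_slices * priority)
    if granted == 0:
        return []
    step = total_slices / granted
    # Space the granted slices evenly through the schedule.
    return sorted({int(i * step) for i in range(granted)})

print(schedule(10, 1.0))  # every slice: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(schedule(10, 0.3))  # [0, 3, 6]
print(schedule(10, 0.0))  # []
```

At priority 0.3 the utility runs in three of every ten slices, interleaved with user work, rather than grabbing three consecutive slices and stalling everything else.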

Other features include the removal of tablespace limits; label-based access control, which allows you to implement hierarchical security at row level; a new Eclipse-based DB2 Developer Workbench (replacing the previous DB2 Development Center) with full XML support; and a Visual XQuery Builder, amongst others.
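The row-level idea behind label-based access control can be sketched as a dominance check between a user's security label and each row's label. The labels and hierarchy below are invented for illustration.

```python
# Invented hierarchical security levels, lowest to highest.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def readable(user_label, rows):
    """Return the rows whose label the user's clearance dominates."""
    clearance = LEVELS[user_label]
    return [data for data, label in rows if LEVELS[label] <= clearance]

rows = [("r1", "public"), ("r2", "secret"), ("r3", "confidential")]
print(readable("confidential", rows))  # ['r1', 'r3']
print(readable("public", rows))        # ['r1']
```

The significant part is that the check happens per row inside the database, so two users issuing the same SELECT simply see different result sets.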

How much impact will Viper have? There are a lot of applications (more than many companies realise) that need to combine XML and SQL data, and IBM is about to have a clear lead in the market in these areas. Then add Viper's SAP-specific characteristics: even with the previous release, DB2 was increasing its share of the SAP market and it has picked up not just new customers but those migrating from other platforms—this trend is likely to continue. On top of that, compression will reduce the total cost of ownership as will, in their own ways, the new automated management features and the automatic storage support. Finally, consider the performance benefits of adding range partitioning to multi-dimensional clustering for query environments.

To answer my question of how much impact Viper will have: a lot. Less a viper, more of a King Cobra.

Copyright © 2006, IT-Analysis.com
