DB2: the Viper is coming

More of a King Cobra, really

Comment The next release of IBM's DB2 (for both zSeries and distributed systems), which is code-named ‘Viper’, will be generally available in the not too distant future: “mid-summer” for distributed systems, according to IBM. It is therefore worth considering some of the new features it will introduce, and its likely impact on the market.

Of course, the biggest feature of Viper is that it includes an XML storage engine as well as a relational one. I have gone into some depth discussing the technology underpinning this on previous occasions and I will not repeat myself.

However, it is worth pointing out that this means more than being able to address the database with either XQuery or SQL, and more than being able to combine SQL and XML data within the same query: it also has a direct impact on performance, both in the database itself and in the development of applications that use XML storage. For example, performance comparisons by early adopters of Viper indicate query speed-ups of 100 times or more, development time cut by between four and 16 times (depending on whether the comparison was with character large objects or with shredded data), the ability to add fields to a schema in a matter of minutes as opposed to days, and so on.
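To make that concrete, here is a minimal sketch of the sort of hybrid access Viper allows; the table, column and element names are my own invention rather than anything from IBM's documentation:

    -- Illustrative DB2 9 (Viper) pureXML sketch; all names are invented.
    -- An XML column sits natively alongside relational columns.
    CREATE TABLE customers (
        id      INTEGER NOT NULL PRIMARY KEY,
        region  VARCHAR(20),
        info    XML            -- stored as XML, not shredded or held in a CLOB
    );

    -- One query mixing a SQL predicate (on region) with XQuery predicates
    -- (on the XML document), returning values from both worlds.
    SELECT c.id,
           XMLQUERY('$d/customer/name' PASSING c.info AS "d") AS name
    FROM   customers c
    WHERE  c.region = 'EMEA'
    AND    XMLEXISTS('$d/customer[status = "gold"]' PASSING c.info AS "d");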

However, XML support is by no means the only significant feature of Viper. For general-purpose use, perhaps the next most significant capability is compression. Null and default value compression, index compression for multi-dimensional clustering and back-up compression are all available pre-Viper, but Viper adds row compression.

Effectively, row compression works by applying multiple algorithms suited to different datatypes (by column) and by looking for repeating patterns that can be tokenised, stored once in a dictionary and referenced from there. According to IBM this results in typical savings of between 35 and 80 per cent, depending on the data being compressed. In particular, there are special facilities for SAP built into the release, so savings in SAP environments should be at the higher end of that range.
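As a rough sketch of how this surfaces to the administrator (the table name is invented), row compression is a table-level attribute, with the dictionary built when the table is reorganised:

    -- Illustrative only: switch on row compression for an existing table.
    ALTER TABLE sales COMPRESS YES;

    -- The compression dictionary is built during reorganisation;
    -- existing rows are compressed as part of the same pass.
    REORG TABLE sales;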

You might ask: what is the overhead of using compression? After all, the act of compressing and de-compressing data takes time. However, buffer pools are also compressed, which means that more data can be held in memory, so there is less need for I/O. As a result, applications will often actually be speeded up, because the reduction in I/O more than offsets the compression overhead. Neat.

Note, however, that compression only applies to relational data in this release.

The next big deal is the introduction of range partitioning. Now, you wouldn't think that range partitioning was of major significance; indeed, you might think that IBM was late in delivering it, since many other vendors have had it for years. However, what matters is not the range partitioning alone, nor even that you can use it for sub-partitions alongside the existing hash capabilities. It is the combination of both of these with multi-dimensional clustering that is important: you can distribute your data using hashing, sub-partition it by range and then organise those sub-partitions by dimension, storing related data contiguously in order to minimise I/O.
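A hedged sketch of what that three-level organisation looks like in DDL (the table, columns and ranges are invented for illustration):

    -- Illustrative DDL combining all three levels of data organisation.
    CREATE TABLE sales (
        sale_date  DATE          NOT NULL,
        cust_id    INTEGER       NOT NULL,
        region     CHAR(2)       NOT NULL,
        product    INTEGER       NOT NULL,
        amount     DECIMAL(10,2)
    )
    DISTRIBUTE BY HASH (cust_id)              -- spread rows across database partitions
    PARTITION BY RANGE (sale_date)            -- new in Viper: range (sub-)partitioning
        (STARTING FROM ('2006-01-01') ENDING ('2006-12-31') EVERY (1 MONTH))
    ORGANIZE BY DIMENSIONS (region, product); -- multi-dimensional clustering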

And talking of distributing data, in this release IBM has extended its automatic storage management, though the story, which started with the current release, is not yet complete. The idea is that the database will support different storage types (for example, disk drives of different speeds) and you will be able to define policies that assign particular data elements to particular storage types. In other words, IBM is building ILM (information lifecycle management) directly into the database. While it has not formally said as much, this is clearly the direction in which the company is headed.
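The piece that is already visible is automatic storage, whereby the database draws on a pool of storage paths rather than explicitly managed containers. A sketch with invented paths (the policy layer that would route data between tiers is the part still to come):

    -- Illustrative: a database created over two storage paths
    -- (say, fast and slow disk).
    CREATE DATABASE mydb AUTOMATIC STORAGE YES ON /fastdisk, /slowdisk;

    -- Table spaces can then be declared without explicit containers;
    -- DB2 allocates and grows them across the storage paths itself.
    CREATE TABLESPACE hot_data MANAGED BY AUTOMATIC STORAGE;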

Since we are on the topic of hardware configurations, another new feature is that the database will automatically recognise the hardware configuration during installation and set defaults (for example, for self-tuning memory, the configuration advisor and so on) accordingly. The software will similarly recognise an SAP system and set defaults accordingly.
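For an existing database the same machinery can be driven explicitly through the Configuration Advisor; a hedged sketch (the values are placeholders, not recommendations):

    -- Illustrative CLP commands, run while connected to a database.
    -- Ask DB2 to recommend and apply settings based on the detected
    -- hardware and a description of the expected workload.
    AUTOCONFIGURE USING mem_percent 80 workload_type mixed APPLY DB AND DBM;

    -- Self-tuning memory can also be switched on directly.
    UPDATE DB CFG FOR mydb USING SELF_TUNING_MEM ON;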

Along with this, as you might expect, there are a number of enhanced and extended autonomic features. One I particularly like is that utilities such as table re-organisation, backup and runstats (all of which can be automated after the input of initial parameters) can be throttled: you set how much priority they have relative to the live user workload. Thus you could insist that a re-organisation is really important or, at the other end of the scale, that it must have no impact on live performance, or anywhere in-between.
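A sketch of how that throttling is expressed (the priority values here are invented): a utility takes an impact priority between 1 and 100, and a running utility can be re-throttled on the fly:

    -- Illustrative: start a backup that yields heavily to the live workload.
    BACKUP DATABASE mydb UTIL_IMPACT_PRIORITY 10;

    -- List running utilities, then change the priority of utility ID 2 mid-flight.
    LIST UTILITIES SHOW DETAIL;
    SET UTIL_IMPACT_PRIORITY FOR 2 TO 75;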

Other features include the removal of previous table space size limits; label-based access control, which allows you to implement hierarchical security at row level; a new Eclipse-based DB2 Developer Workbench (replacing the previous DB2 Development Center) with full XML support; and a Visual XQuery Builder, amongst others.
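To give a flavour of the label-based access control (all of the object names below are invented), the new DDL looks something like this:

    -- Illustrative LBAC sketch: an ordered security component, a policy
    -- that uses it, a label, and a table protected at row level.
    CREATE SECURITY LABEL COMPONENT level ARRAY ['CONFIDENTIAL', 'INTERNAL', 'PUBLIC'];
    CREATE SECURITY POLICY doc_policy COMPONENTS level WITH DB2LBACRULES;
    CREATE SECURITY LABEL doc_policy.internal COMPONENT level 'INTERNAL';

    CREATE TABLE documents (
        id    INTEGER NOT NULL,
        body  CLOB,
        tag   DB2SECURITYLABEL         -- each row carries its own label
    ) SECURITY POLICY doc_policy;

    -- A user sees only rows whose label is dominated by the label granted here.
    GRANT SECURITY LABEL doc_policy.internal TO USER alice FOR ALL ACCESS;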

How much impact will Viper have? There are a lot of applications (more than many companies realise) that need to combine XML and SQL data, and IBM is about to have a clear lead in the market in this area. Then add Viper's SAP-specific characteristics: even with the previous release, DB2 was increasing its share of the SAP market and it has picked up not just new customers but those migrating from other platforms; this trend is likely to continue. On top of that, compression will reduce the total cost of ownership as will, in their own ways, the new automated management features and the automatic storage support. Finally, consider the performance benefits of adding range partitioning to multi-dimensional clustering for query environments.

To answer my question: how much impact will Viper have? A lot: less a viper, more of a King Cobra.

Copyright © 2006, IT-Analysis.com
