Mixed messages for master data management

IBM conference breakdown

Opinion IBM's first European user conference on MDM (master data management) in Barcelona was certainly interesting, but I felt that the results were mixed.

For example, some users clearly felt the conference was useful, while others were disappointed. One investment bank I talked to felt the content was insufficiently technical and, in particular, that it did not address either scalability issues or the complex, heterogeneous environments which most companies have to manage. Still, I suppose you can't please everybody all of the time.

Another criticism I heard from several attendees was that the Gartner analyst who was supposed to set a framework for the conference was speaking at the end of the first day rather than the beginning.

I had rather more serious reservations about this particular presentation: I disagreed with his segmentation of the market, and I did not feel that a sufficient distinction was made between MDM on the one hand and instances of MDM such as CDI (customer data integration), GSM (global supplier management) and PIM (product information management) on the other.

I will start with this second issue. To me, CDI, PIM and the like are subsets of MDM; MDM is, essentially, not domain-specific. There are two reasons why this is important. The first is that if vendors, users, pundits et al. continue to treat CDI, GSM and similar applications as separate and distinct, then we will end up with exactly the same siloed applications that have been the bane of IT for the last two decades.

Suppose, for example, that you want to implement a master data management solution for SLAs (service level agreements). That would involve a customer, a supplier, various services, a contract and metrics. In other words, such an application would span multiple master data domains, some of which are about people (customers, suppliers) and some of which are about things (services, the contract itself, metrics).
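To make this concrete, here is a minimal sketch in Python (with purely hypothetical names, not drawn from any vendor's product) of the master data such an SLA application would need to reference; note how it spans the "people" domains (customer, supplier) and the "thing" domains (services, contract, metrics) in a single structure.

# Hypothetical illustration: the master data touched by an SLA application.
# It deliberately mixes "people" domains (customer, supplier) with "thing"
# domains (service, metric) under a single contract.
from dataclasses import dataclass
from typing import List

@dataclass
class CustomerRecord:       # CDI territory
    customer_id: str
    name: str

@dataclass
class SupplierRecord:       # GSM territory
    supplier_id: str
    name: str

@dataclass
class ServiceRecord:        # PIM-like territory
    service_id: str
    description: str

@dataclass
class Metric:
    name: str
    target: float
    unit: str

@dataclass
class ServiceLevelAgreement:
    contract_id: str
    customer: CustomerRecord
    supplier: SupplierRecord
    services: List[ServiceRecord]
    metrics: List[Metric]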

Even if we assume that existing CDI solutions can easily be extended to support suppliers and that PIM solutions can encompass contracts and services (and there seems to be a consensus that PIM and CDI require significantly different capabilities), we still don't want to have to build an application that spans two different systems, which would put us straight back into the EAI (enterprise application integration) territory we would rather avoid.

So, given that "people" and "things" are distinct, what is needed is a common set of services that supports as much as possible of both in a single platform, with only a very specific and minimised set of differentiated capabilities on top of that platform relating to CDI, PIM or whatever. This platform then becomes the MDM base upon which application instances are implemented.
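A minimal sketch of that layering, again hypothetical and assuming nothing about IBM's, or anyone else's, actual API: the generic services (storage, cross-referencing to source systems, matching) live in one platform class, while the CDI- or PIM-specific behaviour is confined to deliberately thin subclasses.

# Hypothetical layering: domain-agnostic MDM services in one platform class,
# with only thin domain-specific specialisations on top.
from typing import Dict, List

class MDMPlatform:
    """Domain-agnostic services shared by every master data domain."""

    def __init__(self) -> None:
        self._records: Dict[str, dict] = {}
        self._xrefs: Dict[str, List[str]] = {}   # master id -> source-system ids

    def register(self, master_id: str, record: dict, source_ids: List[str]) -> None:
        self._records[master_id] = record
        self._xrefs[master_id] = source_ids

    def match(self, candidate: dict) -> List[str]:
        # Placeholder matching rule: exact match on a 'name' attribute.
        return [mid for mid, rec in self._records.items()
                if candidate.get("name") is not None
                and rec.get("name") == candidate.get("name")]

class CustomerMaster(MDMPlatform):
    """CDI-specific behaviour, kept deliberately thin."""

    def match(self, candidate: dict) -> List[str]:
        # Customer matching might, say, fall back to postal address.
        hits = super().match(candidate)
        if not hits and candidate.get("address") is not None:
            hits = [mid for mid, rec in self._records.items()
                    if rec.get("address") == candidate.get("address")]
        return hits

class ProductMaster(MDMPlatform):
    """PIM-specific behaviour, e.g. matching on manufacturer part numbers."""

    def match(self, candidate: dict) -> List[str]:
        return [mid for mid, rec in self._records.items()
                if candidate.get("part_no") is not None
                and rec.get("part_no") == candidate.get("part_no")]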

As far as the breakdown of the market is concerned, I felt that the model presented was too simplistic. In our view, there are three approaches you might take: a registry, where you hold a reference mapping to the various participating applications; a repository, which extends the registry by including a mapping to the elements that make up the "best record"; and a hub, where the best record is actually held within the hub itself. This contrasts with the model that was presented, which differentiated only between hubs (albeit with sub-classes) and registries. A more detailed discussion of our approach is included in Harriet Fryman's recently published MDM Report.
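The distinction can be sketched as data structures (hypothetical and purely illustrative): a registry holds only cross-references into the participating applications; a repository adds a mapping recording which source supplies each element of the "best record"; a hub materialises the best record itself.

# Hypothetical illustration of registry vs repository vs hub.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class RegistryEntry:
    # Registry: just a cross-reference to the same party in each source system.
    master_id: str
    source_ids: Dict[str, str]      # e.g. {"CRM": "C-123", "ERP": "40017"}

@dataclass
class RepositoryEntry(RegistryEntry):
    # Repository: additionally records which source is authoritative for each attribute.
    best_record_map: Dict[str, str] = field(default_factory=dict)   # e.g. {"name": "CRM", "address": "ERP"}

@dataclass
class HubEntry(RepositoryEntry):
    # Hub: the consolidated best record is physically held here.
    best_record: Dict[str, str] = field(default_factory=dict)       # e.g. {"name": "Acme Ltd", ...}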

The speaker at the conference spent very little time discussing registries, but the importance of registries and repositories is that they are much less disruptive (and less expensive) to implement. For example, Purisma claims a typical implementation time of just four to six weeks, compared with the months, or even years, that a hub implementation might take. The scalability issues referred to above also go away.

It is our belief that what users would really like is to be able to start with a registry (or repository) and then migrate, progressively, to more complex implementations.

Copyright © 2006, IT-Analysis.com
