Virtualization and ILM 2006: Looking Back

Virtualization will continue to normalize across more areas

One of the hottest topics in computing in 2006 was virtualization. Like many trends before it, it had many definitions, many disguises, and a significant FUD factor. Some advances were made, and much confusion was added by companies jumping on bandwagons or squandering precious marketing time wandering in the weeds of technical detail, but valuable ground was gained as well. This piece is not meant to be a detailed analysis of the year that was, but a look at how we got to today and what we expect for the coming year.

First, there are two main areas of virtualization from a systems viewpoint. One is storage virtualization, which involves storage area networks (SANs), network attached storage (NAS), and various virtualization offerings from storage companies; increasingly, it also includes software. The other is system virtualization, which covers virtualizing part or all of a system, whether that system is a client or a server. Much of the fuss around information lifecycle management (ILM) has died down, and several companies have dropped or scaled back messaging around the concept; ironically so, as many of this year's breakthroughs actually got us closer to being able to realize ILM visions. Perhaps it's just as well, though, as ILM meant something a little different to everyone who thought about it.

The two big areas of growth around virtualization in 2006 were software and management. One assumes that hardware is part of the picture, but although companies like Intel and AMD continue to make their products more accessible to various virtualization schemes, the real news was what vendors were doing with software and management capabilities.

All the usual suspects made announcements this year, including HP, IBM, HDS, and EMC. EMC did an awful lot of interesting work with Rainfinity and file virtualization, as well as with Documentum, not to mention the popularity of its VMware unit as a way to create virtual servers and other VMware products such as ACE, which is similar to Rainfinity. Microsoft, for its part, made news by releasing its virtualization format technology under its Open Specification Promise (OSP).

In general, the two areas for virtualization are deployment and management. On the deployment front, getting everything to work together is important and sometimes a challenge. One of the chief reasons for virtualizing is to be able to use software on a platform other than the one for which it was designed; if one cannot bring multiple platforms together, virtualization is of limited efficacy and loses much of its appeal. Additionally, if overhead slows performance or the products have trouble scaling, uptake of virtualization technologies will suffer. We expect vendors to spend time making more devices, and more versions of devices, work together, and we expect the scalability issue to be addressed. We also expect to hear an awful lot from the power and cooling lot this year, as more efficient use of resources (another intended benefit of virtualization) becomes critical for many companies.

Scalability is usually an issue for large and growing installations, and hand in hand with that, management becomes important. Managing multiple devices matters not only from the IT manager's point of view but also from the business view. Policy-based automation governed by business rules is the goal, and that means good reporting capabilities, good audit capabilities and, just as important, good security. The industry has generally treated reporting as a secondary feature, but the demands of compliance and governance are driving it to the top of the list. Virtualization providers that still spend most of their time talking about technical features will find themselves rewriting presentations to address these issues, if they haven't already.
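To make the idea concrete, here is a minimal sketch of what "policy-based automation governed by business rules" with an audit trail might look like. Everything here is hypothetical and illustrative; the `Volume` record, the 90-day rule, and the tier names are invented for the example and do not correspond to any vendor's product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record for a volume in a virtualized storage pool.
@dataclass
class Volume:
    name: str
    tier: str              # e.g. "fast" or "archive"
    days_since_access: int

@dataclass
class PolicyEngine:
    """Toy policy engine: a business rule decides placement, and every
    decision is written to an audit trail for compliance reporting."""
    audit_log: list = field(default_factory=list)

    def apply(self, vol: Volume) -> str:
        # Example business rule: data untouched for 90+ days is
        # migrated off the fast tier to cheaper archive storage.
        if vol.days_since_access >= 90 and vol.tier != "archive":
            action = f"migrate {vol.name} to archive tier"
            vol.tier = "archive"
        else:
            action = f"leave {vol.name} on {vol.tier} tier"
        # Every decision is logged with a timestamp, so reporting and
        # audit are first-class outputs rather than an afterthought.
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), vol.name, action)
        )
        return action

engine = PolicyEngine()
print(engine.apply(Volume("vol01", tier="fast", days_since_access=120)))
print(engine.apply(Volume("vol02", tier="fast", days_since_access=5)))
print(len(engine.audit_log))   # every decision lands in the audit trail
```

The point of the sketch is the shape, not the rule: the business policy sits in one place, the engine applies it uniformly across devices, and the audit log is what compliance and governance teams actually consume.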
