How to wrestle storage in virtual server bear pits
Training server admins into server-and-storage admins
Deep Dive Server virtualisation is no longer just a topic for server admins; it also affects the management of storage and networking equipment. Its introduction means that, all of a sudden, server operations people have to learn how to deal with storage.
This can create administrative complexity: additional know-how has to be acquired, more procedural alignment between different experts becomes necessary, and a lot of information has to be exchanged. That can erode the advantages of server virtualisation, such as fast provisioning and greater IT agility. Consequently, storage system and virtualisation software vendors are increasingly working together to integrate virtual machine management (VMM) with array management in order to simplify operations.
It is important to know that how tightly a storage system integrates with a VMM depends very much on the vendors on both sides. A closer look at which VMM vendors offer which interfaces, and which storage vendors support them, is required. Software and firmware versions also have an impact, so details matter.
The basic relationship to storage is a simple one: the physical representation of a virtual server, aka a virtual machine (VM), is just a set of files – files that are in most cases stored on external storage arrays.
Plug-ins allow the VMM to get insight into the storage system (eg, the free capacity that can be used for virtual machines, or I/O behaviour). More sophisticated VMM products can then use such parameters to assist server admins during VM provisioning, or to optimise the placement of virtual machines when environmental conditions change. The VMM becomes more and more storage-aware, receiving information about RAID levels, thin- or thick-provisioned capacity, and replication states.
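As a minimal sketch of what such storage awareness buys the VMM, the Python below models a plug-in reporting array parameters back to a placement filter. All names (`DatastoreInfo`, `candidates_for_vm` and the field set) are hypothetical, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class DatastoreInfo:
    """Parameters a storage plug-in might report back to the VMM."""
    name: str
    free_gb: int          # unused capacity on the backing LUN
    raid_level: str       # eg "RAID5", "RAID10"
    thin_provisioned: bool
    replicated: bool

def candidates_for_vm(datastores, required_gb, need_replication=False):
    """Filter datastores that can host a new VM of the given size."""
    return [d for d in datastores
            if d.free_gb >= required_gb
            and (d.replicated or not need_replication)]

stores = [
    DatastoreInfo("ds-sas-01", free_gb=500, raid_level="RAID5",
                  thin_provisioned=True, replicated=False),
    DatastoreInfo("ds-ssd-01", free_gb=120, raid_level="RAID10",
                  thin_provisioned=False, replicated=True),
]

print([d.name for d in candidates_for_vm(stores, 100, need_replication=True)])
# prints ['ds-ssd-01']
```

The point is that the filtering happens inside the VMM, using data pushed up by the array, rather than the server admin phoning the storage team.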
This reduces the efforts of server operators collecting environmental parameters about the underlying storage systems, which are needed to assign the right infrastructure resources to a particular VM. Such an information transfer also increasingly includes the connectivity of the storage system, including technologies like multi-pathing.
The next step is to trigger VM-related actions directly on the storage system. For instance, the fastest way to create a new virtual machine is to copy one that has already been built. That copy can be executed on the storage system itself instead of being funnelled through a server, which saves host CPU cycles and bandwidth.
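The shape of such an offloaded copy can be sketched as follows; `Array`, `offload_copy` and `host_copy` are hypothetical stand-ins for a vendor copy-offload primitive and the host-mediated fallback, not real calls:

```python
class Array:
    """Toy model of a storage array that may support copy offload."""
    def __init__(self, supports_offload):
        self.supports_offload = supports_offload
        self.offloaded = []   # record of array-internal copies

    def offload_copy(self, src, dst):
        # In a real array this is one command; the data never leaves the box.
        self.offloaded.append((src, dst))
        return dst

def host_copy(src, dst):
    # Fallback: every block travels array -> server -> array.
    return dst

def clone_vm(vm_file, target, array):
    """Prefer the array-side copy primitive when the array advertises it."""
    if array.supports_offload:
        return array.offload_copy(vm_file, target)
    return host_copy(vm_file, target)
```

The design point is the capability check: the VMM asks the array what it can do and only falls back to moving blocks through the server when it must.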
The same applies to the creation, deletion or replication of VMs. Closer co-operation with storage functionality also shows up in features for better disk utilisation. Server virtualisation can tie up a lot of capacity even when it is not really used: disk space that has been claimed for server virtualisation is typically reported to the storage system as used.
Intelligent mechanisms like thin provisioning could not reclaim such unused capacity until recently. Now the storage device can be informed that the blocks are no longer in use, which improves reporting of disk space consumption and allows those blocks to be reclaimed. Features like thin provisioning do have a downside: overbooking of disk space. This can cause critical situations when real, and not merely booked, capacity is exhausted.
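The reclaim mechanism and the overbooking risk can both be illustrated with a toy thin pool; `ThinPool` and its methods are illustrative names, with `unmap` playing the role of the "these blocks are free again" hint from host to array:

```python
class ThinPool:
    """Toy thin-provisioned pool: blocks are allocated on first write and
    freed again when the host reports them unused (an UNMAP-style hint)."""
    def __init__(self, physical_blocks):
        self.physical_blocks = physical_blocks
        self.allocated = set()

    def write(self, block):
        if block not in self.allocated:
            if len(self.allocated) >= self.physical_blocks:
                # Overbooking bites: logical space was promised,
                # but the physical capacity is exhausted.
                raise RuntimeError("pool exhausted: overbooked capacity")
            self.allocated.add(block)

    def unmap(self, blocks):
        """Host tells the array these blocks no longer hold live data."""
        self.allocated -= set(blocks)

    @property
    def used(self):
        return len(self.allocated)
```

Without the `unmap` call, blocks once written stay "used" forever from the array's point of view, which is exactly the reporting gap the newer integrations close.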
Normally, storage admins receive warnings when certain thresholds have been hit, while server admins may still believe they have enough space for their VM operations. This is why newer VMM tools pick up those warnings from the storage side, giving more time for preventive actions such as moving VMs to another array. The performance of virtual server environments is influenced not only by the available server hardware, but also by the storage hosting the VM files.
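Surfacing those array-side thresholds in the VMM amounts to a small translation step, sketched here with hypothetical threshold values and messages:

```python
def pool_alerts(used_gb, physical_gb, thresholds=(0.7, 0.9)):
    """Translate array-side utilisation into warnings a VMM could surface.

    Threshold values are illustrative; real arrays let admins set their own.
    """
    ratio = used_gb / physical_gb
    if ratio >= thresholds[1]:
        return "critical: move VMs to another array"
    if ratio >= thresholds[0]:
        return "warning: physical capacity filling up"
    return "ok"
```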
New developments allow a kind of tiering, in the sense that "quality classes" of LUNs are defined in terms of performance behaviour, for instance by using faster disks. This allows server admins to allocate VMs to the appropriate storage classes or profiles in order to balance speed and cost. The objective is to deliver the right service levels for an individual VM without burdening server operators with acquiring in-depth storage know-how.
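A profile-based placement decision can be as simple as the sketch below. The class names, media types and IOPS cut-offs are all invented for illustration; real profiles are defined by the storage team:

```python
PROFILES = {
    # Hypothetical storage classes, ordered fast/expensive -> slow/cheap.
    "gold":   {"media": "ssd",  "cost_per_gb": 1.00},
    "silver": {"media": "sas",  "cost_per_gb": 0.40},
    "bronze": {"media": "sata", "cost_per_gb": 0.10},
}

def pick_profile(iops_needed):
    """Map a VM's performance requirement to the cheapest adequate class,
    so the server admin never has to look at LUNs or RAID groups."""
    if iops_needed > 5000:
        return "gold"
    if iops_needed > 1000:
        return "silver"
    return "bronze"
```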
But the performance behaviour of storage systems is heavily influenced by the overall workload. As a storage system's capacity consumption grows towards its limits, performance typically degrades.
Moving VMs at array level
In such cases it can be good practice to redistribute the VM files to other storage arrays. However, doing such migrations manually may entail a lot of effort and often requires significant downtime for the VMs. Recent versions of some virtualisation products offer enhanced functionality for moving VM files from one array to another, which automates the migration procedure and reduces disruption time. The promise is live migration, achieved by a mix of intelligent copy functions and transaction integrity during the migration phase. This in turn allows tiering across storage arrays, by moving VMs with a high workload to faster storage systems.
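The copy-then-converge idea behind such live migrations can be modelled in a few lines. This is a toy model, not any product's algorithm: blocks are a dict, and writes that land during the bulk copy are re-copied afterwards so only a brief switchover pause remains:

```python
def migrate_vm_file(source_blocks, writes_during_copy):
    """Toy model of live storage migration.

    source_blocks maps block number -> data; writes_during_copy lists
    (block, data) writes that arrive while the bulk copy is running.
    """
    dest = dict(source_blocks)          # pass 1: bulk copy of all blocks
    dirty = {}
    for block, data in writes_during_copy:
        source_blocks[block] = data     # guest keeps writing to the source
        dirty[block] = data             # array/host tracks dirtied blocks
    dest.update(dirty)                  # pass 2: converge the dirty blocks
    return dest                         # switchover: dest now matches source
```

Real implementations iterate pass 2 until the dirty set is small enough, then briefly pause I/O for the final sync; the sketch collapses that loop into one round.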
A lot of integration efforts have already been made to enable disaster recovery concepts for virtualised servers. By nature, server virtualisation eases the realisation of disaster recovery concepts, as it simply requires the files which represent a VM to be available on a failover site. This can be achieved with standard mirroring or replication functions. Nevertheless, a lot of orchestration between the server, storage and networking infrastructure is necessary, for which sophisticated software suites are available.
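The orchestration part is essentially an ordered runbook. The sketch below, with invented names (`ReplicaSite`, `run_recovery_plan`, a numeric `priority` field), shows the core idea of powering on replicated VMs at the failover site in dependency order:

```python
class ReplicaSite:
    """Stand-in for a failover site holding mirrored VM files."""
    def __init__(self):
        self.running = []

    def power_on(self, vm_name):
        self.running.append(vm_name)

def run_recovery_plan(vms, site):
    """Power on replicated VMs at the failover site in priority order
    (eg, databases before the app servers that depend on them)."""
    for vm in sorted(vms, key=lambda v: v["priority"]):
        site.power_on(vm["name"])
    return site.running
```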
These are supplied by the vendors of server virtualisation (hypervisor) products such as VMware vSphere, Microsoft Hyper-V and Xen. In addition, more and more offerings on the market combine virtualisation software suites with server, storage and network hardware, higher-level orchestration software and logistical services to provide a complete solution. Typical names for these are converged infrastructures or infrastructure blocks.
All server virtualisation projects have an immediate impact on server and storage operations. To reduce the resulting complexity, growing functionality is available for integrating VM management with storage management. It starts with making the VMM storage-aware, but increasingly also includes performing storage-related actions directly from the virtual server management software. Finally, more and more holistic orchestration and complete infrastructure solutions are available.
The level of integration still differs a lot between products and requires detailed evaluation. Nevertheless, making use of these integrations is strongly recommended, as the pay-off in operational efficiency is significant. ®
About the SNIA
The Storage Networking Industry Association (SNIA) is a not-for-profit global organisation, made up of some 400 member companies spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organisations in the management of information. To this end, the SNIA is uniquely committed to delivering standards, education, and services that will propel open storage networking solutions into the broader market.
About SNIA Europe
The Storage Networking Industry Association (SNIA) Europe is dedicated to educating the market on the evolution and application of storage infrastructure solutions for the data centre by providing thought leadership and industry education focused on storage technologies and business value. For more information visit: www.snia-europe.org.