
Fault tolerance in virtualised environments

Doesn’t get much more exciting than this


You the Expert

In this, our final Experts column in the current server series, our reader experts look at fault tolerance in virtualised environments. As ever, we’re grateful to Reg reader experts Adam and Trevor for sharing their experience. They are joined by Intel’s Iain Beckingham and Freeform Dynamics’ Martin Atherton.

Server virtualisation has a number of benefits when it comes to fault tolerance but it also suffers from the ‘eggs-in-one-basket’ syndrome should a server go down. How can fault tolerance be built into the virtualised environment such that availability can be ensured?

Adam Salisbury
Systems Administrator

As server virtualisation technology has matured and become more widely adopted, it’s fast becoming clear that we can now do far more work with far less resource, even on older servers. Machines that would have turned end-of-life this year can now run a handful of virtual servers with ease, and the huge savings in terms of equipment, space and power costs are only now being appreciated. But with any new technology come new challenges and new hazards. While some proclaim that virtualisation improves fault tolerance and provides increased redundancy, does it really, or are we simply moving the risks and the points of failure?

I think one of the biggest developments in virtualisation in the last couple of years has to be the hypervisor management suites, such as System Center Virtual Machine Manager from Microsoft and vCenter from VMware. Now we have flexibility like never before: we can convert physical servers to virtual ones and redeploy virtual server images onto physical hardware. We can also take images of live production servers for redundancy and availability, or to test potential upgrades, and these tools can greatly improve the productivity of the staff charged with using them.
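
As a rough illustration of the kind of task these suites automate, the sketch below uses the pyVmomi Python SDK to list the VMs registered in vCenter and snapshot one before a risky change. The vCenter address, credentials and VM name are placeholders, and this is a sketch rather than production code.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (hostname and credentials here are placeholders).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Enumerate every VM in the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)

# Snapshot a live VM before an upgrade ("patch-mgmt-01" is a made-up name).
target = next(vm for vm in view.view if vm.name == "patch-mgmt-01")
target.CreateSnapshot_Task(name="pre-upgrade", description="before patching",
                           memory=False, quiesce=True)

Disconnect(si)
```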

The freedom and versatility I now have, in an organisation currently engaged in a full-scale virtualisation project, to shift and consolidate workloads from one machine to another, one rack to another and one site to another is a massive evolution from traditional server management.

Yet our core servers (domain controllers, email servers and database servers) remain largely unchanged; for us, those servers are in fact more fault tolerant as dedicated physical boxes than they would be as virtual ones.

Non-business-critical systems can be, and have been, consolidated onto single servers for greater flexibility: where we once had a single server running four applications, we now have four virtual servers running one application each. Now we can reboot our patch management server without affecting the AV server or the file and print server.

At some point, however, additional servers will need to be procured to mirror these hypervisors, because if one component now fails it can potentially affect all four servers rather than just one. Of course, if the budget won’t stretch to buying mirrored or clustered servers, there are already hosting companies willing to provide various solutions, with some offering to host replicas of on-site or off-site hypervisors.
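
To picture what mirroring a hypervisor buys you, here is a minimal, self-contained sketch of the clustering idea: a heartbeat monitor that restarts a failed host’s guests on its mirror. The host names, guest names, timeout and restart logic are all invented for the example and don’t describe any particular product.

```python
import time

# Simulated cluster state: which guests run on which hypervisor,
# and when each hypervisor last sent a heartbeat.
guests = {"hv-primary": ["dc01", "mail01", "db01"], "hv-mirror": []}
last_heartbeat = {"hv-primary": time.time(), "hv-mirror": time.time()}
HEARTBEAT_TIMEOUT = 10  # seconds of silence before declaring a host dead

def record_heartbeat(host):
    last_heartbeat[host] = time.time()

def check_cluster():
    """Fail over guests from any host whose heartbeat has gone stale."""
    now = time.time()
    for host, seen in last_heartbeat.items():
        if now - seen > HEARTBEAT_TIMEOUT and guests[host]:
            survivor = next(h for h in guests if h != host)
            print(f"{host} missed heartbeats; restarting {guests[host]} on {survivor}")
            guests[survivor].extend(guests[host])
            guests[host] = []

# Example: the primary stops responding and the mirror takes over its guests.
record_heartbeat("hv-mirror")
last_heartbeat["hv-primary"] -= 60   # simulate a dead primary
check_cluster()
```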

There are vendors who specialise in fault-tolerant hardware on which to run your hypervisors, but these systems are expensive to buy and even more expensive to code software for. An arguably cheaper option would be to invest in a blade frame for perhaps the greatest resilience, and a cheaper option still is fault-tolerant software.

Following the same principles as virtualisation software, fault-tolerant software runs as an abstraction layer across multiple off-the-shelf servers to create a single seamless interface: two physical servers appear as one, hosting maybe half a dozen virtual servers. The software continuously scans for hardware faults and, upon finding them, directs all I/O away from the failed component. Virtualisation of the virtualised it may be, but we’re still doing far more with far less.
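
To make the “direct all I/O away from the failed component” idea concrete, here is a purely illustrative Python sketch of a mirrored write path: writes go to both replicas, and a replica that errors is dropped from the I/O path so the workload carries on. The class and device names are invented for the example, not any vendor’s implementation.

```python
class MirroredStore:
    """One logical store backed by two replicas; failed replicas are
    marked and silently removed from the I/O path."""

    def __init__(self, replicas):
        self.replicas = dict(replicas)   # name -> backing "device" (a dict here)
        self.failed = set()

    def write(self, key, value):
        for name, device in self.replicas.items():
            if name in self.failed:
                continue  # I/O already routed away from this component
            try:
                device[key] = value
            except Exception:
                self.failed.add(name)
                print(f"replica {name} failed; redirecting I/O to survivors")
        if len(self.failed) == len(self.replicas):
            raise RuntimeError("no healthy replicas left")

    def read(self, key):
        for name, device in self.replicas.items():
            if name not in self.failed:
                return device[key]
        raise RuntimeError("no healthy replicas left")

class BrokenDevice(dict):
    """Stand-in for a server with a hardware fault: every write fails."""
    def __setitem__(self, key, value):
        raise IOError("hardware fault")

# Two "physical servers" presented as one store; server-a fails mid-write.
store = MirroredStore({"server-a": BrokenDevice(), "server-b": {}})
store.write("config", "v1")      # server-a faults, I/O continues on server-b
print(store.read("config"))
```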

Iain Beckingham
Manager of the Enterprise Technical Specialist team in EMEA, Intel

Today, virtualisation is targeting mission-critical servers distributed in a dynamic virtual infrastructure, where loads are balanced within a cluster of servers and high-availability and automated-failover technologies exist. Intel is designing innovative new solutions that incorporate RAS (Reliability, Availability and Serviceability) features.

RAS features are even more important in high-end systems, where higher virtualisation ratios can be achieved. Intel’s new Xeon® processor, codenamed ‘Nehalem-EX’, will allow scaling beyond the traditional four sockets to systems of greater than 32 sockets.

With Nehalem-EX, Intel has invested extensively in incremental RAS capabilities to support high availability and data integrity, while minimizing maintenance cycles. All said, there are over twenty new RAS features in the Nehalem-EX platform that OEMs can use to build high-availability ‘mission-critical’ servers. Some of these features are built into the memory and processor interconnect and provide the ability to retry data transfers if errors are detected, or even to automatically heal data links with persistent errors to keep the system running until repairs are made.
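
As a loose software analogy (not a description of the actual link-level mechanism), the retry-on-detected-error behaviour resembles the checksum-and-retry loop below: corruption on a simulated link is detected on the receiving side and the transfer is repeated, with persistent failures escalated for repair.

```python
import hashlib
import random

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def flaky_link(data: bytes) -> bytes:
    """Simulated interconnect that occasionally corrupts a payload in flight."""
    if random.random() < 0.3:
        return b"\x00" + data[1:]
    return data

def transfer_with_retry(data: bytes, max_retries: int = 5) -> bytes:
    """Send data across the flaky link, verify a checksum on the far side,
    and retry the transfer whenever corruption is detected."""
    expected = checksum(data)
    for attempt in range(1, max_retries + 1):
        received = flaky_link(data)
        if checksum(received) == expected:
            return received
        print(f"attempt {attempt}: error detected, retrying transfer")
    raise RuntimeError("persistent link error; escalate to link healing / repair")

print(transfer_with_retry(b"cache line contents"))
```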

Other capabilities like Machine Check Architecture (MCA) recovery take a page from Intel® Itanium® processor, RISC and mainframe systems. MCA recovery supports a new level of cooperation between the processor and the operating system or VMM to recover from data errors that cannot be corrected with more standard Error Correcting Code (ECC) and that would have caused earlier systems to shut down.
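
One rough way to picture the OS/VMM side of that cooperation is the triage logic sketched below. The severity levels and actions are simplified, hypothetical stand-ins for what a real machine-check handler does; the point is that a recoverable error takes down only the affected guest rather than the whole host.

```python
from enum import Enum

class MceSeverity(Enum):
    CORRECTED = 1        # fixed by ECC; just log it
    RECOVERABLE = 2      # data lost but containable (e.g. a poisoned page)
    FATAL = 3            # state corrupted beyond recovery

def handle_machine_check(severity: MceSeverity, owner_vm: str | None = None):
    """Simplified triage a hypervisor might perform on a machine-check event."""
    if severity is MceSeverity.CORRECTED:
        print("corrected error logged; no action needed")
    elif severity is MceSeverity.RECOVERABLE:
        # Only the guest using the poisoned memory is stopped;
        # the rest of the host keeps running.
        print(f"retiring poisoned page; restarting affected guest {owner_vm}")
    else:
        # Older platforms effectively had only this option.
        print("unrecoverable error: halting the whole host")

handle_machine_check(MceSeverity.RECOVERABLE, owner_vm="db01")
```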

Further capabilities enable OEMs to support hot addition or replacement of memory and CPUs, bringing new components online and migrating active workloads onto them when existing CPUs or memory indicate that they are failing.
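
The hot-replacement flow can be pictured roughly as follows: a node whose corrected-error count suggests it is degrading has its workloads moved onto freshly added capacity before it is taken offline. The node names, error threshold and data structures are invented for the illustration.

```python
# Simulated hosts: corrected-error counts and the guests they carry.
nodes = {
    "node-a": {"corrected_errors": 120, "guests": ["mail01", "db01"]},
    "node-b": {"corrected_errors": 2, "guests": ["file01"]},
}
ERROR_THRESHOLD = 50   # corrected errors before we treat a node as failing

def hot_replace(failing: str, replacement: str):
    """Bring a new node online, migrate guests off the failing one, retire it."""
    nodes[replacement] = {"corrected_errors": 0, "guests": []}
    nodes[replacement]["guests"].extend(nodes[failing]["guests"])
    print(f"migrated {nodes[failing]['guests']} from {failing} to {replacement}")
    del nodes[failing]   # the component can now be pulled and repaired

failing_nodes = [n for n, info in nodes.items()
                 if info["corrected_errors"] > ERROR_THRESHOLD]
for name in failing_nodes:
    hot_replace(name, replacement="node-c")

print(nodes)
```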

Hopefully this gives you an idea of the RAS platform capabilities that, along with clustering failover and virtualisation VM-failover configurations, will help Nehalem-EX systems go much further toward providing an even more reliable and robust platform for IT.
