Fault tolerance in virtualised environments

Doesn’t get much more exciting than this

You the Expert

In this, our final Experts column in the current server series, our reader experts look at fault tolerance in virtualised environments. As ever, we’re grateful to Reg reader experts Adam and Trevor for sharing their experience. They are joined by Intel’s Iain Beckingham and Freeform Dynamics’ Martin Atherton.

Server virtualisation has a number of benefits when it comes to fault tolerance, but it also suffers from the ‘eggs-in-one-basket’ syndrome should a server go down. How can fault tolerance be built into the virtualised environment such that availability can be ensured?

Adam Salisbury
Systems Administrator

As server virtualisation technology has matured and become more widely adopted, it’s fast becoming clear that we can now do far more work with far fewer resources, even on older servers. Machines that would have turned end-of-life this year can now run a handful of virtual servers with ease, and the huge savings in terms of equipment, space and power costs are only now being appreciated. But with any new technology come new challenges and new hazards. While some proclaim that virtualisation improves fault tolerance and provides increased redundancy, does it really, or are we simply moving the risks and the points of failure?

I think one of the biggest developments in virtualisation in the last couple of years has been the hypervisor management suites, such as System Center Virtual Machine Manager from Microsoft and vCenter from VMware. Now we have flexibility like never before: we can convert physical servers to virtual ones and redeploy virtual server images onto physical hardware. We can also take images of live production servers for redundancy and availability, or to test potential upgrades, and these tools can greatly improve the productivity of the staff charged with using them.
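
As a hedged illustration of the kind of scripted imaging these suites allow, the sketch below uses the pyVmomi Python bindings to take a quiesced snapshot of a running VM through vCenter. The host address, credentials and VM name are placeholders, not details from the column.

    # Minimal sketch: snapshot a live VM through vCenter using pyVmomi.
    # Host, credentials and the VM name are illustrative placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local", user="admin", pwd="secret")
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    # Find the guest and take a quiesced snapshot while it keeps running.
    vm = next(v for v in view.view if v.name == "file-print-01")
    vm.CreateSnapshot_Task(name="pre-upgrade",
                           description="image taken before a patch run",
                           memory=False, quiesce=True)
    Disconnect(si)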

The freedom and versatility I now have, in an organisation currently engaged in a full-scale virtualisation project, to shift and consolidate workloads from one machine to another, one rack to another and one site to another is a massive evolution from traditional server management.

Yet our core servers, domain controllers, email servers and database servers remain largely unchanged; for us those servers are in fact more fault tolerant as dedicated physical boxes than they would be as virtual ones.

Non-business-critical systems can be and have been consolidated onto single servers for greater flexibility: where we once had a single server running four applications, we now have four virtual servers running one application each. Now we can reboot our patch management server without affecting the AV server or the file and print server.
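
That isolation is easy to demonstrate from a management script. A minimal sketch, assuming a Linux/KVM host driven through the libvirt Python bindings (the column doesn’t name the hypervisor, and the domain names are placeholders):

    # Minimal sketch: reboot one consolidated guest without touching the rest.
    # The connection URI and domain names are illustrative placeholders.
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        print(dom.name(), "running" if dom.isActive() else "stopped")

    # Restart only the patch-management VM; the AV and file/print guests
    # on the same physical box carry on serving requests.
    conn.lookupByName("patch-mgmt-01").reboot(0)
    conn.close()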

At some point, however, additional servers will need to be procured to mirror these hypervisors, since now if one component fails it can potentially affect all four servers rather than just one. Of course, if the budget won’t stretch to buying mirrored or clustered servers, there are already hosting companies willing to provide various solutions, with some offering to host replicas of on- or off-site hypervisors.
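
Whatever form the mirroring takes, the underlying logic is a heartbeat plus a restart policy. The loop below is purely illustrative Python, not any vendor’s HA product; the host names and the restart_guests_on() helper are hypothetical.

    # Illustrative failover loop, not a real high-availability product.
    # PRIMARY, STANDBY and restart_guests_on() are hypothetical names.
    import socket
    import time

    PRIMARY, STANDBY = "hv-a.example.local", "hv-b.example.local"

    def alive(host, port=22, timeout=2):
        """Crude liveness probe: can we open a TCP connection to the host?"""
        try:
            with socket.create_connection((host, port), timeout):
                return True
        except OSError:
            return False

    def restart_guests_on(host):
        """Hypothetical helper: power the replicated guests up on 'host'."""
        print(f"failing over guests to {host}")

    missed = 0
    while True:
        missed = 0 if alive(PRIMARY) else missed + 1
        if missed >= 3:                  # three missed heartbeats: declare it dead
            restart_guests_on(STANDBY)   # bring the mirrored copies up elsewhere
            break
        time.sleep(5)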

There are vendors who specialise in fault-tolerant hardware on which to run your hypervisors, but it is expensive to buy and even more expensive to code software for. An arguably cheaper option would be to invest in a blade frame for perhaps the greatest resilience, and an even cheaper option than that is fault-tolerant software.

Following the same principles as virtualisation software, fault-tolerant software runs as an abstraction layer across multiple off-the-shelf servers to create a single seamless interface: two physical servers appear as one, hosting maybe half a dozen virtual servers. The software continuously scans for hardware faults and, upon finding them, directs all I/O away from the failed component. Virtualisation of the virtualised it may be, but we’re still doing far more with far less.
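
Conceptually, such software keeps a table of healthy paths and simply stops scheduling I/O onto anything that fails a check. A toy sketch of that idea follows; the path names and the health check are hypothetical, not taken from any product.

    # Toy model of 'directing I/O away from a failed component'.
    # Path names and the health check are hypothetical stand-ins.
    import random

    class Path:
        def __init__(self, name):
            self.name, self.healthy = name, True

        def health_check(self):
            # Stand-in for a real hardware diagnostic; once failed, stays failed.
            self.healthy = self.healthy and random.random() > 0.01
            return self.healthy

    paths = [Path("server-a:hba0"), Path("server-b:hba0")]

    def submit_io(block):
        # Scan for faults, then send the request down a healthy path only.
        live = [p for p in paths if p.health_check()]
        if not live:
            raise RuntimeError("no healthy I/O paths left")
        print(f"writing block {block} via {live[0].name}")

    for blk in range(3):
        submit_io(blk)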

Iain Beckingham
Manager of the Enterprise Technical Specialist team in EMEA, Intel

Today, virtualisation is targeting mission-critical servers distributed in a dynamic virtual infrastructure, where loads are balanced within a cluster of servers and high-availability and automated-failover technologies exist. Intel is designing innovative new solutions that incorporate RAS (Reliability, Availability, and Serviceability) features.

RAS features are even more important with high-end systems, where higher virtualization ratios can be achieved. Intel’s new Xeon® processor, codenamed ‘Nehalem-EX’, will allow scaling beyond the traditional four sockets to systems with greater than 32 sockets.

With Nehalem-EX, Intel has invested extensively in incremental RAS capabilities to support high availability and data integrity, while minimizing maintenance cycles. All said, there are over twenty new RAS features in the Nehalem-EX platform that OEMs can use to build high-availability ‘mission-critical’ servers. Some of these features are built into the memory and processor interconnect and provide the ability to retry data transfers if errors are detected, or even to automatically heal data links with persistent errors to keep the system running until repairs are made.
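
The retry-and-heal behaviour described here lives in silicon, but the control flow is easy to caricature in software. The sketch below is purely illustrative; the link model and its thresholds are made up, not Intel’s design.

    # Caricature of link-level retry and lane 'healing'; not Intel's actual design.
    # The Link class, error rates and thresholds are invented for illustration.
    import random

    class Link:
        def __init__(self, lanes=16):
            self.lanes = lanes
            self.errors_seen = 0

        def _crc_ok(self):
            # Stand-in for the per-transfer CRC check real hardware performs.
            return random.random() > 0.05

        def transfer(self, payload, max_retries=3):
            for _ in range(max_retries):
                if self._crc_ok():
                    return payload            # delivered cleanly
                self.errors_seen += 1         # error detected: retry the transfer
            if self.errors_seen > 10 and self.lanes > 1:
                self.lanes //= 2              # persistent errors: drop failing lanes
            raise IOError("transfer failed after retries")

    link = Link()
    for word in range(100):
        try:
            link.transfer(word)
        except IOError:
            pass                              # in hardware, an escalation path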

Other capabilities, like Machine Check Architecture (MCA) recovery, take a page from Intel® Itanium® processors, RISC, and mainframe systems. MCA recovery supports a new level of cooperation between the processor and the operating system or VMM to recover from data errors that cannot be corrected with more standard Error Correcting Code (ECC) and that would have caused earlier systems to shut down.
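
The essence of MCA recovery is that the hardware reports an uncorrectable error and hands the decision to the OS or VMM, which can often retire the affected page or reset a single guest rather than halt the machine. A purely illustrative sketch of that decision, with hypothetical names throughout:

    # Illustrative MCA-recovery decision; not real kernel or VMM code.
    # The page addresses and owners mapping are hypothetical.
    def handle_uncorrectable_error(page, owners):
        """Decide what to do when ECC could not correct a memory error."""
        owner = owners.get(page)
        if owner is None:
            return "free page retired"           # nothing maps it; just retire it
        if owner == "kernel":
            return "panic"                       # kernel memory: no safe recovery
        # A guest's memory: retire the page and reset only that VM, instead of
        # bringing the whole host down as earlier systems would have done.
        return f"page retired, VM '{owner}' restarted"

    owners = {0x1000: "mail-vm", 0x2000: "kernel"}
    print(handle_uncorrectable_error(0x1000, owners))   # recover at VM level
    print(handle_uncorrectable_error(0x3000, owners))   # free page case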

Further capabilities enable OEMs to offer support for hot addition or replacement of memory and CPUs, bringing new components online and migrating active workloads to them when existing CPUs or memory indicate that they are failing.
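
On Linux hosts the operating-system side of this is exposed through sysfs CPU (and memory) hotplug. A minimal sketch, assuming root on a kernel and platform that support hotplug; the CPU numbers are placeholders.

    # Minimal sketch of CPU hotplug from the OS side (Linux sysfs).
    # Needs root and hotplug-capable hardware; the CPU ids are placeholders.
    from pathlib import Path

    def set_cpu_online(cpu, online):
        Path(f"/sys/devices/system/cpu/cpu{cpu}/online").write_text(
            "1" if online else "0")

    set_cpu_online(8, True)    # bring a newly added or spare CPU online
    set_cpu_online(3, False)   # retire a CPU that is reporting errors; the
                               # scheduler migrates its work to the others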

Hopefully this gives you an idea of the RAS platform capabilities that, along with clustering failover and virtualisation VM failover configurations, will take Nehalem-EX systems much further toward providing an even more reliable and robust platform for IT.
