
Xen hardens up with zero-footprint guest introspection code

Anti-virus software can now peer into VMs running on open source hypervisor

The Xen Project's had a rough time with security of late, thanks to a run of five bad bugs, but has revealed plans to improve matters in the forthcoming version 4.6 of its open-source hypervisor.

The Project's new weapon is called libbdvmi and addresses the fact that running security software inside a guest virtual machine can be tricky. For one thing, security software adds unwelcome overhead, and that overhead can get nasty when a whole bunch of VMs go looking for new virus signatures at the same time. For another, advanced threats are very good at hiding from security software. Then there's the fact that security software isn't always tuned to the vagaries of virtualisation: a virtual machine doesn't fully “comprehend” that its resources really belong to its physical host.

The Xen Project's therefore enhanced its “guest introspection” code. Guest introspection is a method of interrogating the memory a VM occupies. If you can do that, you can then use security software running outside the VM to check if there are any nasties resident in RAM, or to look for signs of other oddities that signal danger.
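To make the idea concrete, here's a minimal, hypothetical sketch in C++ of the basic technique, using plain libxenctrl from the control domain rather than libbdvmi's own API: map a single page of a guest's RAM read-only and scan it for a byte pattern. The domain ID, guest frame number and the pattern being searched for are placeholders, not anything libbdvmi actually does.

    // Build (assuming Xen dev headers installed): g++ sketch.cpp -lxenctrl
    #include <xenctrl.h>
    #include <sys/mman.h>
    #include <algorithm>
    #include <iostream>

    int main()
    {
        const uint32_t domid = 1;         // assumed guest domain ID
        const unsigned long gfn = 0x1000; // assumed guest frame to inspect

        xc_interface *xch = xc_interface_open(nullptr, nullptr, 0);
        if (!xch) {
            std::cerr << "failed to open libxenctrl handle\n";
            return 1;
        }

        // Map one guest page read-only into our (dom0) address space.
        void *page = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                          PROT_READ, gfn);
        if (page) {
            // Trivial "scan": look for a two-byte pattern in the mapped page.
            const unsigned char needle[] = { 0x4d, 0x5a }; // 'MZ', for example
            const unsigned char *bytes =
                static_cast<const unsigned char *>(page);
            if (std::search(bytes, bytes + XC_PAGE_SIZE,
                            needle, needle + sizeof(needle))
                    != bytes + XC_PAGE_SIZE)
                std::cout << "pattern found in guest page\n";
            munmap(page, XC_PAGE_SIZE);
        }

        xc_interface_close(xch);
        return 0;
    }

Real introspection engines do far more than pattern-match a single page, of course: they walk guest data structures, watch for suspicious writes and react to events, which is where the overhead question comes in.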

Of course you don't want the process of sniffing VM RAM to impose an overhead either, which is why in 2014 the Xen Project embarked on an effort to create “zero overhead” guest introspection. Said code, libbdvmi, is now ready to roll, but not yet ready to use: you'll need Xen 4.6 (due to arrive no later than October 9th) to make it work. Releasing the code now, as the Project did today, means the curious, or those who wish to commercialise it in security products, can start exploring how to do so.

Xen's had similar features for a while, thanks to LibVMI. This time around, the folks at the Project reckon they've done a better job by ensuring libbdvmi touches VMs even more lightly, courtesy of the following features:

  • it connects only an introspection logic component to the guest, leaving on-the-fly OS detection and decision-making to that component;
  • it provides a xenstore-based way to know when a guest has been started or stopped (see the sketch after this list);
  • it has as few external library dependencies as possible – where LibVMI uses Glib for caching, libbdvmi sticks to STL containers, and the only other dependencies are libxenctrl and libxenstore;
  • it allows mapping guest pages from userspace, with the caveat that this implies mapping a single page for writing, where LibVMI's buffer-based writes could touch several non-contiguous pages;
  • it works as fast as possible – introspection generates a lot of events, so unnecessary userspace/hypervisor context switches incur unacceptable penalties, which is why one of the first patches had vm_events carry the interesting register values rather than querying the guest from userspace after receiving each event;
  • last but not least, since the Xen vm_event code has been in quite a bit of flux, being able to immediately modify the library code to suit a new need, or to update it for a new Xen version, has been a bonus to the project's development pace.
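On the xenstore point, the general mechanism looks roughly like the following sketch, which uses libxenstore's special @introduceDomain and @releaseDomain watch paths to learn when domains appear and disappear. It illustrates the underlying Xen plumbing under assumed token names, not libbdvmi's own interface.

    // Build (assuming Xen dev headers installed): g++ watcher.cpp -lxenstore
    #include <xenstore.h>
    #include <cstdlib>
    #include <iostream>

    int main()
    {
        struct xs_handle *xsh = xs_open(0);
        if (!xsh) {
            std::cerr << "failed to open xenstore\n";
            return 1;
        }

        // Special watch paths that fire when any domain is created or torn down.
        xs_watch(xsh, "@introduceDomain", "domain-started");
        xs_watch(xsh, "@releaseDomain", "domain-stopped");

        for (int i = 0; i < 10; ++i) {               // handle a few events, then quit
            unsigned int num = 0;
            char **event = xs_read_watch(xsh, &num); // blocks until a watch fires
            if (!event)
                break;

            // event[XS_WATCH_TOKEN] is the token we registered above.
            std::cout << "watch fired: " << event[XS_WATCH_TOKEN] << "\n";
            free(event);
        }

        xs_close(xsh);
        return 0;
    }

An out-of-guest agent built on this pattern can attach its introspection logic the moment a domain appears and tidy up when it goes away, without anything running inside the guest itself.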

libbdvmi is substantially based on work conducted by Bitdefender, with help from Intel.

libbdvmi can be found on GitHub if you fancy taking it for a whirl. ®
