Where does WAN acceleration by virtual machine actually get us?

A tour of the estate

Once these suppliers move their software onto an x86 hardware platform, the door is open to the next step: running it as a VM on x86 servers. The thing is, as many people will realise, a dedicated hardware appliance isn't busy all the time. It sits there, costing money and doing nothing when idle. If it were a VM, the accountants say, it would share an x86 server platform with other workloads and cost less when it was doing nothing.
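To put illustrative numbers on the accountants' argument (the figures below are assumptions made up for the sake of the sums, not anything from a vendor or from this article), consolidating a rack of lightly loaded appliances onto a handful of shared hosts is where the saving comes from:

import math

# Back-of-envelope consolidation sums. Every figure here is assumed,
# purely to illustrate the argument in the paragraph above.
appliances = 10          # dedicated appliances today (assumed)
avg_utilisation = 0.15   # each busy about 15 per cent of the time (assumed)
headroom = 0.70          # keep shared hosts below 70 per cent load (assumed)

combined_load = appliances * avg_utilisation        # 1.5 hosts' worth of real work
shared_hosts = math.ceil(combined_load / headroom)  # 3 shared x86 hosts
print(f"{appliances} mostly idle appliances -> {shared_hosts} shared hosts")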

It means, though, that data centre management and admin have to change to reflect this. Just because everything is an app in VMwareland doesn't mean that existing application and server admin people can handle networking and storage system apps. The data centre admin organisation is going to have to get networking and storage smarts to cope.

We're also getting to the point where larger enterprises could buy substantial unified bladed server, networking and storage systems (virtualised mainframes, in effect) from the likes of Cisco (UCS) and HP (Matrix) and, no doubt, IBM, with one-throat-to-choke support. But smaller enterprises could end up heading towards the same mainframe-like approach if they go full throttle into server virtualisation and adopt VM-based storage and networking functionality as it becomes available, on the "one box fewer to manage is goodness" principle.

The counter to this saving is that there are still many throats to choke when service gets interrupted. There is a reseller opportunity here: unifying the potentially fragmented and divisive support scene.

Another thought: if a customer has to buy specialised hardware and software to get a needed data centre facility, such as network routing, SAN fabric switching, filer storage or WAN optimisation, then changing suppliers is not so easy. If that storage or WAN optimisation functionality is disaggregated into software running in a VM, with a storage JBOD (just a bunch of disks) or router hardware underneath given its identity, or functional personality, by that software, then changing the software becomes much easier. It's just, for argument's sake, a SAN iSCSI block-access layer between apps needing block data and a JBOD array providing the disks, with standard interfaces on either side.
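Here's a minimal sketch of that "standard interfaces on either side" idea, assuming a toy block interface and two hypothetical suppliers; every name below is invented for illustration and describes no real product's API:

# Illustrative only: a generic block-access interface with two interchangeable
# "vendor" implementations, both sitting on the same JBOD-style backing store.
# All names are hypothetical; no real product is being described.

from abc import ABC, abstractmethod

BLOCK_SIZE = 4096


class BlockTarget(ABC):
    """The 'standard interface' the apps see: read and write fixed-size blocks."""

    @abstractmethod
    def read_block(self, lba: int) -> bytes: ...

    @abstractmethod
    def write_block(self, lba: int, data: bytes) -> None: ...


class Jbod:
    """Stand-in for the dumb hardware: a bunch of disks, modelled as byte arrays."""

    def __init__(self, disk_count: int, blocks_per_disk: int):
        self.disks = [bytearray(blocks_per_disk * BLOCK_SIZE) for _ in range(disk_count)]
        self.blocks_per_disk = blocks_per_disk


class VendorABlockLayer(BlockTarget):
    """One supplier's software personality: stripes blocks across the JBOD."""

    def __init__(self, jbod: Jbod):
        self.jbod = jbod

    def _locate(self, lba: int):
        disk = lba % len(self.jbod.disks)
        offset = (lba // len(self.jbod.disks)) * BLOCK_SIZE
        return disk, offset

    def read_block(self, lba: int) -> bytes:
        disk, off = self._locate(lba)
        return bytes(self.jbod.disks[disk][off:off + BLOCK_SIZE])

    def write_block(self, lba: int, data: bytes) -> None:
        disk, off = self._locate(lba)
        # Pad or truncate to one block before writing it to the raw disk.
        self.jbod.disks[disk][off:off + BLOCK_SIZE] = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]


class VendorBBlockLayer(BlockTarget):
    """A rival's software personality: fills one disk at a time instead of striping."""

    def __init__(self, jbod: Jbod):
        self.jbod = jbod

    def _locate(self, lba: int):
        disk = lba // self.jbod.blocks_per_disk
        offset = (lba % self.jbod.blocks_per_disk) * BLOCK_SIZE
        return disk, offset

    def read_block(self, lba: int) -> bytes:
        disk, off = self._locate(lba)
        return bytes(self.jbod.disks[disk][off:off + BLOCK_SIZE])

    def write_block(self, lba: int, data: bytes) -> None:
        disk, off = self._locate(lba)
        self.jbod.disks[disk][off:off + BLOCK_SIZE] = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]


if __name__ == "__main__":
    jbod = Jbod(disk_count=4, blocks_per_disk=256)

    # The application only ever sees BlockTarget; changing supplier is a
    # one-line software change and the JBOD underneath is untouched.
    target: BlockTarget = VendorABlockLayer(jbod)
    target.write_block(7, b"hello block world")
    assert target.read_block(7).startswith(b"hello block world")

    target = VendorBBlockLayer(jbod)   # new software personality, same disks
    target.write_block(7, b"different vendor, same interface")
    assert target.read_block(7).startswith(b"different vendor, same interface")

The toy striping logic is beside the point; what matters is that the functional personality lives entirely in replaceable software while the JBOD underneath stays put.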

So you could swap in another supplier's VM-based iSCSI block-access software product quite easily. Ditto a filer. Ditto networking functionality, in theory, so long as you have a JBOD equivalent for networking. We don't, of course. Virtually all data storage outside a server's DRAM is hard disk drive-based. Fundamentally, every storage array is a JBOD with ornamentation, using substitutable hard disk drives from the same small set of HDD suppliers. The same isn't true for networking: there is no router or switch equivalent of a hard drive, more's the pity.

We won't see Cisco collapsing its networking functionality into a set of VMs and abandoning basic network hardware supply to the equivalent of Hitachi GST, Seagate and Western Digital any time soon. Sun's hardware and software strategy, with its open storage and open networking elements, was aiming in this direction, though. Whether Oracle will continue that thrust remains to be seen. ®
