Sun banks future on multicore virtualization
Your data center needs a window
Blame it on Web 2.0
According to Sheahan, the wildly variable workloads of Web 2.0 services are a perfect example of how centralized management can improve efficiency. "People have no clue how a [Web 2.0] workload is going to vary over a day," he said. "There may be periods of time of intense requirements for their infrastructure and then periods of time when it's idle."
Obviously, the ability to manage such workloads dynamically can bring big savings in power and cooling. When loads increase, you can throw more resources at them. When they drop, you can quickly take resources offline and redistribute applications among the remaining servers.
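The scale-down arithmetic behind that claim is simple enough to sketch. The snippet below is purely illustrative - the function name, the capacity figures, and the 25 per cent headroom are assumptions for the example, not anything from Sun's tooling - but it shows the kind of calculation a dynamic workload manager makes when deciding how many servers to keep powered on:

```python
# Illustrative sketch only: decide how many servers to keep online for the
# current aggregate load. Names and thresholds are hypothetical.
import math

def servers_needed(total_load, per_server_capacity, headroom=0.25):
    """Return how many servers to keep powered on, leaving spare headroom
    for the sudden spikes of a bursty Web 2.0-style workload."""
    effective_capacity = per_server_capacity * (1.0 - headroom)
    return max(1, math.ceil(total_load / effective_capacity))

# Midday peak: 4,200 requests/sec across servers rated at 500 req/sec each
print(servers_needed(4200, 500))  # 12 servers online, the rest powered down
# Overnight lull: load falls to 300 req/sec
print(servers_needed(300, 500))   # a single server suffices
```

The power and cooling savings come from the gap between the two answers: everything not needed for the current load can be switched off until demand returns.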
Single-point remote management of multiple OSes and instant dynamic control of workloads. Add to that single-point disaster recovery, and you've got a fine vision. But we're not quite there yet.
Sheahan pointed out, however, that because virtualization has become pervasive only in the past few years, "It's really only at the start of its life." Still, "Given the pressures of the market, it's definitely going to be around for a long time to come - but it obviously has to go to the next phase."
According to Sheahan, that next phase will include the ability to better guarantee service-level agreements (SLAs) in cloud-based data centers. He also called virtualization of storage "the next big thing."
He sees virtualization being "driven down into the hardware." Coming are processor features that better enable virtualization, along with I/O virtualization such as the PCI Special Interest Group's I/O Virtualization specification, often referred to simply as PCI-SIG IOV.
These improvements will allow virtual machines to better share hardware - especially important, said Sheahan, for I/O, which he claims can now put a 30 to 40 per cent hit on virtual-machine efficiency. A related improvement will be network-stack virtualization, which, according to Sheahan, will allow you to "split up a real, physical NIC and guarantee quality of service to virtual environments."
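To make the NIC-partitioning idea concrete, here is a hedged sketch of the bookkeeping involved: hard per-VM guarantees, plus an even split of whatever capacity is left over. Everything here - the function name, the 1Gb/s link, the per-VM figures - is a hypothetical illustration; real partitioning happens in the hardware and network stack, not in application code:

```python
# Hypothetical sketch of splitting a physical NIC's bandwidth into
# guaranteed per-VM shares, as described by Sheahan.
def allocate_shares(link_mbps, guarantees):
    """Give each VM its guaranteed bandwidth, then divide the remaining
    capacity evenly as best-effort headroom. Returns Mb/s per VM."""
    reserved = sum(guarantees.values())
    if reserved > link_mbps:
        raise ValueError("guarantees oversubscribe the physical link")
    spare = (link_mbps - reserved) / len(guarantees)
    return {vm: mbps + spare for vm, mbps in guarantees.items()}

# A 1 Gb/s NIC split among three VMs with hard minimums
shares = allocate_shares(1000, {"web": 400, "db": 300, "batch": 100})
print(shares)  # each VM gets its guarantee plus an equal cut of the spare
```

The point of the guarantee check is the quality-of-service promise: a VM's minimum is honored no matter what its neighbors do, which is exactly what sharing a NIC without hardware support cannot ensure today.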
Another next step will be improved management. He described customers who rushed to virtualize their data centers - moving, for example, from 100 physical servers to 500 virtual servers - only to discover that management was missing. And by "management," he wasn't referring to the guys in the corner office who sign the purchase orders.
While Sheahan conceded that management tools exist, improvements are needed in virtual machine, hardware, and OS discovery; server provisioning; software and OS updating and patch application; and permissions management.
Sheahan sees a tremendous market opportunity for developers of server-management software that can work in heterogeneous environments. He put it simply and straightforwardly: "The company with the best management is the one that's going to win."
Also, he foresees a day when desktops will be virtualized as well, with various and sundry OSes - Mac, Windows, Linux, Solaris - running in virtualized environments on heterogeneous servers, accessed by individual desktops. Sun implements this through VirtualBox, a technology it acquired last February.
Running any OS on a virtual machine accessed by any desktop has one interesting side benefit, said Sheahan: "Today, Mac OS and Mac laptops are becoming the de facto standard among developers." In a virtualized-desktop environment, developers can develop and test apps on Windows, Linux, Solaris, whatever, all without having to give up their preferred Mac hardware.
Non-developers working in a virtualized-desktop environment, though, may have to accept something less pricey than a MacBook. After all, as Sheahan said, "Having dedicated hardware is not cost-effective anymore." ®