Original URL: http://www.theregister.co.uk/2009/03/19/future_virtualization/

Sun banks future on multicore virtualization

Your data center needs a window

By Rik Myslewski

Posted in Virtualization, 19th March 2009 20:45 GMT

Multicore Expo Server virtualization is at the top of most IT admins' to-do lists, but it still has a long way to go in the development of management software, I/O technology, and storage technology before its full value can be realized.

Or so said Denis Sheahan, a Distinguished Engineer at Sun Microsystems, in a presentation at this week's Multicore Expo in Santa Clara, California.

Sun's Distinguished Engineer called server virtualization "a key technology to make the most of multicore systems." He should know, seeing as how his specialty is multicore chip multithreading (CMT) processors such as Sun's UltraSPARC T2 Plus, née Victoria Falls, which puts eight processing cores with eight threads each, plus the memory controller, I/O, and SMP support, on the same chip.

Sheahan said that Sun will be moving to 16-core processors in the near future, which - assuming the eight threads per core carry over - will give their four-socket servers a total of 512 threads (16 cores × 8 threads × 4 sockets). Needless to say, multicore programming challenges are no news to him.

"We've made a big bet on multicore," Sheahan said, adding that server virtualization is a technology on which Sun has "invested an awful lot of money."

According to a recent study published by InformationWeek, 65 per cent of major sites are implementing server virtualization now, 11 per cent are planning to implement it in the next 12 months, 13 per cent plan to implement it "at some time in the future," and 11 per cent have no plans for server virtualization.

Those 11 per cent puzzle Sheahan, who says - referring to server vendors - that "If you don't have a virtualization story today with your servers, you're not really in the game anymore."

As every IT admin knows, the benefits of server virtualization are legion, cutting both capital and operating expenditures.

The efficiencies virtualization brings include higher levels of server utilization, especially as core counts climb. With improved utilization come reduced hardware costs, lower power requirements, less need for cooling and space, and - this may not please IT types - fewer admins required to keep a data center up and running.

Sheahan said that another need his customers have expressed is flexibility in the deployment of services throughout their data centers. "They really want to have a 'single pane of glass' where they can just manage the entire data center. They want to be able to move applications, they want to be able to provision servers really quickly, get as many systems up and running in as short a time as possible.

"The Nirvana for these customers," he said, "is that they would have a one-touch server, with the idea that they could have one person come into the data center, deploy the server, rack it up, power it on, and then everything else would be remote."

Customers also want a uniform management system for disparate systems - x86, Sparc, Power, whatever - all controlled from a single console that can move applications on the fly from one server to another.
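Mechanically, that single console amounts to one abstraction over many hypervisors. A minimal sketch in Python - every name here is invented for illustration, and no real management product's API is implied - might model the console's view like so:

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        """One managed server, whatever its architecture."""
        name: str
        arch: str                         # "x86", "sparc", "power", ...
        vms: list = field(default_factory=list)

    def move(vm: str, src: Host, dst: Host) -> None:
        # The console's bookkeeping only; a real on-the-fly move also
        # needs hypervisor support and compatible hosts underneath.
        src.vms.remove(vm)
        dst.vms.append(vm)
        print(f"{vm}: {src.name} ({src.arch}) -> {dst.name} ({dst.arch})")

    sparc_box = Host("t5240-1", "sparc", ["web01", "web02"])
    x86_box = Host("dl380-7", "x86", ["db01"])
    move("web02", sparc_box, x86_box)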

Blame it on Web 2.0

According to Sheahan, the wildly variable workloads of Web 2.0 services are a perfect example of how centralized management can improve efficiency. "People have no clue how a [Web 2.0] workload is going to vary over a day," he said. "There may be periods of time of intense requirements for their infrastructure and then periods of time when it's idle."

Obviously, the ability to manage such workloads dynamically can bring big savings in power and cooling. When loads increase, you can throw more resources at them. When they drop, you can quickly take resources offline and redistribute applications among the remaining servers.
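The arithmetic behind such a controller is simple enough. A naive sketch - thresholds and capacities invented for illustration, and a real controller would also weigh migration costs - might size the powered-on pool like this:

    import math

    def hosts_needed(load: float, capacity_per_host: float,
                     target_utilization: float = 0.7) -> int:
        """How many powered-on servers keep each at roughly 70 per cent."""
        return max(1, math.ceil(load / (capacity_per_host * target_utilization)))

    # Web 2.0-style swings: the pool shrinks overnight, grows at peak.
    for load in (60.0, 120.0, 900.0):          # requests/sec, say
        print(load, "->", hosts_needed(load, capacity_per_host=100.0), "hosts")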

Single-point remote management of multiple OSes and instant dynamic control of workloads. Add to that single-point disaster recovery, and you've got a fine vision. But we're not quite there yet.

Sheahan pointed out, however, that since virtualization has only become pervasive during the past few years, "It's really only at the start of its life." Still, "Given the pressures of the market, it's definitely going to be around for a long time to come - but it obviously has to go to the next phase."

According to Sheahan, that next phase will include the ability to better guarantee service-level agreements (SLAs) in cloud-based data centers. He also called virtualization of storage "the next big thing."

He sees virtualization being "driven down into the hardware." Coming are processor features that better enable virtualization, along with I/O virtualization such as the PCI Special Interest Group's I/O Virtualization specification, often referred to simply as PCI-SIG IOV.

These improvements will allow virtual machines to better share hardware - especially important, says Sheahan, in reference to I/O, which he claims can now put a 30 to 40 per cent hit on virtual-machine efficiency. A related improvement will be network-stack virtualization, which will allow you, according to Sheahan, to "split up a real, physical NIC and guarantee quality of service to virtual environments."
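A back-of-the-envelope model of that NIC split - in the spirit of Solaris's Crossbow virtual NICs, though the names and numbers here are invented - is essentially bandwidth bookkeeping:

    class PhysicalNic:
        """A physical NIC carved into virtual NICs with guaranteed bandwidth."""

        def __init__(self, name: str, capacity_mbps: int):
            self.name = name
            self.capacity_mbps = capacity_mbps
            self.vnics = {}                   # vnic name -> guaranteed Mb/s

        def create_vnic(self, vnic: str, guaranteed_mbps: int) -> None:
            committed = sum(self.vnics.values())
            if committed + guaranteed_mbps > self.capacity_mbps:
                raise ValueError("guarantee would oversubscribe the link")
            self.vnics[vnic] = guaranteed_mbps

    nic = PhysicalNic("e1000g0", capacity_mbps=1000)
    nic.create_vnic("webzone0", 400)          # 400 Mb/s guaranteed to one VM
    nic.create_vnic("dbzone0", 300)           # the rest stays unreserved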

Another next step will be improved management. He described customers who rushed to virtualize their data centers - moving, for example, from 100 physical servers to 500 virtual servers - only to discover that management was missing. And by "management," he wasn't referring to the guys in the corner office who sign the purchase orders.

While Sheahan conceded that management tools exist, improvements are needed in virtual machine, hardware, and OS discovery; server provisioning; software and OS updating and patch application; and permissions management.

Sheahan sees a tremendous market opportunity for developers of server-management software that can work in heterogeneous environments. He put it simply and straightforwardly: "The company with the best management is the one that's going to win."

Also, he foresees a day when desktops will be virtualized, with various and sundry OSes - Mac, Windows, Linux, Solaris - running in virtualized environments on heterogeneous servers, accessed from individual desktops. Sun implements this through VirtualBox, technology it purchased last February.

Running any OS on a virtual machine accessed by any desktop has one interesting side benefit, said Sheahan: "Today, Mac OS and Mac laptops are becoming the de facto standard among developers." In a virtualized-desktop environment, developers can develop and test apps on Windows, Linux, Solaris, whatever, all without having to give up their preferred Mac hardware.
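On a developer's Mac, that workflow boils down to driving guest OSes from the command line. A minimal sketch - the VM names are assumptions, though the startvm and controlvm subcommands shown are standard parts of VirtualBox's VBoxManage CLI - looks like this:

    import subprocess

    def start_headless(vm: str) -> None:
        """Boot a guest with no local window; connect to it remotely."""
        subprocess.run(["VBoxManage", "startvm", vm, "--type", "headless"],
                       check=True)

    def power_off(vm: str) -> None:
        subprocess.run(["VBoxManage", "controlvm", vm, "poweroff"], check=True)

    # Spin up test targets without leaving the Mac:
    for vm in ("win-test", "ubuntu-test", "solaris-test"):
        start_headless(vm)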

And non-developers working in a virtualized-desktop environment will have to accept something less pricey than a MacBook. After all, as Sheahan says, "Having dedicated hardware is not cost-effective anymore." ®