The major driver behind our skepticism is the relentless desire for more compute power demonstrated by customers again and again. The only time the server market really crawls to a halt is when there's no money to spend on new gear because of a broad economic slowdown. Given the moderately healthy state of the worldwide economy, and surging demand from developing regions, it seems unreasonable to us to expect that customers will not find a need for the multi-core chips being thrown at them by Intel, AMD, Sun, IBM, Fujitsu and others.
Evidence for this horsepower pursuit can be found in the healthy hardware-based acceleration market. There's a resurgent desire for things such as FPGAs and GPGPUs (general purpose GPUs) that can speed specific workloads.
In addition, we're seeing a rise in the creation of so-called mega data centers by service providers. These customers may seem nichey, but they consume an awful lot of hardware and have yet to show a voracious appetite for virtualization software. They tend to buy cheap, lower-end systems and to run one application per box or to spread software across an entire data center.
Companies embracing the software as a service model seem destined to follow these service providers, opting for lighter weight boxes that can chew through threads and data.
And that brings us to the last point.
The x86 virtualization cheerleading seems to hinge on the ideas that the software from VMware, Microsoft and others will run well on increasingly cored chips and that customers will stomach paying for virtualization software and for the gobs of memory needed to power the code. VMware, for example, remains a bit clunky and isn't likely to enjoy any help from rising GHz, since GHz aren't rising. The company is working directly with chip makers on hardware hooks that improve performance but how will this work stack up against multi-threaded code running alone on a single box?
Customers will no doubt opt to place certain software loads on virtualized systems, while placing applications that need to fly on their own systems. So, in that sense, virtualization must have an impact on overall sales.
But the models being presented this week seem to discount altogether the rising SaaS model and the idea that coders will soon find novel, demanding uses for multi-core x86 processors.
The mainframe arena - the place where VMware pinched its genius - has survived virtualization for a long while, as has the Unix market. Each segment, including the x86 market, has its unique attributes, making apples v. apples comparisons tough. Still, customer demand for more horsepower serves as a constant across all three markets, and we suspect it will keep overall demand for servers high, despite virtualization code. ®
I'll caveat this by saying that I work for VMware. The opinions expressed here are my own and do not reflect the opinions of VMware.
I find the comments and the article rather amusing. People talk about how all of their apps are multi-threaded and so they should run on physical rather than virtual machines. Really? I would like to meet you. Most of the apps running in datacenters (large and small) are single-threaded. Writing a multi-threaded app is rather difficult and something not taught until you get towards the latter part of your master's or PhD in computer science. Take a look back at the x86 apps in your datacenter (not the stuff running on Solaris on SPARC or your P or Z series) but the actual x86 stuff. How much of it is truly multi-threaded? For the stuff that is, does it actually scale linearly? Most don't. And for that matter, almost all of the virtualization solutions for x86 today have SMP support for the virtual machines.
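The non-linear scaling the comment alludes to is just Amdahl's law: speedup is capped by the serial fraction of the work. A rough sketch (the 80%-parallel figure below is my own illustrative assumption, not a measurement from the comment):

```python
def amdahl_speedup(p, n):
    """Speedup on n cores when a fraction p of the work is parallelizable.

    Amdahl's law: the serial fraction (1 - p) runs at full cost no matter
    how many cores you add, so speedup tops out at 1 / (1 - p).
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even an app that is 80% parallel falls well short of linear scaling:
for cores in (1, 2, 4, 8, 16):
    print(cores, round(amdahl_speedup(0.8, cores), 2))
# 1 -> 1.0, 2 -> 1.67, 4 -> 2.5, 8 -> 3.33, 16 -> 4.0
```

So an "SMP-aware" app that is 80% parallel gets only a 4x speedup from 16 cores - which is the commenter's point about why few x86 apps genuinely need a whole multi-core box to themselves.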
Notice I also talk about x86. That's the market this article is talking about. For those of you talking about running everything in LPARs on the mainframe or in Solaris containers - great - if your apps will actually run there. Again, in an x86 dominated datacenter space you're going to have a tough time getting your apps to run on the mainframe or actually be supported there.
Then there's the talk about how everything should just run in chroot or some other esoteric Linux solution. If you're a 100% Linux shop that just may work for you. What about all of your Windows stuff or your NetWare stuff? You may laugh but that's still running in your datacenter. And if chroot and containers and other Linux solutions that have been around for a long time were really that great and could be operationalized then why haven't you been running your datacenter like that already? Hmmm....
My last comment is for the performance junkies. You know who you are: the people who say virtualization adds too much overhead, or that stacking 10 apps on a single server makes things run 10 times as slow. I've done countless performance collections (well over 2,000) on datacenters around the world and the results are almost identical: over 90% of the x86 apps in the datacenter run under 10% utilization. So why do you care if the virtualization solution runs at 90 or even 80% of native when your app only uses 10% of the box? Perhaps we need to do better at teaching math in the schools. And this only gets worse as you start adding more cores and processing power to servers. Your apps use even less.
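The arithmetic behind that argument is straightforward; a back-of-envelope sketch using the comment's own figures (10% native utilization, a hypervisor delivering 80% of native performance - illustrative numbers, not a benchmark):

```python
def virtualized_utilization(native_util, hypervisor_efficiency):
    """Fraction of the host an app needs under a hypervisor.

    If the hypervisor delivers only hypervisor_efficiency of native
    throughput, the same workload consumes proportionally more of the box.
    """
    return native_util / hypervisor_efficiency

# An app at 10% utilization on an 80%-of-native hypervisor:
print(virtualized_utilization(0.10, 0.80))  # 0.125, i.e. 12.5% of the box
```

Even with a steep 20% overhead, the app grows from 10% to 12.5% of the server - leaving room to stack several more like it on the same hardware.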
The biggest thing holding more efficient datacenters back these days is ignorant posts like the ones found here. Virtualization has been around for over 30 years thanks to IBM. It's not something new to be afraid of. We're simply taking tried and true solutions to the x86 space. Start educating yourselves and think about what you write before you post.
Virtualization for ease of management rather than consolidation
A recent Slashdot post covered an article at Interop News by Jeff Gould called "On the rPath to virtual containerization". Gould argues that virtualisation's ease of deployment, migration, backup, etc. will actually *increase* the demand for server hardware over the long term. He offers Intel's recent investment in VMware as support for this view. He then goes on to discuss rPath, which allows ISVs to build full-stack software appliances, all the way down to the OS. rPath uses a trimmed-down Linux (as small as 50 MB), significantly reducing the attack surface of the final product as well as the maintenance overhead. rPath is run by Billy Marshall, with RPM author Erik Troan as CTO; both are ex-Red Hatters.
Whether virtualization can actually increase the demand for servers or not, I agree with Ashlee that the demand for servers will not go down. I think the consolidation-by-virtualization trend is having a short-term impact on sales, but that will only last as long as there is inefficiency in the data centre to exploit. After that, unless the overall demand for more computing power is stopped -- and I can't see why it would -- server sales are sure to pick up again.
Fazal - The only people that talk about DLL Hell are the ones that haven't learned anything about Windows since some time mid last decade. Multiple apps on one Windows box might be unstable (I've not had stability problems on Windows since 2000sp3 - I've pulled 3+ month uptimes on my desktop machine running 2003, interrupted by power outages and driver updates) but more than one application running at a time doesn't really affect the DLLs. They really haven't been a problem since MS moved off the 9x kernel.
While Zones and VMware do sort of similar things, what I've read of the two leads me to believe that they aren't really worth comparing - sort of an apples and martians situation. Plus, who wants to use Solaris? I've never sworn at an OS so hard in my life - and I have to use OS X on a daily basis.