Original URL: https://www.theregister.com/2010/10/27/intel_open_data_center_alliance_comment/

Intel plays Switzerland in the cloud wars

Who will bear the ARM 'standard'?

By Timothy Prickett Morgan

Posted in Channel, 27th October 2010 20:55 GMT

Comment Whenever someone starts waving "standards" around, it is always a prelude to war. With the launch of the Open Data Center Alliance today by 70 IT organizations (some of them IT suppliers), Intel is trying to position itself as the neutral player in the coming cloud wars. Switzerland profited by acting as banker to warring countries, and Intel seeks to profit by maintaining and extending its dominance in the server racket.

In the computer industry, a standard is a bit of a double-edged sword. It is meant to be some kind of peace offering, a compromise between warring factions who argue over how different hardware and software components plug into each other, or how people and devices talk to software running on a system (that's the cooperative edge). But at the same time, a standard is brandished like a weapon (that's the competitive edge) to wound other players and chase them away from the pile of money in the data center or on the desktop.

No one ever argues against standards, of course. It is a bit like arguing against world peace or trying to persuade everyone that they are not entitled to life, liberty, and the pursuit of happiness. But standards in the computer business - real standards, developed cooperatively, endorsed by vendors, and supported by the budget dollars of end users - are hard to come by.

More often than not, the standard is set by the last man standing in a particular market. That's how we got TCP/IP instead of myriad other network protocols. That's why you can't kill Ethernet no matter how hard you try. And that's also why there still is not a Unix standard, or a blade server form factor standard, or even a virtual machine standard - and there never will be. The "might makes right" aspect of standards is why people refer to x64-based servers as "industry standard servers" when what they really mean is "volume servers that have crushed most other platforms out of existence or driven them into legacy status."

In fact, at the launch of the Open Data Center Alliance in San Francisco today, Kirk Skaugen, general manager of Intel's Data Center Group - the unit that makes chips and chipsets for servers - used the ramp of the x86 and now the x64 server as proof that Intel knew how to create standards, and was therefore justified in being the one and only technical advisor (and non-voting member) of the ODCA.

Skaugen walked down memory lane, reminding everyone that when the Pentium Pro chip was announced in late 1995, heralding a new era in server computing, Intel had a very tiny share of the server racket: the market was consuming 1 million boxes a year, and Intel had under 10 per cent of them. By 2000, thanks to the dot-com buildout and the ascendancy of Linux for Webby infrastructure and supercomputing, the market had grown to 4 million units.

And with Intel bringing together operating system and other software and hardware players, the market is now at 7 million units according to Skaugen (more like 8 million, really, until the virtualization blowback kicks in), with Intel accounting for the vast majority of shipments - somewhere between 96 and 97 per cent in most quarters. Skaugen said that nine out of ten systems running on clouds today use a Xeon processor, and brought up the future "Sandy Bridge" Xeons, referring to them as the "foundation of the next-generation cloud."

What Skaugen did not say is that this decade and a half of x86 and x64 server sales created the problem that required server virtualization in the first place. Had proper Unix operating systems and sophisticated workload management tools come to market along with ever-improving Intel chips, then server utilization would have been a lot higher and Intel a lot poorer.

Companies may have saved money on iron by moving off proprietary mainframes and minis, and then Unix boxes, but they ended up paying for it with low server utilization, soaring data center costs, high software licensing fees, and so on. And now, with server virtualization, companies are trying to get back to the good old days of a centralized and virtualized utility to support applications. (Don't get me wrong. Unix needed virtualization, too, as did the proprietary OS/400 and OpenVMS operating systems. They don't run flat out all day, either. But they did a lot better than Linux and Windows.)

The fact that x64 chips dominate the server chip market does not mean that they're a standard, however much Intel might want it to be so. There's no open spec for chip and system design. There is no community steering group. Sure, we have USB and PCI-Express peripheral standards, memory module standards, disk form factor and rack form factor standards, and all kinds of other standards. But there is no way for Intel's or AMD's customers, partners, and rivals to have a say in the future of the x64 server platform. Did Intel ask you what you wanted in its next chips? No. You waited to see what it would do next, just like the rest of us.

On the other ARM

The next generation of client devices seems to have a hankering for power-sipping ARM processors, and there is no reason in the world why the next generation of server platforms can't be based on multicore ARM chips built to a true set of standards adopted by ARM licensees and software partners - Microsoft with Windows, plus a whole bunch of Linux vendors eager to chase a new market. Intel's dominance on the desktop led to its dominance in the data center, after all. It seems like a trick that can be repeated. And it will be, if there is to be any justice in the world and progress in server design.

If Intel wants to rally people behind a standard, it is to get them lined up against the ARM threat and to get into the loop on what some of the biggest IT shops in the world are thinking about clouds. Those 70 original members account for $50bn in annual IT spending - about 4 per cent of total IT spending globally - and it won't be long before the alliance represents hundreds of billions of dollars in collective spending, and therefore influence.
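(As a back-of-the-envelope check on those figures, taking the 4 per cent claim at face value, the implied total is:

    \text{implied worldwide IT spend} = \frac{\$50\,\text{bn}}{0.04} \approx \$1.25\,\text{tn per year}

so the alliance's own numbers peg global IT spending at roughly $1.25tn a year.)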

I think it would be a wonderful thing if the IT shops of the world got together and made IT vendors actually create standards. We never got a Unix standard, but we did get two different warring factions who at least knocked some commonality into the Unix stacks. The reason the open systems revolution took off in the late 1980s was because companies were sick to death of proprietary mainframes and minicomputers. And the joke is that the remaining proprietary minis (from IBM and Hewlett-Packard) and IBM mainframes have for the past decade supported enough of the POSIX and SPEC 1170 standards that you could, in fact, brand their proprietary operating systems as a variant of Unix with a bit more bit twiddling here and there.

Unix won the standards war, in the long run, but the market shifted to Windows, which has become a volume-based standard like x64 processors. Those who want portability today code in Java, PHP, and a bunch of other runtimes and programming languages, and those who don't care run .NET. About half the market cared about portability (if you look at server spending by operating system) in the 1990s, and about half still care today (if you add Linux and Unix together).

Nothing has changed, and the battle for openness continues. It is just happening in other parts of the stack and now out there on the cloud layer.

Whenever vendors talk about standards and proclaim they want to alleviate vendor lock-in, you have to be suspicious. Either they are stupid, they think you are stupid, or they are liars. Maybe they are two of those, and perhaps all three in very rare cases. (Believe me, some vendors believe their own lies.) Red Hat, for instance, is promoting its Deltacloud standards not just because it is a good guy wearing a white - er, red - hat in the IT market, but because if there are open standards, then it will be easier for companies to move from VMware's pricey virtualization and cloud management products to Red Hat's cheaper ones.
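To see why a common API lowers the cost of that kind of switch, here is a minimal sketch in the spirit of what Deltacloud does - a hypothetical abstraction layer, not Deltacloud's actual interface; every class and method name below is invented for illustration:

    # A hypothetical cloud-abstraction layer in the spirit of Deltacloud.
    # None of these names are Deltacloud's real API; they only illustrate
    # why a common interface makes swapping providers cheap.

    from abc import ABC, abstractmethod

    class CloudDriver(ABC):
        """One interface that every provider-specific driver implements."""

        @abstractmethod
        def start_instance(self, image_id: str) -> str:
            """Boot a VM from an image and return an instance ID."""

        @abstractmethod
        def stop_instance(self, instance_id: str) -> None:
            """Shut the VM down."""

    class VMwareDriver(CloudDriver):
        def start_instance(self, image_id: str) -> str:
            # Provider-specific plumbing would live here.
            return f"vmware-{image_id}"

        def stop_instance(self, instance_id: str) -> None:
            print(f"stopping {instance_id} via the VMware tooling")

    class RedHatDriver(CloudDriver):
        def start_instance(self, image_id: str) -> str:
            return f"rhev-{image_id}"

        def stop_instance(self, instance_id: str) -> None:
            print(f"stopping {instance_id} via the Red Hat tooling")

    def migrate(image_id: str, source: CloudDriver, target: CloudDriver) -> str:
        """Code written against CloudDriver moves a workload by simply
        pointing at a different driver - no application changes needed."""
        vm = source.start_instance(image_id)
        source.stop_instance(vm)
        return target.start_instance(image_id)

    print(migrate("web-frontend", VMwareDriver(), RedHatDriver()))

Application code written against the common interface runs unchanged against either backend - which is precisely why the incumbent with the pricier stack has the least incentive to ship such a layer, and the challenger the most.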

Always look for the control points when trying to figure out a competitive landscape. When IT vendors in the systems racket (they weren't even called servers back then) had their own chips and operating systems - and usually multiple operating systems on the same platform - something like a Unix standard could at least be proposed, because the lock-in was deeper down in the stack, at the instruction level inside the chip and in the legacy applications. Under those conditions, you could get vendors to agree on a set of APIs to make applications more portable - mainly because the software companies that drove system sales were insisting on it, and so were the IT shops that bought their code.

But today, the lock-in when it comes to servers is embodied in all the stuff that wraps around the x64 platform - the form factor of the blade server, the preferred hypervisors and operating systems on the platform, and the system management tools that are tuned for the box. The lock-in is weaker, but this is precisely the kind of thing that server makers are not going to want to part with. It's all they have left. Ditto for server hypervisor vendors, who are now behaving just like physical server makers from days gone by, with their incompatible management consoles, incompatible VM formats, and so on. They not only want lock-in; their business models depend on putting off standards for as long as possible.

This, of course, puts them in direct contention with the ODCA - and perhaps with some of the most powerful CIOs in the world, who want exactly what Intel is billing as "Cloud Independence Day." These CIOs are lost in their server stacks and application silos, and they cannot do their job - delivering more IT for less money each year - with calcified systems.

They want exactly what Skaugen said they want: federated private and public clouds that allow workloads to move seamlessly between the two, without having to worry about security issues and vendor lock-in. They most certainly want a lot more automation in their data centers, and they want their storage pools and data centers to manage themselves as conditions change, just as our bodies do to keep us alive. In fact, I would argue that the job of managing pools of virtual servers - with myriad virtual machines and various policies governing energy consumption and utility pricing for infrastructure and, someday, software - is so complex that it can only work properly if it is automated.
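For a flavour of what that automation means in practice, here is a minimal sketch of one such policy - consolidating VMs off lightly loaded hosts so the idle iron can be powered down. The thresholds, host names, and first-fit placement below are all invented for illustration; a real resource scheduler would be far more sophisticated:

    # A toy policy loop for an automated VM pool: move VMs off nearly
    # idle hosts so those hosts can be switched off to save power.
    # Thresholds and data are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        capacity: float  # normalised CPU capacity; 1.0 = fully busy
        vms: dict[str, float] = field(default_factory=dict)  # vm name -> load

        @property
        def load(self) -> float:
            return sum(self.vms.values())

    IDLE_THRESHOLD = 0.2   # a host under 20 per cent busy is a donor
    FULL_THRESHOLD = 0.8   # never pack a host beyond 80 per cent busy

    def consolidate(hosts: list[Host]) -> list[str]:
        """Move VMs off lightly loaded hosts onto busier (but not full)
        ones; return the hosts that are now empty and can be powered down."""
        donors = [h for h in hosts if 0 < h.load < IDLE_THRESHOLD]
        for donor in donors:
            for vm, vm_load in list(donor.vms.items()):
                # First-fit placement: any host with enough headroom will do.
                for target in hosts:
                    if target is not donor and target.load + vm_load <= FULL_THRESHOLD:
                        target.vms[vm] = donor.vms.pop(vm)
                        break
        return [h.name for h in hosts if not h.vms]

    pool = [
        Host("rack1-a", 1.0, {"web01": 0.05, "web02": 0.08}),
        Host("rack1-b", 1.0, {"db01": 0.45}),
        Host("rack1-c", 1.0, {}),
    ]
    print(consolidate(pool))  # ['rack1-a', 'rack1-c']: candidates for power-down

The point is that this decision has to be re-made continuously, across thousands of hosts and shifting loads - exactly the kind of loop no human operator can run by hand.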

That means lower costs for managing the infrastructure (sorry, system administrators, but you are fired again) along with increased flexibility. The other Intel idea - client-aware services being pumped out of clouds - is interesting, too. Why not serve the best rendering of an application for whatever device you happen to have, using back-end resources in the data center if you have a crap graphics card?
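A minimal sketch of that client-aware idea (the capability fields and thresholds here are invented for illustration): the service looks at what the device reports about itself and decides where the heavy lifting happens.

    # A toy dispatcher for client-aware delivery: render on the device if
    # it has the muscle, otherwise fall back to server-side rendering in
    # the data centre. Capability fields are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class ClientProfile:
        device: str
        has_gpu: bool
        bandwidth_mbps: float

    def choose_render_path(client: ClientProfile) -> str:
        """Pick where the heavy lifting happens for this client."""
        if client.has_gpu:
            return "client-side: ship the model, let the device render it"
        if client.bandwidth_mbps >= 2.0:
            return "server-side: render in the data centre, stream frames"
        return "server-side: render stills at reduced quality"

    print(choose_render_path(ClientProfile("workstation", True, 100.0)))
    print(choose_render_path(ClientProfile("netbook", False, 8.0)))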

But wanting things to be standards doesn't make them standards. If the CIOs representing something more like half of the hardware and software spending get behind the ODCA effort, then it may actually get some teeth and be able to compel some standards for cloud interoperability like Intel is suggesting. And the only benefit Intel will get out of it will be the inside dope as things are being discussed by ODCA members. Every standard that is good for Intel will be just as good for Opteron, Sparc, Power, and even ARM. ®