On the other ARM
The next generation of client devices seems to have a hankering for power-sipping ARM processors, and there is no reason in the world why the next generation of server platforms can't be based on multicore ARM chips built to a true set of standards adopted by ARM licensees and software partners like Microsoft with Windows and a whole bunch of Linux vendors eager to chase a new market. Intel's dominance on the desktop led to its dominance in the data center, after all. Seems like a trick that can be repeated. And it will be, if there is to be any justice in the world and progress in server design.
If Intel is rallying people behind a standard, it is to get them lined up against the ARM threat and to get into the loop on what some of the biggest IT shops in the world are thinking about clouds. Those 70 original members represent $50bn in annual IT expenses, which is about 4 per cent of total IT spending globally, and it won't be long before the alliance speaks for hundreds of billions of dollars in collective spending - and therefore influence.
I think it would be a wonderful thing if the IT shops of the world got together and made IT vendors actually create standards. We never got a Unix standard, but we did get two different warring factions who at least knocked some commonality into the Unix stacks. The reason the open systems revolution took off in the late 1980s was because companies were sick to death of proprietary mainframes and minicomputers. And the joke is that the remaining proprietary minis (from IBM and Hewlett-Packard) and IBM mainframes have for the past decade supported enough of the POSIX and SPEC 1170 standards that you could, in fact, brand their proprietary operating systems as a variant of Unix with a bit more bit twiddling here and there.
Unix won the standards war, in the long run, but the market shifted to Windows, which has become a volume-based standard like x64 processors. Those who want portability today code in Java, PHP, and a bunch of other runtimes and programming languages, and those who don't care run .NET. About half the market cared about portability (if you look at server spending by operating system) in the 1990s, and about half still care today (if you add Linux and Unix together).
Nothing has changed, and the battle for openness continues. It is just happening in other parts of the stack and now out there on the cloud layer.
Whenever vendors talk about standards and proclaim they want to alleviate vendor lock-in, you have to be suspicious. Either they are stupid, they think you are stupid, or they are liars. Maybe they are two of those, and perhaps all three in very rare cases. (Believe me, some vendors believe their own lies.) Red Hat, for instance, is promoting its Deltacloud standards not just because it is a good guy wearing a white – er, red – hat in the IT market, but because if there are open standards then it will be easier for companies to move from VMware's pricey virtualization and cloud management products to Red Hat's cheaper ones.
Always look for the control points when trying to figure out a competitive landscape. When IT vendors in the systems racket (they weren't even called servers back then) had their own chips and operating systems - and usually multiple operating systems on the same platform - something like a Unix standard could at least be proposed because the lock-in was deeper down in the stack, at the instruction level inside the chip and in the legacy applications. Under those conditions, you could get vendors to agree on a set of APIs to make applications more portable - and mainly because the software companies that drove your system sales were insisting on it and the IT shops that bought their code were as well.
But today, the lock-in when it comes to servers is embodied in all the stuff that wraps around the x64 platform – the form factor of the blade server, the preferred hypervisors and operating systems on the platform, and the system management tools that are tuned for the box. The lock-in is weaker, but this is precisely the kind of thing that server makers are not going to want to part with. It's all that they have left. Ditto for server hypervisor vendors, who are now behaving just like physical server makers from days gone by, with their incompatible management consoles, incompatible VM formats, and so on. They not only want lock-in; their business models depend on putting off standards as long as possible.
This, of course, puts them in direct contention with the ODCA. And perhaps with some of the most powerful CIOs in the world, who want exactly what Intel is billing as "Cloud Independence Day." These CIOs are lost in their server stacks and application silos and they cannot do their jobs of doing more IT for less money each year with calcified systems.
They want exactly what Skaugen said they want: federated private and public clouds that allow workloads to move seamlessly between the two, and they don't want to have to worry about security issues and vendor lock-in. They most certainly want a lot more automation in their data centers, and they want their storage pools and data centers to manage themselves as conditions change, just like our bodies do to keep us alive. In fact, I would argue that the job of managing pools of virtual servers with myriad virtual machines and various policies governing energy consumption and utility pricing for infrastructure - and someday software - is so complex that it can only work properly if it is automated.
That means fewer expenses for managing the infrastructure (sorry, system administrators, but you are fired again) while increasing flexibility. The other Intel idea of client-aware services being pumped out of clouds is interesting, too. Why not have the best rendering of an application, depending on your device and using back-end resources in the data center if you have a crap graphics card?
But wanting things to be standards doesn't make them standards. If the CIOs representing something more like half of the hardware and software spending get behind the ODCA effort, then it may actually get some teeth and be able to compel some standards for cloud interoperability like Intel is suggesting. And the only benefit Intel will get out of it will be the inside dope as things are being discussed by ODCA members. Every standard that is good for Intel will be just as good for Opteron, Sparc, Power, and even ARM. ®
"We never got a Unix standard"
Well, not apart from X/Open, the Single UNIX Specification, XPG4, POSIX, SPEC1170, and such.
Of course Microsoft and their network of certified Microsoft dependent "business partners" around the world made sure that these vendor-independent open standards were never allowed to be effectively used in fair and open procurement processes.
Oh, and it's not x64, it's AMD64. Intel invented its own 64-bit architecture, IA64, which it touted as "industry standard" 64-bit computing. When AMD64 came out and IA64 inevitably failed to dominate, Intel had to rapidly clone AMD's 64-bit architecture. So it's not x64, it's AMD64. If that makes you puke too much, call it x86-64. But it's not x64. Other 64-bit architectures are available.
No love for software hippies?
'Whenever someone starts waving "standards," it is always a prelude to war.' - are you sure about the 'always' here? I mean, there are software hippies like the nonprofit Apache and Mozilla people, and they both seem to be pretty keen on standards.
What point? These were not multiple conflicting standards; really, each built on the last, becoming more specific in defining things the previous spec did not. UNIX is actually quite well defined, and has been for decades.
Secondly, I must laugh at Windows on ARM. Back when Microsoft *did* port NT to other platforms, those platforms were quite second-class citizens. How many companies do you think will port Windows software to another platform now? And if the ARM had an x86 emulator, how fast do you think it'd be? Running ARM Linux will be the way to go. I've run Linux distros in the past on PowerPC, PA-RISC, Alpha, and MIPS, and even on a desktop, everything was there except Flash (and gnash might work well enough for that). Server? Your software will all be there, and native - several distros have ARM ports now, and if ARM servers start shifting, it just won't be a big undertaking for the other distros to bring themselves up on ARM. Since ARM is already supported by the kernel, libc, and compiler, it's just a matter of recompiling everything, which they already do regularly anyway. And apparently there are now ARM hypervisors as well.
Anyhow... let's see how this goes. Will it persuade these VM companies to be truly interoperable, or will it produce just yet another virtual machine disk format, without specifying enough APIs and such to actually allow moving a VM from one product to the next without programming for each one specifically? Or will nothing come of it? Time will tell, I guess.
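For what it's worth, the DMTF's Open Virtualization Format (OVF) shows how far today's packaging-level standards go: it defines a portable envelope around a VM's disks and declared hardware, but not the management APIs the commenter is asking about. A minimal, hypothetical descriptor might look like this (the file names, IDs, and disk size are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch of an OVF 1.0 descriptor; all identifiers are made up. -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <!-- The disk image shipped alongside this descriptor -->
    <File ovf:id="file1" ovf:href="webserver-disk1.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks used by the VM</Info>
    <Disk ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:capacity="8589934592"/>
  </DiskSection>
  <VirtualSystem ovf:id="webserver">
    <Info>A single virtual machine</Info>
    <!-- CPU and memory requirements would go in a VirtualHardwareSection
         here; how a hypervisor deploys, migrates, or manages the running
         VM is out of the standard's scope. -->
  </VirtualSystem>
</Envelope>
```

The envelope describes what a VM is, not how to operate it - which is exactly the gap between a shared disk format and real interoperability.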