HP: last Itanium man standing

Nehalem lives the dream

Comment Make no mistake: If Hewlett-Packard had not coerced chip maker Intel into making Itanium into something it never should have been, the server business would have arrived at its current point a hell of a lot sooner than it did. But the flip side is that a whole slew of chip innovation outside of Intel might never have happened.

In this respect - and some might even be so cynical as to argue in only this respect - Intel's and HP's troubled chip development marriage and its resulting Itanium love child can be deemed a success. Without the threat of Itanium, which was never really fulfilled, perhaps IBM would have never knuckled down and put some money into decent Power chip development, which allowed the company to go from joke to dominance in the Unix server racket.

And without Intel relegating 64-bit processing to Itaniums and leaving Xeons at 32-bits, there would not have been a gap into which Advanced Micro Devices could leap and create the Opterons. Those chips are the inspiration for the Nehalem family of processors, which have put Intel back in the driver's seat when it comes to server CPUs and which give Intel a chance, in the not-too-distant future, to enjoy the kind of dominance in the data center that it has on the desktop.

None of us has the energy or the time to go over the multitude of sins that Intel and HP committed with the Itanium, ranging from the hubris of changing the instruction set, to the stupidity of having too aggressive a delivery schedule in the early years, to all but ignoring Itanium in the later years. But to understand what will happen to Itanium - and what will not happen to it - we have to review a little history.

The whole Itanium plan was predicated on all the major server vendors porting their operating systems to the platform, and as the 1990s came to a close and Itanium was still a threat rather than a disappointment, all the major OS makers swore their fealty to Itanium. That included IBM with AIX, Sun Microsystems with Solaris, Hewlett-Packard with HP-UX, Santa Cruz Operation with OpenServer, Compaq with OpenVMS and Tru64 Unix, Microsoft with Windows, various emerging Linux players with their revs of that open source platform, and myriad proprietary and mainframe platforms (many of which help prop up the Itanium chip today).

The enthusiasm was more fear than anything else - fear of crossing Intel and suffering the consequences in the volume x86 business, and fear of being left out of a big opportunity. And thus the early forecasts, which had Itanium server sales kissing $40bn in 2001, look ridiculous in hindsight.

HP is finally getting around to launching machines based on the quad-core "Tukwila" Itanium 9300 processors, which made their debut back in early February with only HP and Super Micro committing publicly to using the new chip in systems. While none of the shippers of prior Itanium systems would bad-mouth the Tukwilas, the unwillingness of Unisys, Fujitsu, Silicon Graphics, Bull, NEC, and Hitachi to even admit they were working on Tukwila platforms was an astounding reversal of what server makers were saying back in 1996 and 1997, when it looked like Itanium would take over the world. And maybe Uranus, too.

IBM and Dell pulled the plug on their Itanium lines after only a few years, giving them about as much marketing effort as most politicians give campaign finance reform. Which makes HP the John McCain of the Itanium world, I suppose, and perhaps a prisoner of war in a prison camp it helped construct.

But hardware sales are driven by software sales, and software vendors don't write operating systems or application software for chips that don't look like they are going to hit their volumes or provide lots of margin to cover the work, as do mainframes and other high-end proprietary or Unix boxes.

So by the turn of the millennium, the IBM-SCO-Sequent triumvirate that was supposed to get the Monterey/64 converged AIX-OpenServer Unix running on Itanium got the work done and then pulled the plug, and similarly Sun Microsystems, which completed an Itanium port of Solaris, sat on it. Microsoft supported Itanium for many years with Windows Server, but the company is always looking for a way to cut back on platforms when it comes to Windows Server. (Remember how Windows NT Server was supposed to run on x86, MIPS, Alpha, and Power platforms when it was launched in 1994?)

Microsoft relegated the Itanium version of Windows to a database engine with Windows Server 2008, and earlier this month Microsoft said that enough was enough and that Windows Server 2008 R2 was the last release of its operating system that would be supported on Itanium. El Reg broke the story late last year that Red Hat was going to kill off Itanium chip support in the Enterprise Linux 6 distro.

That leaves HP's HP-UX, OpenVMS, and NonStop operating systems, Novell's SUSE Linux Enterprise Server 11, and a handful of proprietary OSes from Europe and Japan on Itanium chips. With the exception of SLES, these customers have no alternatives, any more than IBM and Unisys mainframe or IBM OS/400 shops do. Which means as long as customers have difficulty in moving their applications and as long as Intel is willing to dedicate enough capacity to crank out the 400,000 or so Itanium chips needed (and can make money on the $1.5bn to $2.5bn in estimated annual Itanium chip revenue) to satisfy these customers, then Itanium will be around through the next eight-core "Poulson" and perhaps 16-core "Kittson" Itanium generations.

Assuming Itanium is on a two-year cycle, there will be Itanium processors available for at least six to seven years, bolstering an Itanium server business that did around $5bn in sales in 2008, according to the Itanium Solutions Alliance. The analysts at Gartner have been cited as saying they believe Itanium-based machines comprised 9.3 per cent of total worldwide server revenues in 2009, which works out to $4.07bn. That's an 18.5 per cent decline, roughly in line with an overall server market that fell by 18.3 per cent to $43.1bn worldwide, according to Gartner.
