Unix in the UK: Mission critical!
Report shows cost worries among IT managers
Hewlett-Packard has blades on the brain for both "industry standard" and "mission critical" servers, but IT managers in the United Kingdom seem to be more worried about the cost of their mission critical platforms, generally Unix boxes, according to a report released by Coleman Parkes Research.
HP and its public relations firm, Burson-Marsteller, commissioned Coleman Parkes, a UK-based IT research firm, to chat up CIOs and other techies running data centers in the country and find out what they were and were not worried about when it came to their mission critical systems.
("Mission critical" is the term that HP and other vendors apply to their midrange and high-end Unix or proprietary machines, while "industry standard" is the term that Compaq, and then HP and Dell, use to describe x64-based machinery. Depending on how large your company is and what your data center looks like, a rack of so-called industry standard servers can be your mission critical boxes. So the terms are not as useful as HP's marketeers think.)
While HP didn't say this, the Coleman Parkes study was done to gather information to help the server maker push its new line of Itanium-based Integrity blade servers, which were announced on April 27. HP has been talking about its strategy to "blade everything" for five years now, and with the quad-core "Tukwila" Itanium 9300 machines, blades are your only option, except for the two-socket rx2800 i2 rack server.
HP has a snap-together Integrity BL i2 blade lineup that offers two, four, or eight sockets in a one, two, or four blade setup, as well as the Superdome 2, which crams eight two-socket Tukwila blade servers into a modified (and 18U high) BladeSystem c7000 chassis. The BL blades have been shipping for the past month, and the Superdome 2 machines, which sport HP's sx3000 chipset, will start shipping in October.
For all the talk about power, cooling, and standardized components, what IT managers are worried about is money. The Coleman Parkes study had IT managers rate the key attributes of their mission critical systems when making buying or retention decisions.
Predictably, since El Reg readers live in the real world, you guessed it: 68 per cent of those polled said that the initial cost of the system rated a four (quite important) or five (very important) on the one-through-five scale used in the survey. Total cost of ownership was rated a 4 or 5 by 49 per cent of the IT managers polled, while scalability and agility to meet changing business needs was rated that highly by only 34 per cent, as was streamlined manageability.
Infrastructure standardization was rated a four or five by 33 per cent of those polled. Reducing power and cooling was important at only 17 per cent of sites, and reducing the data center footprint at only 11 per cent. Getting servers installed quickly was rated quite important or very important by 66 per cent of those polled, and maximizing reliability and availability was a concern at 47 per cent of the sites polled.
Coleman Parkes also asked UK IT shops to ponder the future and think about what advancements over the next three years would be interesting to them for their mission critical platforms. A "scale up mission critical blade with a significantly reduced footprint and the power/cooling savings you'd expect of x86 blades" was rated a 4 (quite attractive) or 5 (very attractive) by only 28 per cent of those polled. (Throw in the non-committal 3s and that rises to 60 per cent. But then again, that also means 40 per cent of those polled don't think mission critical blades are even vaguely important.)
Having a single pane of glass to manage both mission critical (meaning RISC/Unix or proprietary midrange) and x64 servers got a four or five rating from only 25 per cent of customers, and half said they either didn't care or really didn't care. Some 38 per cent of those polled did say that converging mission critical and x86 servers with storage and networking infrastructure was a four or five issue, and only 14 per cent rated it a one, meaning not important at all.
So maybe they need blades and they just don't realize it yet.
As a scrub against the ranking data, Coleman Parkes asked some straight yes-no questions. And the irony is that 36 per cent of those UK IT shops polled by Coleman Parkes said that "unified blade architectures will deliver real converged infrastructure solutions" (do people really talk like this?), 38 per cent said that "converged infrastructure is essential to the mission critical systems for this company," and 36 per cent agreed that "a blade-based approach to mission critical systems allows us to add technology simply and efficiently to quickly meet changing demands."
As if converged infrastructure required blades. There is no good reason that converged infrastructure has to mean blades. You could argue that blades, with their lack of cross-vendor standards and the lock-in this engenders, are precisely what customers don't want. But the Coleman Parkes survey did not ask questions about that - or if it did, HP's report, which the company graciously gave to El Reg, did not disclose that data.
The UK IT shops seem to run the gamut from skeptical through to hopeful that blade infrastructure can save them money through "total cost of ownership" reductions. About 35 per cent of the IT shops in the UK poll said implementing Unix on blades for mission critical workloads could cut TCO by up to 10 per cent, and 30 per cent said the reduction would be somewhere between 11 and 20 per cent.
Some five per cent said it could be a TCO reduction of 41 per cent or higher, which seems absurd. But these companies may be using very expensive Unix systems. The mean TCO savings expectation across all the companies polled was 18.9 per cent.
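Mean figures like that are typically derived from banded answers by weighting each band's midpoint by its response share. Here is a minimal sketch of that arithmetic in Python, using the reported shares where the article gives them and made-up shares for the unreported middle bands - this is an illustration of the method, not Coleman Parkes's actual data or calculation:

```python
# Each entry: (band low, band high, share of respondents).
# The 21-30 and 31-40 shares are hypothetical; the open-ended
# "41 per cent or higher" band is capped at 50 to give it a midpoint.
bands = [
    (0, 10, 0.35),   # "up to 10 per cent" (reported)
    (11, 20, 0.30),  # "11 to 20 per cent" (reported)
    (21, 30, 0.20),  # hypothetical
    (31, 40, 0.10),  # hypothetical
    (41, 50, 0.05),  # "41 per cent or higher" (reported), capped at 50
]

def banded_mean(bands):
    """Weighted mean: each band's midpoint weighted by its response share."""
    return sum((lo + hi) / 2 * share for lo, hi, share in bands)

# Roughly 17.3 per cent with these made-up middle-band shares; shifting
# those shares around is how a survey lands on a figure like 18.9.
print(round(banded_mean(bands), 1))
```

The takeaway is that the headline mean is quite sensitive to where the silent middle of the distribution sits, which the published report does not break out.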
The only thing that most of the UK IT shops (77 per cent of those polled) agreed on was that they wanted mission critical systems with reliability and scalability, but built using industry standard components. Now, "industry standard components" is an intentionally vague term, and it could mean building a Unix box on x64 processors, or it could mean doing what HP is doing, which is shoehorning Itanium 9300 processors into its BladeSystem boxes and then lashing them together to create scalable systems.
There's nothing wrong with this, unless customers really want x64 blades with Itanium-class reliability features. Then HP-UX is in the same position as IBM's Power blades running AIX, since neither company supports its Unix on x64 iron. (As if Unix shops were eager to port their applications anyway; the lack of compatibility and the cost of conversion are what keep them paying a premium for RISC/Itanium iron. It is easier, and perhaps even better, to stay than to make a jump, even if staying looks more costly on paper.)
"unless customers really want x64 blades with Itanium-class reliability features."
And what meaningful RAS features would IA64 in Integrity be able to provide that AMD64 in ProLiant hasn't already been able to for years?
Yes, I completely believe that an IA64 system can deliver better RAS at the application level than an AMD64 system, but as far as I can tell the missing RAS features are in the OS (be it NonStop, VMS, or maybe even HP-UX); they do not appear to be in the chip, or even on the board.
If there is any truth in the IA64 RAS-superiority claim, it'd be lovely to actually see it substantiated with hard evidence at least once before IA64 goes end of life, rather than the usual deal with PR spinners just regurgitating unsubstantiated politically-motivated (not technology-based) HP guff.
I can only speak as a developer
But I'd rather develop on Solaris than HP-UX. HP-UX has all the pitfalls of Unix with none of the benefits.
I'm going to miss firmware that isn't stupid.
I'll especially miss Openboot PROM (and, to a lesser extent, its Macintosh and RS/6000 cousins that lack 'sifting' and 'help'). Whenever I sit down at a machine that:
requires a working graphics device for me to talk to the firmware;
cannot provide a diagnostic console on a serial port unless, perhaps, an OS is booted;
cannot offer an interactive command prompt where I can probe devices, run diagnostic routines, or write arbitrary Forth programs;
I feel like I've stepped into the third world of computing. Where I expected to find a toilet, I found a hole in the ground. Unfortunately, the third world won. I guess people think the hole is more user-friendly and intuitive; running water and toilet paper just confuse users, so why provide them?
This EFI crap is no improvement, though at least it can offer a serial console. What the IEEE 1275 stuff accomplished through simple flexibility, EFI accomplishes through bloat and sprawl. One little DIP package on your board that costs less than $20 - that's all you need! Hell, I'd take Alpha SRM, SGI ARCS, VAX VMB, whatever that serial console for the HPPA machines was called, or even the old sunmon over EFI and this BIOS crap.

Various serial consoles on various machines have saved me a heap of trouble on numerous occasions. On the other hand, I have repeatedly been screwed by the lack of such functionality in most x86 hardware; many a time have I thought, "Now, if this stupid thing had a serial firmware console, I might just be able to find out what the hell is wrong with it. Too bad it has BIOS, so I don't get to do that."