Time to say goodbye to Risc / Itanium Unix?
Depends upon your coffee cup collection
Twenty years ago open systems was the battle cry that shook the absurdly profitable proprietary mainframe and minicomputer markets.
The proliferation of powerful and less costly x64-based systems that can run Solaris, Linux or Windows is making more than a few Unix shops think the unthinkable: migrating away from Unix for their mission-critical workloads.
The dot-com crash was the last hurrah for Unix systems. At the peak, Unix machines accounted for nearly half of worldwide server revenues, shipments ran at about 75,000 units per quarter and revenues were about $2.5bn per quarter.
Hit for six
Shipments and revenues took a hit in the recession that ran from 2001 through 2003, and Unix vendors also suffered in the slump of 2008-9.
Unix now represents about 20 per cent of worldwide server sales. In the first quarter of 2011, according to statistics from Gartner, Risc/Itanium server sales where Unix is the primary operating system started heading back towards 50,000 units and revenues hit $2.6bn. The Unix market has rebounded to where it was after the dot-com crash.
Meanwhile, the Windows and Linux markets have surged, driving about two and a half times as much revenue and accounting for the bulk of the two million or so servers that get shipped in a quarter.
And even as Unix has recovered, the market is adopting Windows and Linux at a feverish pace, predominantly on Xeon and Opteron server platforms.
The same economic and technical pressure that Unix put on proprietary systems is not letting up on Itanium and Risc servers. Organisations look at their data centres and see the big cheques going out to IBM, Oracle and HP, the three remaining commercial Unix suppliers, and start asking a lot of questions about the alternatives.
At the start of the Unix revolution, the processors and system components in Unix machines were leaps and bounds ahead of what x86 server makers could deliver in terms of capacity and reliability. A Unix system was a safe bet.
So safe, in fact, that when the dot-com bubble started to inflate in the late 1990s a Sun Microsystems Sparc server and an Oracle database were the default platforms for Web 1.0 startups – a fact that made Sun and Oracle stinking rich and IBM and HP envious.
HP may have moved from PA-Risc processors to Itanium chips from Intel, but Integrity machines are still seen as more expensive than x64-based alternatives running the same workloads. Itanium is, for all economic purposes, no better (or worse) than a Risc processor.
The pace of change for Risc and Itanium processors has slowed a little in the past decade and enhancements have been coming to x64 processors from Intel and AMD. The x64 processors have grown up, with 64-bit memory addressing and a slew of reliability features that previously were part of only mainframe or Risc/Itanium systems.
The expanded set of machine check architecture features with the Westmere-EX Xeon E7 processors announced in April are an example of such RAS enhancements.
Salute the kernel
Perhaps more importantly, Windows and Linux, the operating systems favored on x64 platforms in the data centre, have also improved greatly over the past decade in terms of reliability and scalability.
The kernels have been tweaked to support SMP and Numa scaling, and have real-time options for applications where low latency is the primary desirable characteristic (think hedge-fund trading systems).
These latest Windows and Linux platforms also sport virtualisation hypervisors that can stand toe-to-toe with virtualisation technologies created more than a decade ago for Unix systems.
The evolution of x64 hardware and the continuing improvements in Windows and Linux mean Unix shops can contemplate moving off their Risc/Itanium iron. But it is the higher prices for hardware, software and support on Unix platforms that actually push a certain percentage of them through the migration.
Over the past two decades, I have developed a rule of thumb for the back-office, transaction-processing workloads that mainframes, proprietary minicomputers and Unix machines have tended to run.
Here's the rule: for any given workload, if a Windows or Linux stack running on an x64 server costs a certain amount, then to drive the same workload on a Risc/Itanium Unix machine will cost roughly twice as much, and a proprietary mid-range or mainframe box with the same capacity will cost twice as much as the Unix alternative.
The relative prices depend on workloads: mainframes do better on batch serial workloads for which they have been tuned, and that is why they drive about $4bn in revenues today.
And Risc/Itanium machines have offered memory, CPU and I/O bandwidth that was not available in x64-based systems, and that is why they continue to drive another $15bn in revenues worldwide.
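Put as a hypothetical back-of-the-envelope calculation, the rule of thumb works out like this; the $250,000 baseline is an invented figure, and the function is mine, not anything from a vendor price list:

```python
# A sketch of the 1x/2x/4x rule of thumb above. The baseline cost is a
# made-up example figure; real prices vary by workload and vendor.
def platform_costs(x64_cost):
    """Return rough relative costs for running the same workload."""
    return {
        "x64 (Windows/Linux)": x64_cost,                  # baseline
        "Risc/Itanium Unix": x64_cost * 2,                # roughly twice as much
        "Proprietary mid-range/mainframe": x64_cost * 4,  # twice the Unix cost
    }

for platform, cost in platform_costs(250_000).items():
    print(f"{platform}: ${cost:,}")
```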
Not only is the hardware more expensive per unit of capacity, but so is the software. Take Oracle's processor core factoring scheme for its database and middleware software, for example. If you go with per-processor core pricing, you count up the cores and then multiply by a scaling factor to come up with the price.
On Oracle's own Sparc T3 processors, the scaling factor is 0.25 per core, which means you pay a quarter of the list price per core; software support for databases and middleware then adds 20 per cent of the licence price per year.
The Sparc Ts have lots of fairly wimpy cores and Oracle is trying to compensate for that. On Sparc64-VI and Sparc64-VII processors and early IBM Power and HP PA-Risc chips, the scaling factor is 0.75 per core, so you get a bit of a price break, and on the new Sparc64-VII+ processors from Fujitsu and Oracle the scaling is 0.5 per core, the same as for Xeon and Opteron processors.
But if you want to use an IBM Power6 or Power7 chip, a System z mainframe or an Intel Itanium 9300 chip, you pay full price per core.
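The core-factor arithmetic can be sketched as follows. The factors are the ones quoted in this article; the $47,500 per-processor list price and the helper function are illustrative assumptions, not Oracle's actual price book:

```python
# Hedged sketch of Oracle's per-core scaling-factor arithmetic as
# described above; list price and support rate are assumed figures.
CORE_FACTORS = {
    "Sparc T3": 0.25,
    "Sparc64-VII+": 0.5,
    "Xeon/Opteron": 0.5,
    "Sparc64-VI/VII": 0.75,
    "Power6/Power7": 1.0,
    "Itanium 9300": 1.0,
    "System z": 1.0,
}

def licence_cost(chip, cores, list_price=47_500, support_rate=0.20):
    """Return (licence cost, annual support cost) for a chip and core count."""
    cost = cores * CORE_FACTORS[chip] * list_price
    return cost, cost * support_rate
```

A 16-core Sparc T3 box therefore licenses as four cores' worth of software, while 16 Power7 cores pay full freight: four times as much for the same core count.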
If you think it should cost the same to run Oracle software on any machine, you are not alone. But IBM would not agree with you.
IBM has its own processor value unit software pricing scheme, which is used for on-premises machines as well as for Amazon's EC2 and IBM's own SmartCloud.
IBM's pricing scheme gives a 50 per cent price break on Opteron and early Xeon processors, and a 30 per cent price break on newer Xeon 5600 and 7500 processors.
However, the fatter Xeon 7500 and E7 processors are priced the same as IBM's own Power6 and Power7 processors, as well as Oracle/Fujitsu UltraSparc and Sparc64 processors and Intel's Itanium chips.
And in many cases, Power6, Power7 and Xeon 7500 and E7 processors have software costing the same per core as IBM's System z machines, which is 20 per cent over the standard price.
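As described here, IBM's scheme boils down to per-core multipliers on a baseline software price. The multipliers below follow the percentages quoted in this article; the baseline per-core figure is an invented illustration, not IBM's actual PVU table:

```python
# Per-core multipliers implied by the price breaks described above;
# the baseline per-core price is a made-up figure for illustration.
PVU_MULTIPLIER = {
    "Opteron/early Xeon": 0.5,     # 50 per cent price break
    "Xeon 5600/7500": 0.7,         # 30 per cent price break
    "Fatter Xeon 7500/E7": 1.0,    # full price
    "Power6/Power7": 1.0,
    "UltraSparc/Sparc64": 1.0,
    "Itanium": 1.0,
    "System z": 1.2,               # 20 per cent over the standard price
}

def ibm_software_cost(chip, cores, baseline_per_core=10_000):
    """Return the software cost for a chip and core count under the scheme above."""
    return cores * PVU_MULTIPLIER[chip] * baseline_per_core
```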
Why do mainframe and Unix vendors charge more for their systems? Because they can. Companies that have coded a trillion lines of mainframe code are not about to save a few million or even tens of millions of dollars on a migration to Unix, Windows or Linux systems and incur billions of dollars of risk.
Organisations that have Unix skills are similarly unwilling to move to a new server architecture and operating system at the same time (although if they are using packaged software and migrating to a new version, this kind of transition can be done less painfully than actually porting home-grown applications from a Unix box to a Windows or Linux system).
The other factor that helps Unix systems persist in the data centre is the competition from the big three system vendors – IBM, Oracle and HP – and between the two big database and middleware providers, Oracle and IBM.
The comparisons outlined above are based on vendor list prices, and where two or more vendors are brought in to compete, prices can drop significantly. Sometimes a discounted Unix system can come down to the same price as an x64-based system of equivalent performance and capacity.
An x64 vendor pushing Windows or Linux alternatives has to go even lower to win the deal, and there just isn't as much room for price cutting. The competition in the Unix space makes it a healthier market, much as the mainframe racket was when Amdahl and Hitachi were grinding away at Big Blue in the 1980s and early 1990s.
The secret is to get a collection of Intel, AMD, Oracle, IBM and HP coffee cups and make sure vendors see that theirs is missing when they come a-calling.
And the most important thing is to move your workloads off Unix systems only when, and if, it makes sense for your company, not for the vendor pushing x64 alternatives.
It is always easier to start software projects on a new platform than to try to move an older system to a new platform. If you want to save money and grief, this is probably the best way to do it.
The easiest thing to do, however, is compute on the platforms you know how to run best. More than any sticker on a shiny new server, that will determine your real costs over the long haul. ®
Is it just me, or...
Does anyone else find the idea of "Linux vs Unix" nonsensical?
In my little world view, "Unix" is a generic term that encompasses a wealth of OS implementations, including AIX, Solaris, IRIX, HP-UX, Mac OS X, BSD and "Linux", amongst others (and, yeah, I get to work with old stuff). None of the above are interchangeable, and they all have strengths and costs and weaknesses... but I submit that the differences between Red Hat, Suse and Ubuntu are not qualitatively different from the differences between Solaris and Red Hat, or Suse and AIX, etc. But throw someone comfortable with any of those into VMS and watch them flounder...
[ The differences change depending on viewpoint: from the perspective of a driver developer, all Linuxen tend towards looking the same, but very different from e.g. AIX; from the perspective of a developer using an X-based toolkit, they all tend to look similar with trivial differences in (e.g.) type faces right up until you get to integration with desktops. Etc. ]
In sum, this article is really not talking about "Unix" vs anything, but proprietary hardware vs commodity hardware. Turns out the former is more expensive but tends towards "better", while the latter is cheaper.
Gosh. Colour me surprised!
X86_64 was always a band-aid
X86 was always the bottom end of the performance range (actual and per watt) - but what it is, is _cheap_.
Power7 boxes might be faster, and so is AIX, but for the cost differential a company can have several X86 boxes and a couple of spares.
Wintel boxes will always be slower until they can remove all the compromises which are required to still boot DOS. There are dedicated X86 systems out there which are a lot better optimised (but the price goes up)
Linux is invariably faster with tuning - the defaults are for a wide range of operations, so I take performance comparisons like this with a bucket of salt (I've achieved speedups of 10-20X or more with appropriate tuning of boxes for the tasks they're performing) unless full details of the configurations and tuning mechanisms are provided.
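A hypothetical example of the sort of per-workload tuning meant here (a sysctl fragment for a network-heavy box); the knobs and values are illustrative, not recommendations, since what actually helps depends entirely on the workload:

```
# /etc/sysctl.conf fragment (illustrative values only)
net.core.rmem_max = 16777216        # larger socket receive buffers
net.core.wmem_max = 16777216        # larger socket send buffers
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
vm.swappiness = 10                  # prefer keeping the working set in RAM
```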
The _BIG_ advantage of Linux is portability. Source code written on X86 should compile and run happily on MIPS/ARM/Power/Big Iron/Itanic/Whatever comes along.
Linux may not be Unix, but it's virtually identical in every respect that matters - and there is more/cheaper support than there is for the older *nixes. Because of that the market is really Win/*nix/VMS/Big Iron - and yes I still have VMS systems (brand new) in $orkplace for specific tasks because they're best suited to the task.
Unix vs Linux is a strawman unless you start breaking the *nixes up into their component flavours and assess competition between them.
I'm a happy Linux admin, but I also admin other Unixes, VMS and Windows(when I'm forced to). The point is that one should choose the software for the task then the hardware and OS it best runs on. Anything else is the tail wagging the dog.
Right now I'm looking forward to the arrival of MIPS and ARM based systems for testing. If they work as I expect them to then we'll be achieving far higher throughputs per rack with far lower power consumption figures - speed isn't everything.
Mine's the one with the fondleslab in the pocket, setup for remote X work.
Throughput is key
I've done some benchmarks on a couple of systems recently.
The first was a Power 7 beast (there is no other word for it) running AIX & Linux
The other was a quad socket Intel Xeon (latest models) rig running Linux
The application software was set up identically on the three configs.
The Power 7/AIX managed 36000 messages per second.
Running Linux on the same hardware gave us 26500/second
The intel machine managed a paltry 14,240/second.
Sure, the Power 7/AIX combo is expensive, but the X86 world (in these particular circumstances) lags well behind the RISC system.
Then add the LPAR management in the Power range into the mix and it is one hell of a solution.
IMHO, the X86 architecture is well past its sell-by date. Intel recognised this. Itanic was not the answer. Simply die-shrinking X86 to improve performance will not make up for the obvious shortcomings in the CPU architecture.