HP dons blades to scale Superdome 2
Mountain out of Tukwila molehills
After several months of not talking about it in the wake of Intel's February launch of the "Tukwila" quad-core Itanium 9300 processors, Hewlett-Packard is finally describing what those machines will be.
At the HP Technology@Work 2010 conference, which is being held from April 26 through 29 in Frankfurt, Germany, HP is launching new snap-together Itanium-based Integrity blade servers that offer from two to eight sockets in a single system image. The company is also divulging plans for an Integrity rack server, designed to replace several machines in the current Itanium 9000 and 9100 lineup, and of course, it's raising the curtain just a little bit on the high-end Integrity Superdome 2 server.
The blades are available now, but the rack machine and the Superdome 2 are slated for "later this year," according to Lorraine Bartlett, vice president of marketing, strategy and operations for the Business Critical Systems division at HP. (BCS is part of the Enterprise Servers, Storage, and Networking group). With the Tukwila chips originally slated for 2007, then 2008, then 2009, and then finally pushed to early 2010 as one technology after another was changed around the chip, waiting a few extra months to actually get a full understanding of the Superdome 2 machines is probably not going to kill HP-UX shops that are dependent on these platforms to scale up their workloads. But it may drive them to drink. Again.
If you were expecting spec sheets, data sheets, and loads of information on all the new Tukwila systems, you are bound to be disappointed. HP is just not providing this information yet, even on machines that are supposed to be available starting today. The briefing deck was one of the thinnest I have seen in more than two decades of watching systems, for what many (myself included) consider such an important server announcement, one on which somewhere between $4bn and $5bn of server revenues, and heaven only knows how many more billions in storage, software, and services sales, also depend.
Here's the family photo of the new Tukwila machines from HP:
Even though the image is smaller, that machine on the left is the Superdome 2, the kicker to the current high-end 64-socket Integrity machines. Across the top are the Integrity BL860c i2, BL870c i2, and BL890c i2, which are really just BL860c i2 blades that snap together to create ever larger SMPs. In the bottom middle is the rx2800 i2 rack-mounted server. And off to the right is a BladeSystem Matrix setup using the new Integrity blades and running HP-UX 11i v3.
The new Tukwila-based Integrity blade servers are based on Intel's "Boxboro" chipset, the same one that supports "Nehalem-EX" Xeon 7500 processors. (Machines such as the ProLiant DL580 and DL980 that El Reg told you about earlier this month but which are not being formally announced today and may not be for some time yet). Kirk Bresniker, vice president and chief technologist for the BCS unit and an HP Fellow to boot, says that by putting the Blade Link SMP scalability interconnect on the front of the blade, HP was able to allow the Integrity blades to slide into the same c3000 and c7000 chassis it already uses for x64 and Itanium blades.
The Tukwila blades are full-height blades, like the two-socket and four-socket Itanium 9000/9100 machines they replace. The two-socket Tukwila blade offers about three times the compute capacity of the prior BL860c Itanium blades. By the way, as far as I know you can't make a six-socket SMP box using Blade Link (there isn't a BL880c i2, although the product naming leaves room for one). And based on internal benchmarks HP ran back in February, the company reckons that the new Integrity BL i2 lineup offers up to nine times the oomph in half the space of comparable earlier Integrity rack-mounted systems. In fact, these blades will be replacing the rack-mounted 4U and 7U SMP machines HP sold with prior generations of Integrity iron.
If you haven't gotten the message that HP is all about blades, you will by the end of this story.
The new Tukwila BL i2 blades support the homegrown Virtual Connect Flex-10 virtual networking for servers and storage, which is popular on HP's ProLiant x64-based BladeSystems. Integrity Virtual Machines, which is HP's home-cooked virtualization technology for Itanium machines, is also supported on the new boxes, as is HP-UX 11i v3. Presumably, HP will have nice things to say about OpenVMS and NonStop on these machines at some point, but it didn't in any of the materials I have laid eyes on.
The remaining feeds and speeds for the new Integrity blades are a mystery because HP didn't have spec sheets ready as El Reg went to press. But in an ironic shift among server makers, the prices for base machines are available. A BL860c i2 blade costs $6,490 in a base configuration; a BL870c i2 costs $13,970; and a BL890c i2 costs $30,935.
Very little was divulged about the rx2800 i2 rack server besides its name and the fact that it is being put into the field to appease customers who are just not quite ready for blades, like remote offices with modest compute needs. The rx2800, says Bresniker, supports 24 DDR3 memory slots, compared to eight in the rx2600 entry machine it replaces, and he adds that the new eight-core box crams the performance of the rx6600 (an eight-core, four-socket machine weighing in at 7U) into a 2U space. That's more than a factor-of-three improvement in compute density. From the outside, the rx2800 i2 looks more or less like the rx2600 it replaces, with room for eight 2.5-inch disks mounted in the front.
Mister TheRealStory strikes again
"But if you insist on including them it makes Jesper's trying to compare the 1.9GHz P5 with the top-end P6 even funnier!"
Man, you are full of it. You keep insisting that a FUD-laden misquote of an IBM press release on HP's TheRealStory is more true than the actual press release.
No matter how you twist it, no matter how much you try to make fun of others, your source of wisdom has been exposed as HP's TheRealStory, which says a lot about the true level of your IT skills.
Welcome to the year 2010.
"No, I think that some salespeople (and outsourcers) would be a lot happier .... Hey, I wonder if EDS do POCs....!"
Customers are normally very happy when I leave a project. I did a project here for a small customer, which just finished the other day, where they ended up with two to three times the capacity without paying a single red cent more per month; all they had to do was buy more RAM. Very simple: I just redesigned the solution, exploiting the capabilities of the system.
All well documented, with design documents, change records, etc. Sure, they wanted to see a test first, so we made the changes to their development environment first. And when they then rolled out their whole new SAP release and increased the number of users by a factor of three, one change, without any downtime, was all it took: they were able to run three times the users simply by increasing the utilization of the hardware through overprovisioning.
"Yeah, I know, I've done contract work in Denmark ... I expect EDS is going to be happily meeting and beating you in many deals to come!"
Well, I don't recall having met you, but then again I've bumped into so many 'IT cowboys' in my time that only the really sharp ones have made an impression.
As for EDS, they have tried to pick me up several times; sure, the money offered was good. But they don't have an operation here in Denmark, and I don't want to have to go to Germany or India to do projects. I work where I work because I can go and talk to people in person, because the people we have here are highly skilled, and because I want to be with my family.
"Worked on hp-ux long before AIX, and CUOD support was not available with HACMP until November 2008 (well, announced then, I'm told it wasn't actually working until a lot later). Your feature sell just failed, try again!"
Again... you simply don't get it: the hypervisor will mask that for you. HACMP will never know; CUOD will be done to the shared pool. Man, it's like explaining colors to a guy who sees everything in black and white.
Sure, if you ran AIX on bare metal on POWER back when that was done, you had the problem you are talking about. But honestly, who in their right mind has done that for years? Come on, keep up with the technological development. You sound like the old mainframers here who keep talking about punch cards.
"Less than five minutes on swinstall, and then can be run by single line commands or via SAM or the web interface SMH. You can even do the swinstall work via SMH."
*CACKLE* Again the 'IT cowboy' favorite remark: "Less than five minutes on...". Again, you don't get it. Now, that is also one of the things we do, and actually make quite good money on: cleaning up people's systems after they have been run by 'less than five minutes' consultant cowboys. You would be surprised how many customers come to us saying, please help us clean up our systems. Usually after they have had a crash and have serious trouble recovering, because nothing is documented, because too many 'less than five minutes' cowboys have been on the system making their expert recommendations.
Do you know what ITIL is? Change management, perhaps? CMDB?
"No, you completely failed to explain any ... the IBM method, one power issue affects everything in the server. "
I have explained it; you are the one who doesn't get it.
Let me try again: the hypervisor provides an abstraction layer between the physical machine and the virtual machines, so you don't know which physical processors your virtual machine is running on, and you don't know which memory modules it uses.
All the nasty hardware stuff is hidden by the hypervisor, and the I/O by the VIO servers.
So a hardware failure, for example a processor failure, will not have any impact on my virtual machines at all, if I set things up right. If, on the other hand, I do max out the sum of entitled capacity, or have to deallocate a whole processor chip so that the hypervisor cannot fulfil its entitled-capacity guarantees, then the hypervisor will take the least important virtual machine and give it the hammer, keeping production systems running without any danger to SLAs or customer data.
And what you don't get is that I run perhaps 10-30 virtual machines, using somewhere between 40 and 50 virtual processors, on a machine with 16 physical cores, like the Power 570. On our old POWER5/5+ boxes we are perhaps running 30-60 virtual machines, using 120 processors' worth of CPU power.
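The overcommit arithmetic being described can be sketched roughly like this. This is a toy illustration of the entitled-capacity idea, not PowerVM's actual hypervisor algorithm; all names, numbers, and the priority scheme are invented for the example:

```python
# Toy model of entitled-capacity overcommit: virtual processors may far
# exceed physical cores, as long as the summed *entitlements* still fit.
# Entitlements are tracked in tenths of a core to keep arithmetic exact.
# Illustrative only -- not the real PowerVM hypervisor logic.

def check_pool(physical_cores, vms):
    """vms: list of (name, virtual_cpus, entitled_tenths, priority)."""
    total_vcpus = sum(v[1] for v in vms)
    total_entitled = sum(v[2] for v in vms)
    return {
        "overcommit_ratio": total_vcpus / physical_cores,
        "entitlement_ok": total_entitled <= physical_cores * 10,
    }

def handle_core_failure(physical_cores, vms, failed_cores):
    """After losing cores, shed the lowest-priority VMs until the
    remaining entitlements fit again (the 'hammer' described above)."""
    remaining_tenths = (physical_cores - failed_cores) * 10
    survivors = sorted(vms, key=lambda v: v[3], reverse=True)  # high priority first
    kept, used = [], 0
    for vm in survivors:
        if used + vm[2] <= remaining_tenths:
            kept.append(vm)
            used += vm[2]
    return kept

# 20 VMs with 50 virtual CPUs on a 16-core box; entitlements sum to 14 cores
vms = [("prod-%d" % i, 3, 10, 10) for i in range(10)] + \
      [("dev-%d" % i, 2, 4, 1) for i in range(10)]
```

With this pool, `check_pool(16, vms)` reports an overcommit ratio of about 3x with entitlements still honoured, and losing four cores causes only the low-priority dev partitions to be shed while every production VM survives.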
And you seem to forget that POWER hardware has even more hot-swappable and redundant parts than your favorite HP Itanium servers; I mean, a POWER7 box like the Power 780 can even hot-swap the system clock card.
"And seeing as IBM love chucking non-redundant and non-discrete components all over their designs (remember those old IBM blade backplanes?), it is a problem just waiting to strike."
Ehh... have you looked inside an HP blade chassis backplane recently? Blades are generally crap, no matter the vendor.
"Not surprised you didn't know that. After all, you live in Denmark and have to send out for consultants with hp knowledge. And Sun, and IBM. Actually, do you have any skills in Denmark?"
Actually, I am so fortunate that most of my Unix sysadmins have at least a bachelor's degree in CS, and not one of those you buy over the internet. Again, one of the reasons I like to work here: people actually have skills and know what they are talking about. It's not all hot air.
"Not the same as what the combination of IVM, PRM and WLM can offer, which allows you to avoid having to move workloads between servers. And they all work and integrate together with hp's monitoring and reporting software, not like the hodgepodge of IBM tools."
*CACKLE* Yeah, right. Just buy the whole HP infrastructure solution pack... yeah... Let's see: we need to architect, set up, and manage vPars, nPars, TiCAP, IVM, PRM, and WLM to accomplish what the hypervisor does out of the box. Sure, you also have to architect, set up, and manage the hypervisor, and to be honest the hypervisor doesn't yet do cross-hardware workload management.
OK, I get it now: you like the HP Itanium solution because it is your meal ticket; if it wasn't this complex, you would have less work. Now, that is what you don't like about an AIX/PowerVM/POWER solution: it's simple and easy to manage.
With regard to the whole quote from HP's TheRealStory: the POWER releases, counting from POWER4 forward, work like Intel's x86 tick-tock release strategy. So I couldn't care less what you say. You are just plain... wrong.
Amazing... you really will go to great lengths to defend an HP marketing site... amazing... no wonder people don't like you here. But again, your arguments are flawed and, to be honest, not in touch with reality. I am actually a bit shocked. It's very simple: the HP marketing site is wrong. Nobody besides you and HP (well, and Sun marketing) counts POWERx+ processors as a new generation. Even Wikipedia got it right: http://en.wikipedia.org/wiki/IBM_POWER
And simply just repeating something that is wrong doesn't make it right.
"LOL!! You even try and compare the oldest and slowest P5 chips to the fastest P6 to try and make the P6 look good! That's just dishonest."
No, it is what is mentioned in the IBM press release. The first released POWER6 was the 4.7GHz part in the Power 570; the fastest POWER5 was the 1.9GHz part. And I even picked the same physical machine. You can twist and turn, but you are wrong. And hey, you have also clearly demonstrated that you will keep pushing something you can clearly see is wrong, just for... well... whatever makes you tick.
"So far you haven't debunked anything, just shown us all why outsourcers and other vendor puppets shouldn't be trusted."
Well, we all know who the vendor puppet is here. The only quotes you can make are from HP marketing sites. That is all you have, besides denying clear facts. It is, in fact, rather pathetic. Matt... people are laughing at you, not with you.
"I predict an upturn in the demand for POCs in Denmark if you carry on posting your "debunkings". I can spare a few weeks this year if you need someone with actual tech knowledge to come over to do the POCs for you."
Well, I love POCs; I've done a lot of those over the last 20 years, in roles ranging from tech guy through architect to technical project manager. But I've cut down on them. Being a family man now doesn't really allow me to spend one to two months somewhere in the US, Ireland, or even other parts of Denmark three to four times a year.
And no thanks, I don't like using 'cowboys' for POCs. I know your type; I've worked with many of them, and you might even say I was one myself 15-20 years ago. But today I am the wiser man, and I have no use for people who can't or won't admit they are wrong. Nothing wrong with making errors; we all do. But I ran a POC as the technically responsible person, what, seven years ago, where all of the approximately 20 people I got to execute the project were cowboys from all over the world. 80 per cent of them had nowhere near the skills they should have had, and had been brought in for, and half of them would not tell the whole story when they had a delivery, and would deliver undocumented. So a colleague and I ended up having to do most of the work ourselves; I ended up having to be an Oracle, SAP Basis, C++ developer, AIX, Alpha/POWER, and EMC DMX specialist, and had to write the whole POC report myself. After that I have never used cowboys unless I either know them or can verify their skills.
I like to win, and I do 89-90 per cent of the time, because I am very good at what I do, but also because I pick the right tools and the right people to work with.
More rubbish from Matt
"But if you want to compare to Power5 it's even worse - best P5 was 2.2GHz, P6 is 4.7GHz"
You can't even get this right. The best P5+ was 2.3GHz and the best P6 was 5GHz.
"gain per core is still only 41%"
What is the gain per core for Tukwila? The only benchmark HP has released so far says it is 13.8 per cent (TPC-H@1000GB). That is comparing the late-2010 Superdome 2 (it is not available yet) against a late-2007 Montvale Superdome. Talk about a massive speed improvement in three years: clock speed went from 1.6GHz to a massive 1.66GHz. I am awed by the great advances by Intel and HP.
And I am not even going to comment much on the fact that Itanium reached 1.5GHz in 2003, and seven years later it is now pushing along at 1.66GHz.
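For what it's worth, the clock-speed arithmetic behind those jabs works out as follows, a quick sanity check using only the figures quoted above:

```python
# Quick check of the clock-speed figures quoted above.

def pct_gain(old_ghz, new_ghz):
    """Percentage increase from old_ghz to new_ghz."""
    return (new_ghz - old_ghz) / old_ghz * 100

montvale = 1.60       # GHz, late-2007 Superdome part
tukwila = 1.66        # GHz, late-2010 Superdome 2 part
itanium_2003 = 1.50   # GHz, Itanium clock reached in 2003

print(f"2007 -> 2010 clock gain: {pct_gain(montvale, tukwila):.2f}%")
print(f"2003 -> 2010 clock gain: {pct_gain(itanium_2003, tukwila):.2f}%")
```

That is a clock uplift of under four per cent across the Montvale-to-Tukwila generation, and barely over ten per cent across seven years, which is the point being made.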