Peeling back the skins on IBM's Flex System iron
More Power – and x86 – to you
Analysis IBM announced the PureSystems converged systems last week, mashing up servers, storage, networking, and systems software into a ball of self-managing cloudiness. What the launch did not talk a lot about is the underlying Flex System hardware which is at the heart of the PureFlex and PureApplication machines.
So let's do that now.
First, let's take a look at the Flex System chassis, which is 10U high and a full rack deep. About two-thirds of its depth in the front of the chassis is for the server and storage nodes and the back one-third of the space is for fans, power supplies, and switching. The compute and storage are separated from the switching, power, and cooling by a midplane, which everything links to in order to lash the components together. In this regard, the Flex System is just like a BladeCenter blade server chassis. But this time around, the layout of the machinery is better for real-world workloads and the peripheral expansion they require.
The 10U chassis has a total of 14 bays of node capacity, each a non-standard 2.5 inches high compared to the standard 1.75 inches (1U) of a rack server. The key thing is that this height on a horizontally oriented compute node is roughly twice the width of a single-width BladeCenter blade server. That means you can put fatter heat sinks, taller memory, and generally larger components into the Flex System compute node than you could get onto the BladeCenter blade. To be fair, the BladeCenter blade was quite a bit taller, at 9U in height, but you couldn't really make constructive use of that height. As the world has figured out in the past decade, it is much easier to make a server that is half as wide as a traditional rack than it is to make one that is almost as wide and half as tall. And it is much easier to cool the fatter, half-width node. That is why Cisco Systems, Hewlett-Packard, Dell, and others build their super-dense servers in this manner.
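The bay count falls straight out of the geometry. As a quick sanity check (a sketch using only the nominal figures quoted above, plus the standard 1.75-inch rack unit):

```python
# Rough geometry check for the Flex System chassis. Figures are the
# nominal dimensions quoted in the text, not exact mechanical specs.
RACK_UNIT_IN = 1.75       # one standard rack unit, in inches
CHASSIS_U = 10            # chassis height in rack units
NODE_HEIGHT_IN = 2.5      # height of one horizontal node bay

chassis_height_in = CHASSIS_U * RACK_UNIT_IN        # 17.5 inches
rows_of_bays = chassis_height_in / NODE_HEIGHT_IN   # 7 rows of bays
half_wide_bays = int(rows_of_bays) * 2              # two half-wide nodes per row

print(chassis_height_in, int(rows_of_bays), half_wide_bays)  # 17.5 7 14
```

Seven rows of two half-wide bays each gives you the 14 node bays in the chassis.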
And while the iDataPlex machines from IBM were clever in that they had normal heights, were half as deep, and were modular, like the Flex System design, the iDataPlex racks were not standard and therefore did not lay out like other gear in the data center. (Instead of one rack with 42 servers in a 42U rack, you had 84 servers in a half-deep rack, with two racks side-by-side.) This creates problems with hot and cold aisles, among other things. The PureFlex System rack is a normal 42U rack with some tweaks to help it play nicely with the Flex System chassis.
Here is the front view of the Flex System chassis, loaded up with a mix of half-wide and full-wide server nodes:
IBM's Flex System chassis, front view
The chassis has room for 14 half-wide, single-bay server nodes or seven full-wide, two-bay server nodes. You will eventually be able to put four-bay server nodes and four-bay storage nodes inside the box, with the nodes plugging into the midplane, or you can continue to use external Storwize V7000 NAS arrays if you like that better. While a single PureFlex System can span four racks of machines and up to 16 chassis in a single management domain, you need to leave at least one slot in one of those racks dedicated to the Flex System Manager appliance, which does higher-level management of servers, storage, and networking across those racks.
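Those figures put a rough ceiling on the size of one management domain. A back-of-the-envelope sketch, using only the counts quoted above (not an official IBM sizing limit):

```python
# Back-of-the-envelope node capacity of a single PureFlex management
# domain, from the figures quoted in the text.
BAYS_PER_CHASSIS = 14       # half-wide bays per 10U chassis
MAX_CHASSIS_PER_DOMAIN = 16 # chassis per management domain, across four racks

total_bays = BAYS_PER_CHASSIS * MAX_CHASSIS_PER_DOMAIN  # 224 bays

# One bay is given over to the Flex System Manager appliance, leaving
# the rest for compute and storage nodes.
usable_half_wide_nodes = total_bays - 1   # 223 half-wide nodes
full_wide_nodes = total_bays // 2         # 112 if everything is double-wide

print(total_bays, usable_half_wide_nodes, full_wide_nodes)  # 224 223 112
```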
Take a look at the back of the Flex System chassis now:
IBM's Flex System chassis, rear view
The idea is to add server and storage nodes in the front from the bottom up, and to add power and cooling modules in the back from the bottom up as well. You can have up to six 2,500-watt power supplies and up to eight 80mm fan units cooling the compute and storage nodes. There are no fans on the nodes at all – just these chassis fans, which pull air in from the front of the chassis, sitting in the cold aisle of the data center, and dump it out into the hot aisle. There are four separate 40mm fans for cooling the switch and chassis management modules (CMMs), which slide into the back of the chassis.
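Six supplies set the power envelope for the whole chassis, but how much of that you can actually draw depends on the redundancy policy you run. A rough sketch of the usual options (the policies and their arithmetic are generic data-center practice, not IBM-published figures):

```python
# Rough chassis power budget from the supply count quoted above.
# The redundancy policies (N+1, N+N) are the generic options; real
# deliverable power also depends on supply derating and input feed.
SUPPLY_WATTS = 2500
MAX_SUPPLIES = 6

raw_capacity_w = SUPPLY_WATTS * MAX_SUPPLIES     # 15,000 W, no redundancy
n_plus_1_w = SUPPLY_WATTS * (MAX_SUPPLIES - 1)   # 12,500 W, one spare supply
n_plus_n_w = SUPPLY_WATTS * (MAX_SUPPLIES // 2)  #  7,500 W, full grid redundancy

print(raw_capacity_w, n_plus_1_w, n_plus_n_w)  # 15000 12500 7500
```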
The CMMs are akin to the service processors on rack servers or the blade management module in a BladeCenter chassis; they take care of the local iron and report up to the Flex System Manager appliance server running inside the rack (or racks). You can add a second CMM for redundancy, and you can also cluster the management appliances for redundancy, too. You can have as many as four I/O modules that slide vertically into the back of the chassis, between the fans, including Ethernet and Fibre Channel switches as well as Ethernet, Fibre Channel, and InfiniBand pass-thru modules. (A pass-thru module is used when you want to link the server nodes to a switch at the top of the rack rather than doing the switching internally in the chassis. It is basically a glorified female-to-female port connector with a big price.)
IBM is using its own Gigabit and 10 Gigabit Ethernet switches (thanks to the acquisition of Blade Network Technologies) and Fibre Channel switches from Brocade and QLogic, with adapters from Emulex and QLogic. It looks like IBM has made its own 14-port InfiniBand switch, which runs at 40Gb/sec (quad data rate, or QDR) speeds and is based on silicon from Mellanox Technologies, with InfiniBand adapters from Mellanox for the server nodes. Here are the mezz card options: two-port QDR InfiniBand, four-port Gigabit Ethernet, four-port 10 Gigabit Ethernet, and two-port 8Gb Fibre Channel. You can also run Fibre Channel over Ethernet on the 10 GE mezz card.
For whatever reason, IBM did not put out a separate announcement letter for the Flex System p260 server node, which is a single-bay, two-socket Power7 server. Here's the glam shot of the p260 node from above:
You can see the two Power7 processor sockets on the left, the main memory in the middle, and the I/O mezzanine cards and power connectors that hook into the midplane on the right. IBM is supporting a four-core Power7 chip running at 3.3GHz, or an eight-core chip running at either 3.2GHz or 3.55GHz, in the machine. Each processor socket has eight memory slots, for a total of 16 across the two sockets – maxing out at 256GB using 16GB DDR3 memory sticks.
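The memory ceiling is simple slot arithmetic. A sketch from the counts quoted above (the p460 doubling is per the description of that node later on):

```python
# Maximum memory on a p260 node, from the slot counts quoted above.
SOCKETS = 2
SLOTS_PER_SOCKET = 8
MAX_DIMM_GB = 16   # largest DDR3 stick supported

total_slots = SOCKETS * SLOTS_PER_SOCKET   # 16 slots
max_memory_gb = total_slots * MAX_DIMM_GB  # 256 GB

# The double-wide p460 is essentially two p260s lashed together,
# so it doubles the sockets, slots, and memory ceiling.
p460_max_memory_gb = max_memory_gb * 2     # 512 GB

print(total_slots, max_memory_gb, p460_max_memory_gb)  # 16 256 512
```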
The cover on the server node has room for two drive bays (that's clever, instead of eating up front space in the node and blocking airflow). You can have two local drives in the node: either two 2.5-inch SAS drives with 300GB, 600GB, or 900GB capacities, or two 1.8-inch solid state drives with 177GB capacity. These local drives slide into brackets on the server node lid and tuck into the low spot above the main memory when the lid closes. The lid has a plug that mates with the SAS port on the motherboard.
One important thing: if you put the local 2.5-inch hard disk drives in, you are limited to very-low-profile DDR3 memory sticks in 4GB or 8GB capacities. If you put in the 1.8-inch SSDs, you have a little bit more clearance and can use the 2GB or 16GB memory sticks, which come only in the taller low-profile form factor. So to get the maximum memory capacity in the node, you need to either forgo local disks or use the SSDs.
The Flex System p460 is essentially two of these p260 nodes put side-by-side on a double-wide tray and linked with an IBM Power7 SMP chipset. (It is not entirely clear where IBM hides this chipset, but it is possible that the Power7 architecture supports glueless connections across four processor sockets.) In any event, you get four sockets with the same Power7 processor options, with twice the memory and twice the mezzanine I/O slots because you have twice the processing.
I am hunting down information to see what the pricing is on these nodes and what their IBM i software tier will be. But generally speaking, Steve Sibley, director of Power Systems servers, says that the performance of the p260 and p460 nodes will fall somewhere between the PS 7XX blade servers and the Power 730 and 740 servers and the bang for the buck will be somewhere in between there as well. The PS 7XX blades were relatively attractively priced, of course, overcompensating maybe just a little bit for the lack of expansion on the blades and the extra cost of the blade chassis and integrated switching.
Next page: Flexing an x86 node
Re: New IBM Blade enclosure or not?
Actually I think there is kind of an underlying movement in the market: a consolidation, and a move towards suppliers offering an (almost) complete vertical solution stack. This is kind of back to the future, to a time when you got your whole IT from a single vendor. This, IMHO, is much more the trend than partnerships between different vendors in the solution stack.
I don't think this is a good thing for those of us who have to procure whole software and hardware stacks. I think we are going to see fewer open standards, less portability, and more vendor lock-in.
If this movement continues, and it is a big IF, there are going to be further consolidations, and to be quite frank HP wouldn't be one of the companies able to buy up other big companies quickly right now. I mean, HP's long-term debt is more than 50% of the current market cap of the company. I know that the debt is in practice deducted from the market cap, but it's still a huge chunk of debt. For comparison, IBM's long-term debt (although larger than HP's in absolute terms) is still less than 15% of the total company cap, around the same percentage as Cisco's and Oracle's.
So again: if, and there are a lot of ifs here, this vertical trend continues, then personally I think HP needs to merge with someone.
Re: New IBM Blade enclosure or not?
"I think you need to go tell SAP that, they have plans for a little something called Sybase. Non-SAP, there are other hp-ux options like PostgreSQL, which is a lot better and cheaper than DB2 even in the full-fat EnterpriseDB form. But I wouldn't expect an IBM troll to know that."
You are really getting desperate. As you probably know, Sybase begged SAP to certify them for SAP applications for 15 years. SAP refused. Now that SAP has bought ASE – which it picked up incidentally while acquiring Sybase's mobile business – they will certify it, but Sybase has about 1.5% of the DB market. ASE was thoroughly beaten by Oracle decades ago. EnterpriseDB was never interested in Itanium until Oracle left HP without a DB partner. Now, after the Oracle situation, EnterpriseDB is supposedly working on an Itanium port. As PostgreSQL is not supported by SAP, Oracle ERP, or any other major application players, it isn't going to be a major deal.
"Well, seeing as IBM Software sell more software licences on hp kit than IBM's own, I'd say IBM Software was the one more dependent on hp."
I am not sure if that is true. If it is, that is like saying that Microsoft is dependent on HP because a bunch of their Server and Windows PC licenses run on HP hardware. HP x86 is commodity. If they were to go away, people would just put the software on some other x86 gear. I don't think HP ProLiant is going to put the screws to WebSphere or Windows Server.
Re: New IBM Blade enclosure or not?
>> Well, as Odyssey is still in the planning stages, it is difficult to determine what it is going to look like in a few years. Nevertheless, the high points of Odyssey are that they are going to unify the Unix and x86 architectures in the same enclosure around a common chassis called HydraLynx.
It might just be in the planning stages, but there's enough publicly available material at their website to correct some of your errors...
HP already offer Unix and x86 in a common enclosure – it's called the BladeSystem c7000, and you can put blades in it running Windows/Linux on x86 and HP-UX/OpenVMS on IA64. That's been in the marketplace as an offering for a good five years, just as IBM have offered Power and x86 in their BladeCenter H enclosure. If you think that is what Odyssey is, you are wrong.
The hardware side of Odyssey (ignoring the software/services and other components) is about:
i) Producing a "scalable x86 blade" similar to the BL860/BL870/BL890 IA64 blades, where you can grow an existing blade from a two-socket to a four-socket to an eight-socket configuration by adding additional blade modules and then combining them using a blade link. This is similar to what IBM do with their p5x0 components, except in a blade enclosure and without all those nasty/messy interconnect cables. This is HydraLynx.
ii) Producing an "x86 Superdome" – that isn't in the same BladeSystem c7000 chassis, but in a Superdome 2 blade enclosure which shares many components with the c7000. It is different in that it has a resilient compute fabric for interconnecting the blades and I/O enclosures to create electrically isolated partitions, and delivers enhanced failure detection/correction on a level you see in Integrity and Power systems but don't see in the x86 world right now (and not in a Flex chassis either). This is known as "DragonHawk" – if IBM were to do something similar, it would be more akin to an x86 version of the p795, not this Flex chassis.
>> Flex Manager combines a bunch of HP's software features in Superdome as well as a bunch of other features that are currently additional licenses from HP, such as automated provisioning and build software and management through the VM layer.
A closer comparison, of course, would be with BladeSystem/VirtualSystem/AppSystem/CloudSystem, which as I said previously have offered these sorts of capabilities for both Unix and x86 architectures for a few years now. Any conversation about licenses is irrelevant without doing a full TCO comparison, which I hope you will agree is outside the scope of a friendly discussion on a forum.
>> Flex Manager has HP Insight Manager and HP's x86 Analysis Engine functionality built in which is going to be part of these Odyssey systems.
I'd love to hear where you read that. If you talk to HP in any detail, they will tell you that the reason there isn't currently an x86 Analysis Engine similar to the one in the current Superdome 2 IA64 system is that the x86 processors won't have the required features until the next iteration of the Xeon processor – so unless IBM have done a ton of firmware work here that they won't need in the next rev of Xeon, I find that highly unlikely.