Nosing around IBM's Power6 blade
Six years coming
IBM has teased with scant details of its long-awaited updates to AIX and its Power6-based blade server line — but today the specs have dribbled out of Big Blue at last.
The first blade running on the dual-core Power6 is named JS22 Express. Let's poke around:
Form factor: A single-wide blade for BladeCenter S, BladeCenter H or BladeCenter HT chassis.
Processor: Four 64-bit 4.0GHz Power6 cores (two dual-core processors), with 4MB of Level 2 cache per core.
Peak processor-to-memory bandwidth is 21.3GB/s; peak internal I/O bandwidth is 5.8GB/s.
Memory: 4GB (2 x 2GB), up to 32GB max per blade — with four DIMM slots, ECC Chipkill DDR2 SDRAM at 667MHz (1,2,4GB DIMMs) or 533MHz (8GB DIMMs).
Storage: One 73GB or 146GB 2.5-inch 10K RPM SAS drive (non-hot-swappable).
Networking: Integrated P5IOC2 controller with two Host Ethernet adapters. Support for optional dual gigabit Ethernet daughter card.
Optional connectivity: 4Gb/s Fibre Channel, 1 or 10 Gigabit Ethernet, 1x or 4x InfiniBand, iSCSI expansion card and Myrinet.
More I/O plz: PCI Express connector for high-speed daughter cards. Integrated connector for legacy daughter cards.
Operating system support: AIX v5.3 or later, SUSE Linux Enterprise Server 10, Red Hat Enterprise Linux.
The JS22 Express is listed at $10,363.
A base JS22 system has four 4GHz cores (activated), 8GB of main memory and no disk drives. It's listed at $6,699 and will be available November 30. ®
Not enough I/O bandwidth or memory
The POWER6 processors are very powerful. But IBM's BladeCenter blades were designed in the era of single-core x86 processors. This limits memory capacity and I/O bandwidth.
The POWER6 p570 can support up to 96 GB RAM per dual-core processor. The POWER5 servers could support up to 64 GB RAM per dual-core processor. This blade supports only 16 GB RAM per dual-core processor. So memory-resident apps need not apply.
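A back-of-the-envelope restatement of those figures (the numbers are the ones quoted above, nothing I've independently verified) makes the gap plain:

```python
# Max RAM per dual-core POWER processor, using the figures quoted above.
# The JS22 entry divides the 32 GB per-blade ceiling across its two
# dual-core chips.
systems_gb = {
    "POWER6 p570":    96,       # GB per dual-core processor
    "POWER5 servers": 64,       # GB per dual-core processor
    "JS22 blade":     32 // 2,  # 32 GB blade, two dual-core chips -> 16 GB
}

for name, gb in systems_gb.items():
    print(f"{name}: {gb} GB RAM per dual-core processor")
```

By that reckoning the blade offers a sixth of the p570's memory headroom per processor.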
But let's assume your data is not in memory. That means you will access it over the network. But unless you use the dual-port 10 Gigabit Ethernet card or a dual-port, double data rate InfiniBand card, the network will be a bottleneck on blades with this level of performance.
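To put rough numbers on that bottleneck, here is a sketch comparing nominal link rates against the blade's quoted memory bandwidth. The link rates are my own assumed nominal figures (and the InfiniBand number is an approximation of the 4x DDR data rate), not specs from the article:

```python
# Compare nominal network link rates (assumed figures) against the
# 21.3 GB/s processor-to-memory peak quoted in the spec list above.
MEM_BW_GBPS = 21.3  # GB/s, from the JS22 spec sheet

links = {
    "1 GbE":             1.0 / 8,   # 1 Gbit/s  -> 0.125 GB/s
    "10 GbE":            10.0 / 8,  # 10 Gbit/s -> 1.25 GB/s
    "4x DDR InfiniBand": 16.0 / 8,  # ~16 Gbit/s data rate -> ~2 GB/s (approx.)
}

for name, gb_per_s in links.items():
    ratio = MEM_BW_GBPS / gb_per_s
    print(f"{name}: {gb_per_s:.3f} GB/s, ~{ratio:.0f}x below memory bandwidth")
```

On those assumptions, a single gigabit link delivers well under 1 per cent of what the memory subsystem can move, which is the commenter's point in miniature.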
My guess is most POWER6 blades will be used in HPC clusters on InfiniBand or 10GE networks. But those will have to be applications that do not require large amounts of local RAM.
Right now the POWER6 blade looks like it will be very fast at waiting.
It's quite sweet. I installed/configured/set up our application for a customer on a 9117-MMA (570 POWER 6 with 16 cores) a couple months ago. I made a partition to load up the beta AIX 6 during testing and I liked what I saw.
Those chips run stuff nice and fast. I can’t wait until they get rolled out on the other product lines.
I've spent the last few days being blue-rinsed by IBM on p6 & AIX 6. Very nice... As well as the v. desirable new hardware, AIX 6 (née 5.4 :-) has got some damn clever tricks too.
IBM's LPARs let you carve your machine up as you see fit (starting from POWER4). POWER5 brought in the micro partitions (see Cesar's comment), which lets you finesse things considerably.
AIX 6 brings in Workload Partitions (WPARs), which let you effectively slice your O/S instance even further (I think this is similar to Sun's Containers?). Because it's AIX 6 doing this, the function is available on POWER4, 5 and 6 (but not older kit) - you *don't* have to have p6.
Maybe a good way to tell how this plays is to see how the competitors (HP, Sun) try to play down IBM's announcements. Right now, I can't see any real compelling reasons not to play the IBM game, but then I may be a bit biased (no, I don't work *for* IBM, but I've spent a whole lot of time working *with* IBM - anybody remember 6150 / RTPC?).