IBM shoots higher and lower with x86 Flex Systems

Plus: Expansion node for GPUs, flash, and other goodies

Big Blue is adding two new compute nodes based on recent Xeon processors from Intel, plus a PCI expansion node that can be used to strap on GPUs, flash storage, and other system goosers, as it continues to flesh out its Flex System modular servers - the basis of its PureSystems integrated server-storage-networking stacks and the PureApplication automated cloudy tools that ride on top of the stack.

Processors from AMD are still missing in action in the Flex System compute nodes, and Alex Yost, vice president in charge of the PureSystems stacks at IBM, would not comment on any plans Big Blue might have to add Opteron processors to those nodes in the future.

Obviously, if there were enough customer demand, the company could whip together a system using Opteron 4200s and 6200s with relative ease. But for now, the Flex System machines are limited to Intel's "Sandy Bridge" Xeon E5 processors as well as Big Blue's own Power7 processors.

IBM launched the "Project Troy" Flex System iron back in April; it features a 10U rack-mounted chassis that holds up to 14 horizontally mounted, single-wide server nodes.

Like blade servers, these Flex nodes have onboard and mezzanine cards for server and storage networks and integrated Ethernet, InfiniBand, and Fibre Channel switching. But by reorienting the servers and making them half as wide as the chassis as well as considerably taller than a BladeCenter blade server, IBM can cram regular components into the server nodes without causing overheating issues. IBM is also providing double-wide nodes that double up the processing and memory capacity compared to the single-wide nodes.

The Flex x220 server node

With the initial Flex System iron, which El Reg went through in great detail in the wake of the announcement, IBM had a single-wide node, called the x240, based on Intel's Xeon E5-2600 processors, as well as two-way and four-way nodes based on its own Power7 chips. The two-way Power7 node is the p260 and the four-way, double-wide node is the p460.

Later that month, IBM came out with a series of machines called PowerLinux, which have their firmware altered so they can only run Red Hat Enterprise Linux or SUSE Linux Enterprise Server. As part of that push, IBM offered a low-priced Flex System p24L node based on the existing p260, but with lower prices for processors, memory, disks, and Linux licenses than the plain vanilla Power7-based nodes.

Intel has of course added some more Xeons to the lineup since then, namely the Xeon E5-2400s and the Xeon E5-4600s, and it is these processors that are being added to the Flex System iron today.

The Xeon E5-2400s that debuted in May are aimed at two-socket servers, just like the E5-2600s that came out in March. But they have more limited memory capacity and less bandwidth, both between their sockets across the QuickPath Interconnect and out to the world through the "Patsburg" C600 chipset. The E5-2400s also sport lower prices, and for compute-intensive workloads that don't lean hard on memory or I/O, they are in many cases the better option compared to the E5-2600s.

The E5-4600s use the two QPI links per socket in the E5 design to link four processors into a single system image with lots of memory and I/O capacity, which in many cases obviates the need to move up to a more expensive and less dense Xeon E7 machine.

Internals of the Flex x220 node

The Flex System x220 node has two processor sockets mounted in the center of the mobo, and all nine models in the Xeon E5-2400 lineup are supported in the machine. The Xeon E5-2400s come in versions with four, six, or eight cores, varying L3 cache sizes, and different feature sets.

The x220 node has six DDR3 memory slots per socket, for a maximum of 192GB of total capacity using 16GB sticks. IBM could easily double this to 384GB by using fatter sticks, and very likely will at some point.

Memory can run at up to 1.6GHz, and IBM is using low-profile memory to cram everything into the node. In some cases, solid state drives attached to the node lid sit directly above the memory area on the motherboard, so every millimeter of space counts.

Both regular 1.5 volt memory and 1.35 volt memory (known as low-voltage memory, at least until it becomes the new normal) can be used. You have to install both processors to reach the maximum capacity since the memory controllers are on the processors themselves; if you only install one CPU, the capacity halves.
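For those keeping score, the capacity math is simple enough; here's a quick back-of-the-envelope sketch in Python. The 16GB sticks are what IBM supports today, while the 32GB sticks are our assumption about what those "fatter sticks" will be:

```python
# Quick capacity math for the Flex x220's memory (a sketch, not IBM's spec sheet)
SLOTS_PER_SOCKET = 6   # six DDR3 slots hang off each processor's memory controller

def max_memory_gb(stick_gb, cpus_installed=2):
    # Only slots attached to an installed CPU are usable, since the
    # memory controllers live on the processors themselves
    return cpus_installed * SLOTS_PER_SOCKET * stick_gb

print(max_memory_gb(16))                     # 192GB with both CPUs and 16GB sticks
print(max_memory_gb(32))                     # 384GB if IBM qualifies 32GB sticks (our assumption)
print(max_memory_gb(16, cpus_installed=1))   # 96GB with only one CPU installed
```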

The Flex x220 node has two hot-swap 2.5-inch disk bays in the center front of the node, and these can be configured with either disk or flash drives as suits customer needs. The machine has software RAID for mirroring and striping (using an LSI ServeRAID C105 controller), and if you want to do it with hardware, you can snap in a ServeRAID H1135 and not waste CPU cycles on this RAID work.

You can add four internal flash drives and four drives that mount in the front (two in each 2.5-inch bay) for a total of eight drives, and then do a RAID 5 or RAID 6 across those flashies if you snap in the ServeRAID M5115 controller. IBM is supporting 200GB flash drives now, but Yost says that in the fourth quarter IBM will add support for 400GB units, doubling the flash capacity to 3.2TB per node.
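If you want the arithmetic behind that, here's a quick raw-capacity sketch in the same vein as the memory math above; the drive counts are as described, and RAID 5 or RAID 6 parity will of course shave something off the usable number:

```python
# Raw flash capacity on the x220 with the ServeRAID M5115 option (a sketch)
INTERNAL_DRIVES = 4   # flash drives mounted inside the node
FRONT_DRIVES = 4      # two in each of the two 2.5-inch front bays

def raw_flash_tb(drive_gb):
    # Raw capacity before any RAID 5/6 parity overhead is taken out
    return (INTERNAL_DRIVES + FRONT_DRIVES) * drive_gb / 1000

print(raw_flash_tb(200))   # 1.6TB per node with today's 200GB drives
print(raw_flash_tb(400))   # 3.2TB per node once the 400GB units arrive
```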

Some models come with a Gigabit Ethernet LAN-on-motherboard (LOM) interface welded on, others don't. In terms of slots, there is a dedicated PCI-Express 3.0 x4 slot that can only be used for the ServeRAID controllers.

The node has a dedicated connector running at PCI-Express 3.0 x16 speeds that links to an external PCI expansion node (more on this in a second). There are also two mezzanine cards that snap into the back of the board (on the right-hand side in the picture above), each with an x8 and an x4 link out to the Flex System chassis backplane for hooking into switches for external networks and storage.

The Flex x220 server will be available on August 24.

The Flex System server and PCI expansion node

The Flex x240 server node linked to the PCI expansion node

As El Reg predicted back in early April when the Flex System iron first came out, IBM has indeed come up with an expansion chassis that allows for PCI-based peripherals such as Nvidia Tesla GPU coprocessors or Fusion-io flash storage (just to name two possibilities) to be linked to the two-socket x86-based Flex System nodes.
