IBM shoots higher and lower with x86 Flex Systems
Plus: Expansion node for GPUs, flash, and other goodies
Adding in GPUs, flash drives, what else?
This expansion node is only supported on the new Flex x220 node using the Xeon E5-2400 processor and the existing Flex x240 node using the Xeon E5-2600 processor. The expansion connector on the x220 and x240 nodes hangs off of the second processor socket in these machines, so you have to have both Xeon E5 CPUs installed in the node to use the PCI expansion chassis.
An interposer cable runs from the server node to the expansion node, which then links into a PCI-Express 2.0 switch on the board. This switch has two x16 links to two mezzanine I/O ports for extra I/O into the backplane of the Flex System chassis; it also has two x16 links and two x8 links to peripheral slots.
Note: These do not run at PCI-Express 3.0 speeds, which basically double the bandwidth of PCI-Express 2.0 slots, and the mezz cards in the expansion node also run at PCI-Express 2.0 speeds. You can plug PCI-Express 3.0 cards in and run them in 2.0 mode, however, which may crimp performance for cards that can actually saturate the link.
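For a sense of the gap: a PCI-Express 2.0 lane signals at 5 GT/s with 8b/10b encoding, or roughly 500 MB/s of payload each way, while a 3.0 lane signals at 8 GT/s with the leaner 128b/130b encoding, or roughly 985 MB/s. A quick back-of-the-envelope sketch (the figures come from the PCI-SIG specs, not from IBM's documentation):

```python
# Back-of-the-envelope PCI-Express payload bandwidth, per lane and per x16 slot.
def lane_bandwidth_mb(transfer_rate_gts, encoded_bits, payload_bits):
    """Payload bandwidth per lane, per direction, in MB/s."""
    return transfer_rate_gts * 1e9 * (payload_bits / encoded_bits) / 8 / 1e6

pcie2 = lane_bandwidth_mb(5.0, 10, 8)     # PCIe 2.0: 5 GT/s, 8b/10b encoding
pcie3 = lane_bandwidth_mb(8.0, 130, 128)  # PCIe 3.0: 8 GT/s, 128b/130b encoding

print(f"PCIe 2.0 lane: {pcie2:.0f} MB/s, x16 slot: {pcie2 * 16 / 1000:.1f} GB/s")
print(f"PCIe 3.0 lane: {pcie3:.0f} MB/s, x16 slot: {pcie3 * 16 / 1000:.2f} GB/s")
print(f"Ratio: {pcie3 / pcie2:.2f}x")
```

The near-doubling comes as much from the denser 128b/130b encoding as from the higher signalling rate.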
In any event, you have two full-height, full-width PCI-Express x16 slots on the left and two low-profile x8 slots on the right. There's enough power in the node to support four low-profile cards, two full-height cards, or one double-wide PCI-Express peripheral in each expansion node.
Physically, you should be able to get two x16 peripherals and two x8 low-profile peripherals into the node. There's only room, power, and cooling for one Nvidia Tesla M2090 GPU coprocessor, which is not a lot, but with one M2090 in there you can still use the two low-profile slots on the other side.
Four Xeon E5s in a double-wide pod
The Flex System x440 is the x86 companion to the Power7-based p460 node; it uses the Intel C600 chipset and the two QuickPath Interconnect ports on each socket to glue four Xeon E5-4600 processors together into a shared-memory box that can handle much larger workloads than the two-socket variants can.
The Flex x440 double-wide, quad-socket server node
IBM is supporting the four-, six-, and eight-core versions of the Xeon E5-4600 processors in the box, and it looks like all eight possible processor options are on the menu, something IBM probably could not do in a BladeCenter machine because of the skinniness of the blades. (Oddly enough, you could get the same seven four-way servers into the same 10U space.)
The Flex System x440 machine has a dozen memory slots per socket like other E5-4600 machines to deliver up to 1.5TB of memory capacity using 32GB load-reduced DIMM (LRDIMM) DDR3 memory sticks. The machine needs a lot of space for the processors and memory, and therefore it only has room for two 2.5-inch disk drives instead of the four you might expect.
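That 1.5TB ceiling is just the DIMM count multiplied out; a quick sanity check, using only the slot counts quoted above:

```python
# Sanity-check the x440's quoted memory ceiling.
sockets = 4            # quad-socket Xeon E5-4600 node
dimms_per_socket = 12  # a dozen memory slots per socket
dimm_gb = 32           # 32GB LRDIMM sticks

total_gb = sockets * dimms_per_socket * dimm_gb
print(f"{total_gb} GB = {total_gb / 1024:.1f} TB")  # 1536 GB = 1.5 TB
```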
IBM no doubt expects customers to use Storwize V7000 disk arrays for application and systems software storage and only use local disks for the server node operating system (if that). In many cases, customers will just put flash storage in these bays to boost I/O performance and leave everything on external disks.
Internals of the Flex x440 server node
The machine has two Emulex BE3 LAN-on-motherboard (LOM) interfaces, each with two 10 Gigabit Ethernet ports, and if you don't want to use the integrated 10GE ports you can buy a variant of the machine without them (presumably at a lower cost).
The server node has four mezzanine card slots that link the server to the midplane of the chassis and then out to the integrated switches. Each mezz card has one x16 and one x8 connection running at PCI-Express 3.0 speeds. The same flash kit that is available for the x220 node is also available on the x440 node, but it will not be available until October 18. The x440 itself ships on August 24.
The two Flex System nodes can run Microsoft Windows Server 2008, Red Hat Enterprise Linux 5 and 6, and SUSE Linux Enterprise Server 10 and 11. VMware's ESXi 4.1 Update 2 and ESXi 5.0 Update 1 hypervisors are also supported on the nodes. Pricing information was not available at press time for the new iron. ®