Cisco fattens up UCS with Nehalem EX
Four-socket blade and rack boxes
With its Tuesday introduction of two new servers based on Intel's eight-core Nehalem-EX Xeon 7500 processors, server wannabe Cisco Systems can now run with the big boys - well, the midrange boys anyway.
Cisco's new machines appear to be the ones that El Reg has been saying the company needs to get into the field to compete with the tier-one server makers. And indeed, to become one.
As El Reg previously reported, Cisco already upgraded its existing two-socket B-Series blade servers for the California Unified Computing System and their C-Series rack-based brethren back in mid-March, when Intel rolled out the six-core Westmere-EP Xeon 5600 processors. While two-socket boxes are fine for plenty of workloads, Cisco needs some bigger iron if it wants to be a player. Customers want headroom - even if they don't use it - and many also have bigger jobs that require more processing capacity than the two-socket boxes Cisco has been selling since last year.
Unlike IBM and Dell, which have used the expanded memory capability of the Xeon 7500s to build two-socket rack and blade servers with far more memory capacity than Xeon 5500 and 5600 machines offer, as well as four-socket racks and blades, Cisco is rolling out just one rack and one blade server based on the Xeon 7500s.
Cisco doesn't need to monkey around with putting Xeon 7500s into two-socket machines because it already has its own memory-expansion ASIC, which offers 2.7 times the memory capacity of the fattest Xeon 5500 and 5600 servers with 18 memory slots, in selected models of the B-Series and C-Series boxes. This ASIC doesn't come cheap, but it allows customers to build systems using cheaper 2GB and 4GB memory sticks instead of having to use 4GB, 8GB, and now 16GB sticks to bulk up.
On the two-socket C-Series machines from last year, for example, a C210-M1, which has a dozen memory slots for a maximum of 96GB of memory, sold for $3,309 without any memory in the box, while a C250-M1 with 48 DIMM slots cost $10,339. At the time, 8GB memory sticks cost $1,857, 4GB sticks cost $355 or $375 depending on the speed, and 2GB sticks cost only $189. For fat memory configurations, the math can work for a much more expensive server using cheaper memory.
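The trade-off above can be sketched with some quick arithmetic on the list prices quoted: a bare C210-M1 is far cheaper, but hitting 96GB there means twelve expensive 8GB sticks, while the C250-M1's 48 slots take forty-eight cheap 2GB sticks. A minimal sketch, using only the figures given in the text:

```python
# Rough cost comparison using the list prices quoted above:
# base servers at $3,309 (C210-M1) and $10,339 (C250-M1);
# 8GB DIMMs at $1,857, 2GB DIMMs at $189.
# Illustrative arithmetic only, not a Cisco configurator.

def config_cost(base_price, dimm_price, dimm_count):
    """Total price of a server populated with identical DIMMs."""
    return base_price + dimm_price * dimm_count

# 96GB on the C210-M1: 12 slots force twelve 8GB sticks.
c210_96gb = config_cost(3_309, 1_857, 12)

# 96GB on the C250-M1: 48 slots allow forty-eight cheap 2GB sticks.
c250_96gb = config_cost(10_339, 189, 48)

print(c210_96gb)  # 25593 -- cheaper box, expensive memory
print(c250_96gb)  # 19411 -- pricier box, cheap memory wins
```

At this memory capacity, the much more expensive server ends up several thousand dollars cheaper once the DIMMs are counted.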
With the two Xeon 7500 machines announced Tuesday, the situation is more straightforward. These machines appear to be kosher Nehalem-EX boxes based on the Intel 7500 chipset, with the Cisco memory ASIC not being used to extend capacity. That doesn't mean such machines packing the Cisco ASIC won't be available in the future, of course.
The UCS B440-M1 blade server fits in the same UCS 5100 series chassis as the other horizontal blades in the California system. Up to four of the full-width blades can fit in the 6U blade enclosure, for a total of 128 cores in a single chassis. The B440-M1 has 32 DDR3 memory slots, and Cisco is not yet supporting 16GB memory sticks, so maximum memory tops out at 256GB.
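The chassis figures above are easy to reproduce: four full-width blades per enclosure, four sockets per blade, and eight cores per top-bin Xeon 7500. A quick back-of-envelope check:

```python
# Back-of-envelope chassis maths for the figures quoted above:
# four full-width B440-M1 blades per 6U UCS 5100 enclosure,
# four sockets each, eight cores per top-bin Nehalem-EX.

BLADES_PER_CHASSIS = 4
SOCKETS_PER_BLADE = 4
CORES_PER_SOCKET = 8          # eight-core Xeon 7500
DIMM_SLOTS_PER_BLADE = 32
MAX_DIMM_GB = 8               # 16GB sticks not yet supported

cores_per_chassis = BLADES_PER_CHASSIS * SOCKETS_PER_BLADE * CORES_PER_SOCKET
memory_per_blade_gb = DIMM_SLOTS_PER_BLADE * MAX_DIMM_GB

print(cores_per_chassis)    # 128 cores per chassis
print(memory_per_blade_gb)  # 256 GB per blade
```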
The B250-M1 (supporting quad-core Xeon 5500s) and B250-M2 (supporting quad-core and six-core Xeon 5600s) have 48 memory slots, topping out at 384GB. These are the more expandable blades if what you need is memory capacity, and the Xeon 5600s are a lot less expensive than the Xeon 7500s. If you need more cores and can live with lower clock speeds, then the Xeon 7500s and the B440-M1 are what you need.
The Cisco Systems B440-M1 Nehalem-EX blade server
The B440-M1 blade can be configured with Xeon 7500s with four, six, or eight cores, but only supports 4GB and 8GB DIMMs. It has room for two mezzanine adapters (each with dual-port 10 Gigabit Ethernet links) for 40Gb/sec of fabric bandwidth back to the UCS switch. The blade has room for four 2.5-inch SAS or SATA disks and an integrated LSI Logic 2108 controller with RAID data protection and 1GB of write cache. The UCS 5108 chassis has from one to four 2,500 watt power supplies, which are rated at 92 per cent efficiency.
The rack model of Cisco's new Nehalem-EX machines is the C460-M1, and it comes in a 4U chassis that has room for 64 memory slots for a total of 512GB using 8GB memory sticks. As with the other Xeon 7500 machine, Cisco is only supporting 4GB and 8GB sticks right now. (In the spec sheet, Cisco says it is using Samsung's DDR3 memory.) The chassis has room for a dozen 2.5-inch SAS or SATA drives (in 10K or 15K RPM speeds), hot-swappable and mounted from the front. The server has eight PCI-Express peripheral slots on the mobo (four x8, three x4, and one x1 at the PCI 2.0 level) plus two PCI-Express 1.0 slots (x4) for legacy cards, with an eleventh slot dedicated to an LSI MegaRAID disk controller. The server has one Gigabit Ethernet and two 10 Gigabit Ethernet ports on the board.
The Cisco Systems C460-M1 Nehalem-EX rack server
Cisco says that the new Nehalem-EX machines will ship in the summer of this year. Pricing was not available at press time, and will no doubt not be divulged until the boxes are actually shipping.
Existing B-Series and C-Series machines from Cisco support Microsoft Windows Server 2003, 2008, and 2008 R2; Novell SUSE Linux Enterprise Server 10 SP3 and 11; Red Hat Enterprise Linux 4.8, 5.3, and 5.4; and Oracle Enterprise Linux 5.3 (on the B-Series only). To support Nehalem-EX processors, customers may have to move to the latest releases, depending on how far Xeon 7500 support is backported into the releases. RHEL 5.5 and SLES 11 support the Xeon 7500s, and so does Windows 2008 R2.
That leaves only a couple more questions. Will Cisco use the Boxboro chipset from Intel to make an eight-socket server based on the Xeon 7500s? The company could no doubt pack two of these into a UCS 5100 chassis, with a reasonable amount of memory, or could use its memory-extender ASIC to put eight sockets and a truly large amount of memory - say 512GB or 1TB - up against those 64 cores. A 6U or 7U C-Series rack machine with eight processors and some memory extension might be able to have 2TB or 4TB, yielding a truly enterprise-class server in terms of cores and memory - one able to compete with all but the very largest systems out there. ®