
Cisco outs third gen UCS blades and racks

California dreaming

When Intel launched its "transformational" "Nehalem-EP" Xeon 5500 processors for two-socket boxes three years ago, Cisco Systems picked that moment to jump into the server racket and got pole position on the announcements. Three years on, Cisco is not yet a tier-one server maker, but it has built a business with a $1.3bn annual run rate and 11,000 customers, and it has to be taken seriously. And it can hang back a day or two, wait for the roar about a new Intel processor to die down, and then talk about its new machines that make use of the chip.

The third generation of "California" Unified Computing System blade and rack servers, which debuts today, is of course based on Intel's much-awaited "Jaketown" Xeon E5-2600 processors, also designed for two-socket systems and also known as the "Sandy Bridge-EP" processors by the chip maker.

Servers based on the Xeon E5-2600 make use of the "Patsburg" C600 chipset to implement one of three different "Romley" server platforms that Intel is expected to launch this year. "Sandy Bridge-EN" Xeon E5-2400s are expected in somewhat pruned (and presumably less expensive) server platforms, and a four-socket variant of Jaketown, known as the Xeon E5-4600, is also due sometime this year. Because of the three-pronged processor attack, Intel has to carve up the server business slightly differently. Meanwhile, Cisco, like Oracle, is a pure-Xeon server manufacturer these days and will no doubt be enthusiastic about all of the new Xeon E5s.

Cisco's 2012 server plans: More to come

If you are thinking that Cisco might be tempted to adopt "Interlagos" Opteron 6200 processors, or that the possibilities have changed now that Advanced Micro Devices has acquired microserver and system interconnect upstart SeaMicro, don't get too excited.

"We're continually evaluating our processors options," Todd Brannon, marketing manager for unified computing at Cisco – and the former market development manager for Dell's Data Center Solutions (DCS) bespoke server business for hyperscale data center customers – tells El Reg. That doesn't sound hopeful for AMD, which is focused on the core enterprise, data center market, and not any supercomputing or hyperscale cloud workloads that AMD might chase with Opterons and SeaMicro interconnects.

For now, Cisco doesn't want to talk about all that, and is sticking to the rollout of three new servers, a new interconnect fabric, a new I/O chassis module, and a new virtual interface card adapter for the UCS iron.

Carving up market share with blades

Cisco cut its way into the server biz with a blade server, and it is no surprise that it is coming out swinging with a third-generation blade based on the Xeon E5-2600 chips.

Cisco's third generation B Series blade server

The B200 M3 blade is a half-width blade server that slides horizontally into the UCS 5108 chassis, which takes up 6U of rack height and holds eight half-width blades or four full-width blades. There's no chassis change with the Xeon E5 server launch; in fact, Brannon said that Cisco thinks it can get to the end of this decade with the current chassis still supported. (That doesn't mean there won't be an additional chassis at some point, mind you. It just doesn't mean there will be one, either.)

The B200 M3 blade supports the Xeon E5-2600s with four, six, or eight processor cores and supports up to 384GB using regular, registered DDR3 memory sticks in 16GB capacities. The Cisco spec sheets do not say it supports LR-DIMM memory, but the presentation I have seen says the box does support 768GB, and that means 32GB sticks are coming – and for all the other vendors I have spoken to, getting to the full 768GB capacity has meant using LR-DIMMs.
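If you want to sanity-check those capacities, the slot math is simple enough. Here's a back-of-the-envelope sketch in Python; the 24-slot count is inferred from the official 384GB maximum, and the 32GB stick is our inference rather than a confirmed Cisco part:

```python
# Back-of-the-envelope DIMM math for the two-socket B200 M3.
# Assumption: 24 DIMM slots, inferred from the official 384GB maximum
# with 16GB sticks -- not a number Cisco has published alongside it.
DIMM_SLOTS = 24

def max_memory_gb(stick_gb, slots=DIMM_SLOTS):
    """Total capacity with every slot filled with identical sticks."""
    return stick_gb * slots

print(max_memory_gb(16))  # 384 GB with 16GB registered DDR3 sticks
print(max_memory_gb(32))  # 768 GB -- only reachable if 32GB LR-DIMMs arrive
```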

What I can tell you is that Cisco has not used its own Nuova memory extension ASIC, used on some of the existing B Series blades and C Series rack servers, to boost memory capacity by as much as a factor of 2.7. Satinder Sethi, vice president of Cisco's Server Access & Virtualization Technology Group, said that none of the three Xeon E5-2600 machines launched today use the Nuova memory-stretcher ASIC.

Internals of the B200 M3 blade server

The B200 M3 server has two bays for hot-plug storage, which can be filled with SAS disks in 146GB, 300GB, or 600GB capacities or SSDs in 100GB or 200GB capacities; an LSI SAS 2004 disk controller provides mirroring across the pair of drives if you want it. The server also has two slots for SD flash memory cards, which come in 16GB capacities. The blade has two mezzanine I/O slots, one of which is used by the virtual interface card, or VIC, that extends the converged switch fabric from the UCS 6100 or 6200 fabric interconnects through the I/O chassis module and into the blade itself in a completely virtualized manner. Along with the new B200 M3 blade, Cisco is rolling out the VIC 1240, a new modular virtual interface card adapter.

Cisco's new VIC 1240 virtual interface card adapter schematic

With the VIC 1240, two 20Gb/sec links come in from the two I/O chassis modules, usually called Fabric A and Fabric B in the Cisco lingo, as shown in the schematic above. These four 10 Gigabit Ethernet ports can be carved up into 256 programmable virtual interfaces, which can be virtual Ethernet network interfaces or virtual Fibre Channel storage links that are actually Fibre Channel running over converged enhanced Ethernet. The idea is to use the two fabrics as primary and secondary paths for each virtual Ethernet or FC port on the LAN-on-motherboard (LOM) card. (Well, that's what it is, even if it is heavily virtualized.) If you snap an adapter card into the VIC 1240, you can double its capacity to four 10Gb/sec pipes on each of the two fabrics, for a total of 80Gb/sec of bandwidth.
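To keep the port and pipe counts straight, here is a minimal sketch of the bandwidth math, assuming 10Gb/sec per port and the configurations described above:

```python
# VIC 1240 bandwidth sketch: two fabrics (A and B), 10Gb/sec per port.
# Assumption: the add-on adapter card doubles the ports per fabric
# from two to four, as described in the text.
PORT_GBPS = 10
FABRICS = 2

def vic_bandwidth_gbps(ports_per_fabric):
    return PORT_GBPS * FABRICS * ports_per_fabric

print(vic_bandwidth_gbps(2))  # 40 Gb/sec in the base configuration
print(vic_bandwidth_gbps(4))  # 80 Gb/sec with the add-on card fitted
```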

This new VIC 1240 card also supports the Virtual Machine Fabric Extender, or VM-FEX, feature, which debuted in free-standing Nexus converged switches last March and which takes the logical networking embodied in the UCS machine and extends it out into the server hypervisor. VM-FEX was announced to work with VMware's ESXi hypervisor, and last fall Cisco said that it would be extending support for VM-FEX and its Nexus 1000V virtual switch (a key component in the UCS system) to Hyper-V 3.0 sometime in 2012.

Cisco did not, for some reason, launch a full-width, two-socket B Series blade server sporting that memory expansion ASIC developed by Nuova Systems, the company Cisco acquired a few years back to flesh out the California systems. (Although we didn't know that at the time.)

During the Xeon 5500 generation, a two-socket server tapped out at 144GB across 18 memory slots, but the Nuova ASIC let Cisco push that up to 384GB. Of course, with today's Xeon E5-2600s, using standard DDR3 DIMMs, servers can run memory across their 24 slots at full 1.6GHz speed, and if you want to move to load-reduced LR-DIMM DDR3 memory, you can push that up to 768GB across two sockets. Brannon was mum about Cisco's plans with regard to a fat memory system or even a wider blade, but clearly it could push memory quite far – maybe as far as 2TB based on past ratios – using the Nuova ASIC.
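That 2TB figure is just the old Nuova ratio applied to the new LR-DIMM ceiling; here's the speculation spelled out, with no pretense that any of it is on a Cisco roadmap:

```python
# Extrapolating the Nuova memory-stretcher ratio -- pure speculation.
xeon5500_standard_gb = 144  # two-socket Xeon 5500 max: 18 slots of 8GB
xeon5500_nuova_gb = 384     # the same box with the Nuova extension ASIC
ratio = xeon5500_nuova_gb / xeon5500_standard_gb  # ~2.67, the "factor of 2.7"

xeon_e5_lrdimm_gb = 768     # Xeon E5-2600 max with 32GB LR-DIMMs
print(round(xeon_e5_lrdimm_gb * ratio))  # 2048 GB, ie the 2TB guess above
```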

Rack 'em up

In the past year, Cisco has been ramping up its C Series rack server business because some customers need more memory, storage, or I/O capacity than its blade servers can offer. The company has told El Reg that it expects the rack machines to help fuel its growth this year, and that over the long haul its mix of blade and rack shipments should drift closer to that of the market at large. Blades represent a little less than a quarter of overall server shipments, but Cisco is still mostly selling blade machines. Now that the integrated switching and other benefits of the UCS chassis have been extended to the C Series rack machines, though, the rack servers can be peers to the blades in a single compute cluster, and we can expect their uptake to accelerate.

There are two new C Series rack servers, one with fairly limited disk capacity and the other with a lot more.

The C220 M3 takes up 1U of space in the rack and supports eleven of the sixteen Xeon E5-2600 processors. This includes the top-bin and hottest E5-2690, which has eight cores spinning at 2.9GHz and is rated at 135 watts, as well as the two low-volt, low-watt parts – the E5-2650L and E5-2630L – which respectively have eight cores at 1.8GHz and six cores at 2GHz and burn at 70 and 60 watts. The machine has only 16 memory slots, not the full 24 that the Xeon E5 chip's controller supports, and thus tops out at 256GB of main memory using 16GB sticks. The server has eight hot-plug, 2.5-inch drive bays that can be filled with 7.2K RPM SATA, 10K RPM SAS, or 15K RPM SAS drives. It also has the optional pair of 16GB SD flash drives, plus support for 100GB SATA SSDs. Cisco has a variety of mezzanine and PCI-Express RAID disk controllers that plug into the C220 M3.

Cisco's UCS C220 M3 rack server

The B Series blades have all of their I/O virtualized – which is kinda the whole point – but the C Series machines allow customers to plug in other peripherals, including optional storage controllers. The C220 M3 server has two PCI-Express 3.0 slots, one half-height x8 and one full-height x16, for peripheral expansion. The server hooks back into the UCS switch and manager through the P81E VIC card, which eats that x8 slot. The virtual interface card functions similarly to the VIC 1240, but is limited to two 10GE pipes coming down off one of the fabrics in the UCS switch, which can be carved up into as many as 128 virtual Ethernet or Fibre Channel interfaces. Presumably, if you want redundant fabrics and links, you can put two of these cards into a C Series server if it has two x8 slots. The P81E is still only a PCI-Express 2.0 card.

If you need more storage in your rack server, or you want those redundant links to the UCS switch, then Cisco will probably suggest the new C240 M3 server. It is a two-socket box, of course, and it supports the same eleven processors that the C220 M3 does.

Cisco's UCS C240 M3 fatter rack server

The C240 M3 supports the full 24 memory slots of the Xeon E5-2600 design and therefore supports 384GB of memory max using regular 16GB sticks and 768GB using LR-DIMMs. The machine has five PCI-Express 3.0 slots: one full-height x16, two full-height x8, and two half-height x8. The machine can use an LSI 2008 SAS RAID mezz card or an LSI MegaRAID SAS9266-8i PCI-based card. (The spec sheet doesn't say so, but these are probably the same storage options the C220 M3 rack server has.) The C240 M3 has 24 drive bays in front, and offers the same disk options as its skinnier rack sibling and the two 16GB SD flash units.

Operating system options were not detailed on the spec sheets for the three new machines, but Cisco has been an ardent supporter of Windows and Linux, and most of its customers use one of those two platforms. Cisco is taking orders for the new machines today and will ship them in March. The B200 M3 has an entry list price of $3,654, with the C220 M3 starting at $2,306 and the C240 M3 beginning at $2,455 for base configurations.

The 6296UP interconnect: Double your ports, double your server count

In addition to the new servers and VIC adapter card, Cisco has also rolled out the 6296UP fabric interconnect and the 2204XP chassis I/O module. The 6296UP is twice as tall as the 6248UP it augments, packing 96 ports into 2U of space instead of 48 ports into 1U. At 1.96Tb/sec, the 6296UP interconnect has four times the bandwidth of the original UCS 6100 and twice as much as the 6248UP. The port-to-port hop (and in a UCS system, therefore the server-to-server jump) is under 2 microseconds, which is 40 per cent lower than on the original interconnect.

The new 2204XP chassis I/O module is one of two such devices that plug into the UCS chassis and it is what, in fact, makes the chassis and its blades stateless devices that can be managed by the UCS 6100 and 6200 switches and management controllers. This one has four ports that link the chassis to the interconnect, for 40Gb/sec total and 80Gb/sec across the pair. The 2208XP has eight ports, and therefore can move a total of 160Gb/sec in and out of the chassis across its pair.
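The uplink arithmetic for the pair of modules, sketched under the assumption of 10Gb/sec per port:

```python
# Chassis uplink math for the UCS I/O modules, assuming 10Gb/sec ports
# and the usual two modules per chassis, as described above.
PORT_GBPS = 10
MODULES_PER_CHASSIS = 2

def chassis_uplink_gbps(ports_per_module):
    return PORT_GBPS * ports_per_module * MODULES_PER_CHASSIS

print(chassis_uplink_gbps(4))  # 80 Gb/sec with a pair of 2204XP modules
print(chassis_uplink_gbps(8))  # 160 Gb/sec with a pair of 2208XP modules
```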

One last thing: Sethi tells El Reg that while Cisco is not increasing the size of the UCS Manager management domain beyond its original and current 320 server nodes, it is working on a tool called Multi-UCS Manager that will aggregate multiple UCS Manager domains and allow service profiles for blades and racks to move across domains as workloads shift around a much larger set of machines. The target design for the Multi-UCS Manager uber-tool is to span up to 10,000 server nodes, but it is unclear how far it will be certified when it ships sometime in the second half of this year.
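For a sense of scale, hitting that 10,000-node target means federating a few dozen of today's domains. The division below is our back-of-the-envelope arithmetic, not a Cisco figure:

```python
# How many 320-node UCS Manager domains the Multi-UCS Manager target
# implies -- our arithmetic, not a Cisco figure.
import math

NODES_PER_DOMAIN = 320  # current UCS Manager ceiling
TARGET_NODES = 10000    # stated design goal for the uber-tool

print(math.ceil(TARGET_NODES / NODES_PER_DOMAIN))  # 32 domains to federate
```

However far the certification gets at first, that is a lot of iron under one pane of glass. ®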
