California: Cisco gives out some details, finally
Just don't call it a blade server
Yesterday's launch of the California blade system by Cisco Systems was a little short on the feeds, speeds, and pricing information. But if you want someone to buy a funky new data center gadget, you have to be a little more specific, and luckily Cisco has some server people who understand this.
Dante Malagrino, director of engineering at Cisco's Server Access Business Unit and a proud poppa involved in the creation of the California system, was keen on talking to El Reg about more details of the system. But, because some elements of the system are not yet announced, there were some limits on what he could say.
Take a look at this pretty picture before we pull the California system apart:
The Cisco Systems 'California' Unified Computing System
Let's start with the blade server chassis, the UCS 5100, a 6U form factor that mounts in a standard computer rack. Rather than mounting its blades vertically, Cisco mounts them horizontally. There will be half-width blades and full-width blades, both of which are generically known as the UCS B Series blade servers. (What happened to the A Series, you ask? My guess is they used earlier "Harpertown" Xeon chips from Intel and were the alpha designs for the systems.) The 5100 chassis will hold up to eight half-width servers or four full-width servers.
It is a fair guess - and Cisco isn't saying - that both blades use custom motherboards. The memory expansion ASIC created by the formerly independent Nuova Systems, which Malagrino says will allow up to four times the maximum main memory of standard Nehalem machines, has to be wired between the processor and the memory subsystem in the QuickPath Interconnect scheme, and that is not something you do on an off-the-shelf board.
But it could be that one blade (the full-width one) has memory expansion and the half-width blade does not, or that the full-width blade offers 4X memory and the half-width one offers 2X memory compared to standard boards. (I would guess the latter.) Either way, the B Series blade servers have to have enough room for up to four times as many DDR3 memory slots on the motherboard as regular Nehalem motherboards.
How much memory could we be talking about? Let's take another look at the Nehalem mobos from Super Micro, which we told you about last November. The X8DA3 mobo is a two-socket board based on Intel's "Tylersburg" IOH-36D chipset; it has six DIMM slots per socket (12 in all) and supports a maximum of 96 GB of main memory, with DDR3 running at 1.33 GHz, 1.07 GHz, or 800 MHz. A fatter X8DTN+ mobo is based on the same Tylersburg chipset and has nine DIMM slots per processor socket, for a total of 18 DIMMs and a maximum capacity of 144 GB.
If Cisco can deliver blade servers that support 384 GB or 576 GB of main memory for two sockets, this California box will be a screamer on virtualized workloads. Then again, if Cisco can boost main memory, so can other server makers, either by themselves (as Cisco has done) or through partnerships with MetaRAM or Violin Systems, just to name two memory innovators.
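To put rough numbers on those capacities, here is a back-of-the-envelope sketch. It assumes 8 GB DDR3 DIMMs (which is what the Super Micro figures above imply: 96 GB across 12 slots) and treats the memory-extender ASIC as a simple multiplier on effective slot count; both are my working assumptions, not Cisco's stated design.

```python
# Back-of-the-envelope memory math for two-socket Nehalem boards.
# Assumes 8 GB DDR3 DIMMs, the fattest modules implied by the
# Super Micro numbers (96 GB / 12 slots = 8 GB per slot).
DIMM_GB = 8

def capacity_gb(slots_per_socket, sockets=2, expansion=1):
    """Total memory for a board, given DIMM slots per socket and an
    optional multiplier from a memory-extender ASIC."""
    return slots_per_socket * sockets * DIMM_GB * expansion

# Standard boards, as on the Super Micro X8DA3 and X8DTN+:
print(capacity_gb(6))   # 12 DIMMs -> 96 GB
print(capacity_gb(9))   # 18 DIMMs -> 144 GB

# With a rumored 4X memory expansion ASIC:
print(capacity_gb(6, expansion=4))   # -> 384 GB
print(capacity_gb(9, expansion=4))   # -> 576 GB
```

Note that the reported 384 GB ceiling for the B Series lines up with 4X expansion on a 12-slot board, not the fatter 18-slot layout.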
Incidentally, Cisco could have chosen to use MetaRAM memory, which takes cheaper low-capacity chips and its own ASICs to make fatter memory modules that plug into standard slots and cost less than modules using fatter DRAM. The fact that Cisco has launched California ahead of the Nehalem launch, expected on March 31, only means it wanted to talk first and stall companies that might be getting ready to buy servers from Hewlett-Packard, Dell, IBM, and Sun Microsystems, among others. Each of these companies could roll out their own memory extenders, and I personally hope they do.
One more thing: I am hearing that the B Series blades will top out at 384 GB, but Cisco has not confirmed this.
The B Series blades can be run in a stateless mode, with no local disk storage, according to Malagrino, but a lot of workloads require local storage, and therefore Cisco will add disk drives and flash memory to the B Series blades. Looking at this picture, it appears that the B Series blades support two 2.5-inch disk slots and maybe a DVD drive. The blades also have network adapters on mezzanine cards, which I presume will carry at least one (and possibly two) 10 Gigabit Ethernet ports, but Cisco has not said. What the company has said is that it has three different versions of network adapters for the blades, which are optimized for virtualization, compatibility with existing drivers, or high-performance Ethernet links.
The UCS 5100 chassis also holds the UCS 2100 fabric extenders, and up to two of these can be put into the chassis. These extenders, which are part of the Cisco secret sauce that makes the California system different from other commercial blade servers, link the blades to the network fabric interconnect, which is called the UCS 6100. Each fabric extender has up to four 10 Gigabit Ethernet links between the blades and the fabric interconnect.
The UCS 6100 fabric interconnect is what links the various chassis in the California system, and racks of machines, together and to outside storage. This UCS 6100 box is not, as the press has reported, a Nexus 5000 switch, which provides Fibre Channel over Ethernet as well as 10 Gigabit Ethernet switching. Malagrino says it is a riff on the Nexus 5000, but that it includes more memory and runs a lot more software. For example, the UCS Manager system management program, which takes care of all the iron, is embedded in this top-of-rack switch.
The expandability of the California system is, for the moment, limited by the size and bandwidth of the UCS 6100 fabric interconnects. There are two variants of the box: one that has 20 ports and another that has 40 ports. Each chassis gets a single 10 Gigabit Ethernet link into the fabric interconnect, so the top-end California box will be able to put up to 40 chassis in a single management domain, for a total of 320 servers or 2,560 cores using quad-core Nehalem EP processors.
If and when Cisco hears that customers need more expandability than this, it can adopt 40 Gigabit or 100 Gigabit Ethernet to make a much larger system. Until then, the BladeLogic operating system and application management software that Cisco has OEMed from BMC Software to build the California box will allow administrators to manage multiple California systems from a single pane of glass.
Malagrino provided a little more detail on when the California system will be available. The current target is towards the end of the second quarter. But this being the server business, I would not be surprised to see that slip into July as the bugs get shaken out of the system. ®