Original URL: https://www.theregister.com/2009/11/10/cisco_bigger_servers/

Cisco keeps schtum on big iron plans

More SMP scale and Opterons required

By Timothy Prickett Morgan

Posted in Channel, 10th November 2009 16:14 GMT

Comment There is no question that networking giant Cisco Systems wants to be a player in the server racket, and its recent Acadia partnership with EMC to sell preconfigured Vblock setups, complete with Cisco blade servers and networking, VMware server virtualization, and EMC storage and system management tools, drives the message home. But it is going to take far more than some clever two-socket Intel server designs for Cisco to be a real player.

Cisco may have been talking about its servers since March - ahead of Intel's "Nehalem EP" Xeon 5500 processor launch, in fact - but the company is only now ramping up shipments. The Xeon 5500s are used in the B Series blade servers that are part of Cisco's "California" Unified Computing System, which debuted in mid-March, as well as in the C Series rack servers, which Cisco announced in June.

The entry B200 blades started shipping in July, and the high-end B250 with Cisco's home-grown memory expansion electronics (which allows the two-socket Xeon 5500 box to support 384 GB of main memory compared to the 144 GB limit in a regular Xeon 5500 box) started shipping in early October. At that time, Cisco announced the feeds, speeds, and prices for its C200, C210, and C250 rack servers (the latter with the memory extension ASIC) and said the machines would start shipping in November, meeting its goal of having the products out the factory door by the fourth quarter.

It is a pretty good start for a newbie server maker. Cisco has reasonably innovative server designs, a partnership to push the boxes, and a sales channel that is trying to figure out how to work with Cisco to peddle its products and perhaps make a higher margin than might be possible with other products. It also has deep enough pockets to make mistakes and still stay in the server game - provided it doesn't keep trying to buy everything in sight.

Paul Durzan, director of product management for the Server Access and Virtualization Group, says that Cisco is not done adapting its server lineup. While Cisco has never admitted this, I got the distinct impression that rack-mounted servers were not part of the original California plan, but some key potential customers explained to Cisco that they preferred rack servers for a variety of reasons, including the desire to have local storage options and more peripheral expansion. "There are a group of people who only want rack mounts," admits Durzan.

If Cisco did indeed adapt its California strategy, this could make it a lot easier to sell servers. As El Reg previously reported, in the second half of 2010, Cisco will allow the C Series rack servers to plug into the UCS 6100 switch, a Nexus-style converged switch that supports server and storage traffic over the same 10 Gigabit Ethernet backbone (using the Fibre Channel over Ethernet protocol for storage). This will allow the same integrated management tools used in the California blade servers - one of the key selling points in the UCS setup - to be extended out to the C Series racks.

The fact that this is not already the case suggests that C Series racks were an afterthought, but again, Cisco is mum on the subject. Cisco could have been thinking that rack server customers would be more inclined to buy top-of-rack Nexus 5000 switches and end-of-row Nexus 7000 switches and use other management tools.

What is clear is that Cisco now expects the C Series rack machines to significantly grease the skids for its server business. "The idea behind unified computing has clearly caught on," says Durzan. "The reception has been incredible, and we have definitely captured mindshare and already have customers in production. The Cs are an easier sell, and rack servers represent a larger market."

Cisco believes that the memory extender electronics in the B250 blade server and C250 rack server are a key differentiator: for any given memory capacity, Cisco says it can deliver it at a quarter to a third of the price, because its machines can use lower-density, lower-priced DDR3 memory to reach that capacity. And if customers need to cram more memory into their boxes, Cisco can deploy enough 8 GB DIMMs to support 384 GB on a two-socket machine - something no other vendor can do yet.

Durzan doesn't expect a lot of customers to be interested in 8 GB DIMMs at this point because of the high price - we're talking $60,720 for 384 GB in a B250 or C250 server - but maybe by the end of 2010 or in early 2011, as 16 GB DIMMs come out and 8 GB DIMMs start coming down in price, denser memory will be an option. For now, 4 GB DIMMs are really the practical economic limit, and even so, Cisco can deliver 192 GB of memory on these machines for $10,992, compared to $30,510 for regular Xeon 5500 servers, which have to use 8 GB DIMMs just to reach 144 GB of capacity.
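To put Cisco's "quarter to a third" claim in per-gigabyte terms, here is a quick back-of-the-envelope sketch, using only the capacities and prices quoted above:

```python
# Back-of-the-envelope memory cost comparison, built from the prices
# quoted in this article. Per-GB cost = quoted price / total capacity.

configs = [
    # (description, total capacity in GB, quoted price in USD)
    ("Cisco B250/C250, 4 GB DIMMs, 192 GB", 192, 10992),
    ("Cisco B250/C250, 8 GB DIMMs, 384 GB", 384, 60720),
    ("Plain Xeon 5500 box, 8 GB DIMMs, 144 GB", 144, 30510),
]

for name, capacity_gb, price_usd in configs:
    print(f"{name}: ${price_usd / capacity_gb:.2f} per GB")

# Expected output:
#   Cisco B250/C250, 4 GB DIMMs, 192 GB: $57.25 per GB
#   Cisco B250/C250, 8 GB DIMMs, 384 GB: $158.12 per GB
#   Plain Xeon 5500 box, 8 GB DIMMs, 144 GB: $211.88 per GB
```

At about $57 per GB against about $212 per GB, the 4 GB DIMM configuration lands at roughly 27 per cent of the cost of the plain Xeon 5500 setup - squarely inside the quarter-to-a-third range Cisco is claiming.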

So, will IBM, Hewlett-Packard, Dell or Fujitsu cook up their own memory extension electronics? "I wouldn't be surprised if other companies try to copy what we have done," says Durzan with a laugh. "But the truth is, this should have been done long ago."

Cisco has blades and racks, memory extension, integrated switching and management, and the volume-leading server virtualization platform (VMware's vSphere 4.0). So what else does Cisco need to be a server player? Bigger servers, and maybe smaller ones, too.

The classic mistake that all server vendors make at some point in their product lines is failing to provide enough of a ceiling for the product line to support the workloads customers want to plop on the boxes. IBM woefully underpowered its System/38 minis in the early 1980s, and customers hit ceilings with its AS/400 minis in the early 1990s. When IBM's mainframes shifted to CMOS engines in the early 1990s, customers were left in the lurch, and Hitachi mopped up for a few years selling its Skyline mainframes, which were based on the older bipolar chip technology that all the mainframe makers had used to build their engines.

X86 servers were all the rage in the 1980s, but efficient and affordable symmetric multiprocessing did not come to the platform until the late 1990s, limiting their appeal to print and file serving and relatively modest application serving. There are many more examples of customers hitting the performance ceiling on a platform.

Cisco's B blades and C racks are two-socket machines, and they are limited to whatever chips Intel can deliver. And here we are, on the verge of Intel's announcement of the eight-core "Nehalem EX" processor, and Cisco is not saying anything about its plans for bigger iron. The "Boxboro-EX" chipset will apparently be able to connect four or eight sockets together gluelessly, delivering 32 or 64 cores and 512 GB or 1 TB of main memory. This is pretty big iron, and the kind of ceiling that Cisco's current and future customers will want to see.
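The arithmetic behind those figures is simple enough; the sketch below assumes 16 DIMM slots per socket populated with 8 GB DIMMs, which is our reading of the Boxboro-EX platform rather than anything Cisco or Intel has confirmed here:

```python
# Glueless Nehalem EX scaling, per the Boxboro-EX chipset description above.
# The 16-DIMMs-per-socket figure is an assumption for illustration.

CORES_PER_SOCKET = 8      # eight-core "Nehalem EX" processor
DIMMS_PER_SOCKET = 16     # assumed slot count per socket
DIMM_SIZE_GB = 8          # 8 GB DIMMs

for sockets in (4, 8):
    cores = sockets * CORES_PER_SOCKET
    memory_gb = sockets * DIMMS_PER_SOCKET * DIMM_SIZE_GB
    print(f"{sockets} sockets: {cores} cores, {memory_gb} GB memory")

# Expected output:
#   4 sockets: 32 cores, 512 GB memory
#   8 sockets: 64 cores, 1024 GB memory
```

Nothing exotic is needed to hit the 1 TB mark, in other words: double the socket count and the memory doubles with it, which is the whole point of a glueless interconnect.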

Durzan was as quiet as the grass growing when El Reg suggested, pretty strongly, that the company would be stupid not to launch a Nehalem EX box. Whatever Cisco's plans might be, the company ain't saying. But if Cisco likes profits, then bigger boxes better be part of the plan.

Given the history of chip launches and lags at both Intel and Advanced Micro Devices, Cisco would also be wise to have a set of Xeon blades and racks and another set of Opteron blades and racks. In the first quarter of 2010, AMD is going to leapfrog the Nehalem line and deliver the "Magny-Cours" Opterons and its G34 Opteron 6000 platform, packing eight or 12 cores into a socket along with lots of DDR3 memory capacity and bandwidth.

Cisco, as the newbie server maker, can't afford to pull a Dell and remain loyal to Intel only when it comes to x64 chips. Durzan had no comment whatsoever on the possibility of using Opterons in Cisco racks and blades. ®