Cisco pitches new world server order
The California School of Economics
If you had to sum up the sales pitch that Cisco Systems is polishing up as it prepares to deliver its "California" Unified Computing System, it would look something like this: California will save you money, even if we charge a premium for server capacity.
Cisco, of course, has not admitted that it will in fact charge a premium for its B Series server blades, but it is hard to imagine that they will not be expensive considering the amount of stuff Cisco has packed onto them and the fact that Cisco will be a low-volume provider of blade servers (at least for a couple of years).
Like some other venerable systems - I am thinking here of IBM's AS/400 minicomputer, the touchstone of integration - Cisco appears to be pitching the total cost of ownership (TCO) benefits of an integrated system as a means to get people to pay a premium. But because this is 2009 and not 1988, there is also a consolidation factor at play that actually will save customers money, according to Cisco's own analysis.
The consolidation of networks is, as you would expect, a big piece of the California system story. Here's the basic idea, graphically, from Cisco:
You put the server and network management right where you'd expect Cisco to put it: at the top of the rack, in the UCS 6100 fabric interconnect switch. This is a variant of Cisco's Nexus 5000 switch, which combines Ethernet network traffic with Fibre Channel over Ethernet (FCoE) to link out to storage. Add in some optimized virtualization (and the Nexus 1000V virtual switch, sometimes called VN-Link), and then you rip out a whole bunch of cables, switches, adapters, and server management co-processors. Cisco says this means half as much supporting infrastructure as current generations of blade servers require.
The money adds up pretty fast according to Cisco. In a presentation that the company is sharing with prospects, Cisco put together some numbers on a 320-blade setup, which is the maximum size that a single California system spans with a 40-port fabric interconnect (that's 40 chassis with a maximum of eight half-width blades). Take a gander:
Using "legacy" blade servers, the setup cost $21m, including servers, chassis, switches, and so forth (no storage, apparently, except possibly local storage on the blades). That blade server setup required 31 racks and 3,520 cables to get the blades connected to switches and storage, and it burned $800,000 on power and cooling over the course of three years.
Cisco says it can deliver a similar 320-server California system for $12m - a 43 per cent reduction in capital expenditures - and cut the power and cooling bill over three years by 19 per cent, to $650,000. The California system takes up only 12 racks of space and uses only 480 cables. That is an 86 per cent reduction in cabling, more than the reduction that commercial blade server makers say they can bring compared to rack servers and their external switches.
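For those inclined to check the arithmetic, Cisco's percentages hold up against the raw figures quoted above. A quick Python sanity check, using nothing beyond the dollar and cable counts already cited:

```python
# Sanity check of Cisco's 320-blade comparison, using the figures
# quoted in the article (all dollar amounts are Cisco's, not mine).
legacy_capex, ucs_capex = 21_000_000, 12_000_000
legacy_power, ucs_power = 800_000, 650_000   # three-year power and cooling
legacy_cables, ucs_cables = 3_520, 480

def reduction(before, after):
    """Percentage saved moving from `before` to `after`."""
    return 100 * (before - after) / before

print(f"Capex cut: {reduction(legacy_capex, ucs_capex):.0f}%")    # 43%
print(f"Power cut: {reduction(legacy_power, ucs_power):.0f}%")    # 19%
print(f"Cable cut: {reduction(legacy_cables, ucs_cables):.0f}%")  # 86%
```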
California versus Plain Vanilla
The savings that Cisco is touting as it compares the California system to plain vanilla rack servers are quite large. To illustrate the point, Cisco ginned up some numbers for a 1,000-server setup. The company did not provide the details of this rack server setup, except to say that it put all of the networking equipment at the end of the row. That's how it is generally done in data centers, and it requires a huge amount of cabling. Here's how the TCO numbers stack up:
With a 1,000-server setup, including networking for servers and storage, the TCO comes to $7.36m using normal rack servers and switches. Shifting to Cisco's Nexus 5000 unified fabric switches (which predate the California system) drops the TCO by 23.1 per cent, to $5.66m. Going all the way with Cisco and dropping in a complete California system (well, three full ones and then a fourth with 40 more blades in five chassis) yields a TCO of only $3.27m. That's a reduction of 55.5 per cent, and that is big money. The savings are greater than the cost of the California system itself.
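The same arithmetic works on the 1,000-server figures, with one wrinkle: on the rounded dollar figures as quoted, the full-California cut comes out at 55.6 per cent rather than the quoted 55.5, which suggests Cisco's internal numbers carry a few more decimal places. A sketch, using only the figures above:

```python
# Checking the 1,000-server TCO figures quoted in the article
# (dollar figures are Cisco's; the rounding wrinkle is noted above).
rack_tco, nexus_tco, ucs_tco = 7_360_000, 5_660_000, 3_270_000

nexus_cut = 100 * (rack_tco - nexus_tco) / rack_tco
ucs_cut = 100 * (rack_tco - ucs_tco) / rack_tco
print(f"Nexus 5000 fabric alone: {nexus_cut:.1f}% off TCO")  # 23.1%
print(f"Full California system:  {ucs_cut:.1f}% off TCO")

# The system count also works out: 1,000 servers at 320 blades per
# full California system (40 chassis x 8 half-width blades) means
# three full systems plus a partial one with 40 blades in 5 chassis.
full_systems, leftover_blades = divmod(1_000, 320)
print(full_systems, leftover_blades, leftover_blades // 8)  # 3 40 5
```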
Presumably, these numbers include fair comparisons for the hardware, power, and cooling. And if virtualization and system management software is included in the California box, then its cost has to be added to the rack server tally, too. As we all know, it is very expensive to buy VMware's stack of software for rack servers. The software often costs as much as, or more than, the physical iron it runs on. Based on the average price of $7,356 per server in this presentation, I would guess that Cisco is comparing 1U, two-socket servers with VMware's basic ESX Server stack.
But again, the company is not providing details about this comparison yet. If this is the case, then Cisco, which has a 2 per cent stake in VMware, is getting a great OEM deal on the bits of the future vSphere hypervisor and virtualization tools it is licensing, probably in return for help on creating the Nexus 1000V virtual switch.
It will be interesting to see the details behind these comparisons when California is generally available in June or perhaps July if it slips. More generally, Cisco is telling prospects that the machine can reduce capital expenditures by up to 20 per cent and operating expenses by up to 30 per cent, and then adding that it has already seen higher numbers among the few beta testers it has. These are clearly the kinds of numbers that bean counters and CEOs like to see.
And these are the numbers that Cisco needs to show if it wants to chase the $85bn market opportunity it calculates is out there for data center-related hardware, software, networking, and services sales worldwide, of which the company thinks about $20bn can be attacked with the California system. But CEOs and bean counters have to be careful enough to drill down into the marketing, see exactly what is being compared to what, and then do their own comparisons. Because even if they can save money using California gear, Cisco is asking them to spend money and abandon their current way of doing things. That's why Cisco is only going to be targeting new, virtualized, x64 applications at first. That's the low-hanging fruit. ®