Original URL: http://www.theregister.co.uk/2009/05/27/cisco_eats_ucs_dogfood/

Cisco touts self as Unified Computing System pioneer

Plans its own California data center

By Timothy Prickett Morgan

Posted in Data Centre, 27th May 2009 23:33 GMT

Cisco Systems has been talking up its Unified Computing System, code-named California, for so many months now that it is sometimes hard to remember this mega product is still not shipping.

Turns out Cisco is its own first beta test customer, and the company's IT brass was talking on Wednesday about its own deployment plans.

The forum was CiscoTV - I kid you not - hosted by John Manville, vice president of IT at Cisco's network and data center services organization, and his boss, Chris Hynes, senior director of IT for the NDCS unit.

In case you have been on holiday or in a coma for the past two months, Cisco's California setup offers converged networking and server operations. That means Fibre Channel over 10 Gigabit Ethernet for linking servers to storage and 10 Gigabit Ethernet for linking servers to each other and to the outside world, and blade-style x64 servers for running applications based on Intel's Nehalem family of processors.

Toss in some vSphere virtualization compliments of VMware, some Cisco switch software running virtualized on ESX Server 4.0 VMs, and some BMC Software system management tools, shake well, and you have a California system. You get the specs of the California systems here, a general economic argument Cisco later made for California systems here, Intel's related Nehalem EP processor launch here, and VMware's vSphere 4.0 launch here.

Manville said the IT department's goal is always to be Cisco's "first and best customer," and to run the networking and now server gear in production environments as quickly as possible, with the aim of shaking out bugs and giving input on needed features before a product ships to real customers.

Cisco had a UCS setup running in its Mountain View, California, site before the launch of the product in March, and is planning to roll out California boxes as part of its normal x64 server upgrade cycle during the next two and a half years. By the end of that time frame all of its x64 instances will be running on its own California gear.

Cisco would not say how many x64 servers it has spread across its 52 data centers and server rooms worldwide, comprising 14 production and customer-facing data centers and 38 product development data centers and server rooms. All told, these data centers take up 215,000 square feet of space and burn more than 20 megawatts of juice.

About 30 per cent of the servers in these data centers are virtualized, and Manville said that the goal was to get somewhere between 70 and 80 per cent of the servers virtualized over the next few years. Cisco did not talk about the number of servers it has - I asked and was ignored - and the company did not talk about what other gear it must have supporting key manufacturing and ERP systems aside from x64 iron.

Many to many

Cisco has 300 locations in 90 countries and over 65,000 employees who work from 400 buildings. It's a big company, and it is hard to believe that it doesn't have 10,000 to 20,000 servers, considering that it has 400 telepresence systems worldwide to run for its own operations, as well as the WebEx Web conferencing business unit to support.

By way of comparison, before it started its own data center consolidation effort in July 2005, Hewlett-Packard had 25,000 servers in 85 different data centers. After the compression of its data centers down to three mirrored facilities, HP cut the number of physical servers running its operations down to 14,000 and crammed them into a total of 342,000 square feet. HP doesn't have a WebEx business, but it does do application hosting.

Since March, the UCS setup in the Mountain View data center at Cisco has been used to run applications supporting John Chambers, the company's chairman and chief executive officer, as well as the Cisco news site. That machine also supports the legal and finance applications used by those respective departments. It is a small start, but that is what a beta test is.

Cisco already had Nexus 7000 switches - which have the converged Fibre Channel over Ethernet fabric - in place in the data center, so rather than rip and replace, it slid the California blade servers in without using the UCS variant of these switches.

The UCS box is supporting virtual desktops in the Mountain View office as well as various virtual machines to host those legal and financial applications. And it is even running an unvirtualized instance of an Oracle database that runs behind the financial and legal applications, demonstrating that California is not just for virtualized workloads.

Hynes said Cisco is also building a new 10,000 square foot data center that will sport UCS gear from the get-go. The exact configuration of the data center is yet to be determined, and Cisco did not divulge its location, but considering that it will be a one megawatt facility, it won't be that hard to find.

To demonstrate the benefits of its Unified Fabric networking and Unified Computing System approaches, Hynes walked through some comparisons of how this data center might be built using traditional rack servers from a few years ago, then converged networking, and then California systems.

Hidden costs

With a traditional design, Cisco figures it could get 135 racks of blade servers into the 10,000 square foot data center, and that the servers and storage in that data center would require 4,320 Fibre Channel cables and 2,160 copper cables for networking. Of the one megawatt of power allocated to the data center, 247 kilowatts would end up going to storage, 186 kilowatts would go to the data center network, and another 79 kilowatts would go to other networking equipment. That would leave only 488 kilowatts for the servers.
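A minimal sketch of that power budget, using only the kilowatt figures Cisco quoted (the breakdown is Cisco's; the script is just illustrative):

```python
# Back-of-envelope power budget for the traditional design,
# using the kilowatt figures Cisco quoted (illustrative only).
total_kw = 1000          # one megawatt allocated to the data center
storage_kw = 247         # storage gear
dc_network_kw = 186      # data center network
other_network_kw = 79    # other networking equipment

servers_kw = total_kw - storage_kw - dc_network_kw - other_network_kw
print(f"Left for servers: {servers_kw} kW")  # prints 488 kW
```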

By moving to Unified Fabric switches, Cisco can cut the rack count to 72, cut Fibre Channel links down to 1,008, and cut Ethernet links down to 300. When you take into account the power savings from the unified switches, that would leave 634 kilowatts available for the servers - about 30 per cent more than with traditional cabling and switching.

You may laugh at the cabling savings, but on such a project, lashing the blade servers to their switches and storage - including cabling, patch cords, and labor - would run to $2.7m with the traditional approach, while the simplified Unified Fabric approach would cost only $1.6m - a 40 per cent savings.
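A quick back-of-the-envelope check of those two comparisons, again using only the numbers Cisco presented:

```python
# Traditional versus Unified Fabric, per Cisco's figures (illustrative only).
traditional_server_kw = 488
unified_server_kw = 634
power_gain = (unified_server_kw - traditional_server_kw) / traditional_server_kw
print(f"Extra power budget for servers: {power_gain:.0%}")  # ~30 per cent

traditional_cabling = 2.7e6   # cabling, patch cords, and labor, in dollars
unified_cabling = 1.6e6
savings = 1 - unified_cabling / traditional_cabling
print(f"Cabling savings: {savings:.0%}")  # ~41 per cent, which Cisco rounds to 40
```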

Now, add in the California blades. How many virtual machines can Cisco cram into this 10,000 square foot data center? With a traditional blade server approach, Cisco reckons it could cram 720 servers into this space and it could get between 7,000 and 7,500 virtual machines on those blade servers.

Cisco said to call it a 10 to one virtual compression ratio, so make it 7,200 machines. With Unified Fabric and the power savings, Cisco believes it can get somewhere between 930 and 1,080 servers into that data center, which would yield 9,300 to 10,800 VMs in the space.
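The VM counts follow straight from the server counts at that assumed 10-to-1 ratio; a minimal sketch:

```python
# Server counts times Cisco's assumed 10:1 VM consolidation ratio
# (the figures are Cisco's; this helper is just a sketch).
VMS_PER_SERVER = 10

def vm_capacity(servers: int) -> int:
    """VMs a given number of physical servers yields at 10:1."""
    return servers * VMS_PER_SERVER

print(vm_capacity(720))                      # traditional blades: 7,200 VMs
print(vm_capacity(930), vm_capacity(1080))   # Unified Fabric: 9,300 and 10,800 VMs
```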

Now, shift to the California blades. The increased efficiency of the blade servers made by Cisco - and the Nehalem processors from Intel - would allow somewhere between 1,200 and 1,400 blade servers to be packed into that new data center, which yields 12,000 to 14,000 VMs.

And when the memory expansion ASIC for the California systems is ready later this year, allowing more memory per blade than standard Nehalems allow - I have heard 384GB instead of the 192GB on regular Nehalems - then the number of VMs will double, to between 24,000 and 28,000.
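Putting the California projections together - the 10-to-1 consolidation ratio and the doubling from the memory-expansion ASIC are Cisco's assumptions, not measured results - the arithmetic looks like this:

```python
# California (UCS) blade capacity in the new data center, per Cisco's
# projections; the memory-expansion doubling is Cisco's claim, not a measurement.
VMS_PER_SERVER = 10
ucs_blades_low, ucs_blades_high = 1200, 1400

vms_low = ucs_blades_low * VMS_PER_SERVER     # 12,000
vms_high = ucs_blades_high * VMS_PER_SERVER   # 14,000
print(f"UCS blades: {vms_low:,} to {vms_high:,} VMs")

# With the expanded-memory blades (reportedly 384GB versus 192GB per blade):
print(f"With expanded memory: {vms_low * 2:,} to {vms_high * 2:,} VMs")  # 24,000 to 28,000
```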

Numbers game

Look at the virtualization effect: the space that holds 720 unvirtualized, bare metal blade servers translates into as many as 28,000 virtual machines, if Cisco's math works out in the real world. We'll be watching to see how this turns out, just as Cisco's potential customers and its new competitors will be.

While Cisco was not specific about its plans for rolling out UCS throughout its other data centers, Manville did say UCS will be the backbone of Cisco's internal cloud computing effort, dubbed Cities, which is short for Cisco IT Infrastructure Elastic Services.

Manville said that the company's goal was to deploy as many applications as made sense on an internal cloud and then create a hybrid cloud that mixes internal Cisco resources and external public cloud resources.

And you can bet Cisco is hoping, with the kinds of numbers it's talking about, that the cloud providers on the other side of its firewall and linked to its own Cities cloud will be using Cisco's UCS gear.

Bootnote: Cisco contacted El Reg after reading this story and said that it has 13,866 physical servers across its data centers. It did not provide a breakdown of the servers by type, but it is fair to assume that most of them are x86 and x64 machines. ®