Cisco rolls up its own OpenStack distro

Cloud control freaking on UCS rackers, Nexus switches

Cisco's software engineers have rolled up a distro of the open source OpenStack cloud controller for its "California" Unified Computing System blade and rack servers and related Nexus converged switches.

Cisco Systems might be very tight with storage juggernaut EMC and its VMware server virtualization minion, but that doesn't mean that the networking giant and server wannabe can ignore other options in the market or pass up an opportunity to make a little dough itself – hence the new distro.

Announced in a blog post ahead of the OpenStack Design Summit, which is going on this week in San Diego, the distro is based on the current "Folsom" edition of the OpenStack code. Essex, which came out in April, was arguably the first stable release of OpenStack that could be put into production deployments, but the Folsom release, which came out on time three weeks ago ahead of the design summit, adds a number of key features, such as support for the Quantum virtual networking service, which lets OpenStack talk to virtual and physical switches, and the Cinder block storage service, a complement to the Swift object store that is better suited to databases and certain other applications.

The software is formally known as the Cisco Edition of OpenStack, which is not abbreviated CEOS but which might be channeling the message that the networking giant is thinking about having co-CEOs in the wake of John Chambers' retirement a few years hence.

As you can see from the release notes, this particular stack is designed to install on top of server nodes running Canonical's latest Ubuntu Server 12.04 LTS and to use the KVM hypervisor championed by Red Hat and Canonical. Cisco says it is looking at wrapping up versions of the OpenStack distro to run on Red Hat Enterprise Linux or its CentOS clone.

The deployment on server nodes is done by Puppet, from Puppet Labs, which also packages up and deploys Nagios for system monitoring, Ganglia for cluster monitoring, and HAProxy for load balancing.

Cisco also says that the active-active clustering setup it created for the Essex release will eventually be ported to the Folsom distro to protect the key service nodes in an OpenStack cluster, although when this will be available is not clear. This high-availability functionality comes from HAProxy, Keepalived, and Galera, all open source projects that have been made to work in concert.
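To give a rough sense of what highly available key services look like from the outside, here is a minimal sketch, assuming a hypothetical virtual IP of 192.0.2.10 fronting the control nodes, that probes the standard Keystone, Nova, and Glance API ports a load balancer such as HAProxy would be answering on; the address and service list are illustrative, not lifted from Cisco's reference architecture.

```python
# Minimal liveness probe for OpenStack API endpoints sitting behind a
# load-balanced virtual IP. The VIP address and the service list below are
# illustrative assumptions, not values from Cisco's reference architecture.
import socket

VIP = "192.0.2.10"       # hypothetical virtual IP fronting the control nodes
SERVICES = {
    "keystone": 5000,    # identity API
    "nova": 8774,        # compute API
    "glance": 9292,      # image API
}

def is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in sorted(SERVICES.items()):
        state = "up" if is_open(VIP, port) else "down"
        print("%-10s %s:%d %s" % (name, VIP, port, state))
```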

Cisco used the launch of its own OpenStack distro as an excuse to remind everyone that it has been contributing to the OpenStack project for the past year and a half, and said in the blog post that it "merged its own NaaS proposal with other vendor and provider blueprints to create the Quantum component of OpenStack." Nicira, the virtual networking upstart that VMware ate for $1.26bn before it even came out of stealth mode, usually gets most of the credit for the work done on the Quantum virtual networking features of OpenStack.

But Cisco wants people to understand that it has supplied the plug-ins that let Nexus switches talk to Quantum and therefore take their marching orders from the OpenStack control freak. This plug-in supports L2 segmentation over virtual LANs (VLANs) and works with the Open vSwitch virtual switch from Nicira/VMware, and it has a sub-plug-in (Cisco's words, not mine) that lets Quantum boss around Nexus switches.
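For a flavor of what sits above those plug-ins, here is a minimal sketch against the Quantum v2 API of the Folsom era, using the stock python-quantumclient with made-up credentials and endpoint; whichever plug-in the operator has loaded (Open vSwitch, Cisco's, or another) is what turns the request into VLAN segments on the wire.

```python
# Minimal sketch against the Quantum v2 API as it stood in the Folsom era.
# The credentials and Keystone endpoint are made up for illustration; the
# plug-in configured on the Quantum server (Open vSwitch, Cisco's Nexus
# plug-in, etc.) decides how the network is realized on the switches.
from quantumclient.v2_0 import client

quantum = client.Client(
    username="admin",
    password="secret",                       # illustrative credentials
    tenant_name="demo",
    auth_url="http://192.0.2.10:5000/v2.0",  # hypothetical Keystone endpoint
)

# Ask Quantum for an L2 network; the plug-in maps it to a VLAN segment.
net = quantum.create_network({"network": {"name": "web-tier"}})
net_id = net["network"]["id"]

# Attach an IPv4 subnet so Nova can hand out addresses on this network.
quantum.create_subnet({
    "subnet": {
        "network_id": net_id,
        "ip_version": 4,
        "cidr": "10.0.10.0/24",
    }
})

print("created network", net_id)
```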

Cisco also said that it worked on other parts of the Linux network stack as well as on the Horizon dashboard and Nova compute cluster to make virtual networking work better.

It is not clear what tech support Cisco is providing for this distribution, but presumably there is some and it has a price attached to it. You can run the Cisco Edition of OpenStack on any servers or switches if you want, but it is certified to run on Cisco UCS C2XX M3 series rack servers and Nexus 5500 series converged switches.

It's interesting that Cisco's B-Series blade servers are not the chosen platform, and that has to do with the storage limitations of the blades.

The reference architecture that the OpenStack distro is tuned for uses two-socket C-Series machines for both the compute and storage nodes in the OpenStack cluster. The compute nodes use eight-core Xeon E5-2650 processors, which spin at 2GHz and which have the right balance of performance, price, and VM scalability to make them suitable for clouds.

The compute nodes are configured with 128GB of main memory, eight 600GB 10K RPM SAS drives, one MegaRAID 9266i disk controller, and a single Cisco virtual interface card (VIC) that provides a two-port 10GbE uplink to the Nexus 5500 switch. The VIC also has the VM-FEX hypervisor bypass feature, which lets VMs talk directly to the card and thereby avoid the hypervisor overhead.

It looks like this is the C22 M3 server, which is a 1U rack server, although Cisco doesn't say that.

The storage nodes in the Cisco reference architecture use cheaper four-core E5-2609 chips that run at 2.4GHz, and the machines only have 32GB of memory with the same RAID and network controller. The difference is that this node is based on the 2U C24 M3 machine, which has room for 24 SAS or SATA drives lined up vertically across the front of the chassis. In this case, the storage nodes in the OpenStack cluster have two dozen 1TB SATA disks spinning at a mere 7200 RPM.

The suggested rack configuration cooked up by Cisco engineers has two Nexus 5548UP switches, 15 compute nodes, three nodes that can switch-hit as Nova compute control or compute nodes, three storage proxy nodes, three Cinder block storage nodes, and five Swift object storage nodes. That leaves 2U of rack space for expansion. ®
