Original URL: https://www.theregister.com/2013/02/04/cisco_nexus_6000_switches_sdn_one/

Cisco revs up Nexus switches to 40GE with fresh ASICs

Other tweaks, SDN promises, and a VPN tunnel for control freaking public clouds

By Timothy Prickett Morgan

Posted in Networks, 4th February 2013 18:24 GMT

Cisco has vowed to push 40GE switches into the mainstream while also improving its 10GE/40GE Nexus boxes.

The transition to 10GE networking is well under way, and the convergence of server and storage traffic onto switches continues apace, albeit at a slower rate than Cisco Systems had hoped.

Despite this, the networking juggernaut hopes 40GE will be an easy sell because it has found itself competing against 40Gb/sec and 56Gb/sec InfiniBand switches from Mellanox Technologies – and now from Intel as well, thanks to its acquisition of QLogic's InfiniBand switch and adapter business.

But perhaps more importantly, Cisco wants to get on the front-end of 40GE rollouts because it likes to brag about jumping on inflection points in markets. And the move to Fibre Channel over Ethernet (FCoE) and boosting the speed of its homegrown switch ASICs to 40GE speeds allows Cisco to brag about two things at the same time.

The new Nexus 6000 series of switches, launched on Monday, is not Cisco's first foray into 40GE switching gear. Indeed, last February Cisco put a 40GE module in its end-of-row Nexus 7000 switches, with optional 100GE uplinks if you needed seriously fat pipes. Many of its 10GE switches have four uplink ports running at 40GE speeds – such as the Nexus 3064 announced in March 2011 and its follow-on, the Nexus 3064-X, from February 2012.

Cisco really took latency down a few notches last September with the Nexus 3548 switch aimed at high-frequency traders, but this 48-port fixed switch only ran at 10GE speeds. It did, however, have a Cisco-designed ASIC code-named "Monticello" that had all kinds of neat tricks to lower port-to-port latencies and to help with data streaming workloads, plus a whole new set of optimizations that Cisco calls Algo Boost.

The multi-speed transmission inside the ASIC allows for normal, warp, and warp SPAN modes that make use of the 960Gb/sec of aggregate switching bandwidth and 720 million packets per second forwarding rate of the Monticello ASIC in different ways.

In normal mode, you can get 250 nanosecond latencies on those port hops. If you switch to warp mode, which consolidates forwarding onto the Monticello ASIC from other chips, the number of MAC addresses, IPv4 unicast and multicast routes, and IPv4 hosts the switch can handle drops by about 20 per cent or so, but latency comes down to 190 nanoseconds. And in warp switch port analyzer (SPAN) mode, where one port is feeding data to multiple ports (think of multicasting inbound market data to different systems inside of a hedge fund), the Monticello ASIC can do this with latencies as low as 50 nanoseconds. The only trouble with the Nexus 3548 is that it is still using 10GE ports.
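
For a sense of scale, here's a quick bit of back-of-envelope Python comparing those port-hop latencies with the time it takes simply to clock a frame onto a 10GE wire. The mode latencies are Cisco's figures from above; the 64-byte and 1,500-byte frame sizes are standard Ethernet values used purely for illustration.

```python
# Back-of-envelope sketch putting the Monticello latency modes in context.
LINK_SPEED_BPS = 10e9                      # one 10GE port

def serialization_ns(frame_bytes, link_bps=LINK_SPEED_BPS):
    """Time to clock a frame onto the wire, in nanoseconds."""
    return frame_bytes * 8 / link_bps * 1e9

mode_latency_ns = {"normal": 250, "warp": 190, "warp SPAN": 50}

for frame in (64, 1500):
    print(f"{frame}-byte frame serializes in {serialization_ns(frame):.1f} ns")
for mode, ns in mode_latency_ns.items():
    print(f"{mode} mode port hop: {ns} ns")
# A 250 ns hop is roughly five times the 51.2 ns it takes to serialize a
# minimum-size frame, but only about a fifth of the 1,200 ns needed to clock
# out a full 1,500-byte frame.
```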

Cisco brags about all of the networking inflections in which it has been ahead

Enter the new Nexus 6000 series: a revamped set of top-of-rack switches based on derivatives of the Monticello ASIC and tuned for 10GE and 40GE workloads, rather than the Gigabit and 10GE workloads that their predecessors, the Nexus 5000 series, were designed to run.

El Reg asked Craig Huitema, vice president of marketing for data center and cloud networking at Cisco, to give the code names and properties of the derivatives of the Monticello ASICs, but the company is a bit skittish about giving out too much competitive information about the secret sauce in the chips. (Annoying, isn't it?) All that Huitema would say is that there is a mix of chips used inside of the new Nexus 6001 and 6004 switches that gives them their oomph.

The Nexus 6001 has 48 ports running at 10GE speeds plus four 40GE uplinks that can be split into 16 more 10GE ports if you want to go that route. It comes in a 1U rack-mounted chassis, and with 1.28Tb/sec of switching bandwidth, it seems to be sporting a peppier version of the Monticello ASIC.
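
That 1.28Tb/sec figure squares with the port count if you assume Cisco is counting both directions of every port, as is the usual convention for quoting switching capacity – a quick sanity check on our part, not anything the company has confirmed:

```python
# Sanity check of the Nexus 6001's quoted 1.28Tb/sec switching bandwidth,
# assuming full-duplex accounting (both directions of every port counted).
ports_10ge = 48
uplinks_40ge = 4

one_way_gbps = ports_10ge * 10 + uplinks_40ge * 40   # 640 Gb/sec of front-panel capacity
full_duplex_gbps = one_way_gbps * 2                  # 1,280 Gb/sec, i.e. 1.28 Tb/sec

print(f"{one_way_gbps} Gb/sec one way, {full_duplex_gbps / 1000:.2f} Tb/sec full duplex")
# That is a third more than the 960 Gb/sec of the original Monticello ASIC in the
# Nexus 3548, hence the guess that this is a peppier variant of the chip.
```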

The Nexus 6001 switch from Cisco

The Nexus 6001 switch also has a slightly different buffer design, with 25MB of packet buffer memory shared by every dozen 10GE ports and 25MB used for every three 40GE ports. 16MB is used for inbound packets and 9MB is used for outbound data on those port groupings.

These large buffers allow for the support of up to 32,000 multicast routes, and help the Nexus 6001 cope better with bursty traffic, according to Cisco. It can handle up to 256,000 MAC addresses, up to 4,000 VLANs, and up to 4,000 access control lists. The Nexus 6001 has a latency of approximately 1 microsecond on port hops using cut-through forwarding, which Cisco says gives predictable latency regardless of packet sizes, traffic patterns, or features active on the 10GE and 40GE ports.
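
Here's a rough sketch of what those shared buffers and the cut-through claim buy you, assuming decimal megabytes and a single port filling or draining at line rate – real burst behaviour depends on how many ports in a group are congested at once:

```python
# Hedged back-of-envelope on the Nexus 6001 buffer and cut-through figures.
INGRESS_BUFFER_MB = 16      # ingress slice of the 25MB shared per port group

def fill_time_ms(buffer_mb, port_gbps):
    """How long one port at line rate would take to fill the ingress slice."""
    return buffer_mb * 8 / port_gbps             # (MB -> Mb) / (Gb/sec) = milliseconds

def store_and_forward_penalty_us(frame_bytes, port_gbps):
    """Extra latency store-and-forward would add: the full frame serialization time."""
    return frame_bytes * 8 / (port_gbps * 1000)  # microseconds

print(f"One 10GE port fills 16MB in {fill_time_ms(INGRESS_BUFFER_MB, 10):.1f} ms")
print(f"One 40GE port fills 16MB in {fill_time_ms(INGRESS_BUFFER_MB, 40):.1f} ms")
print(f"1,500-byte frame penalty at 10GE: {store_and_forward_penalty_us(1500, 10):.2f} us")
print(f"1,500-byte frame penalty at 40GE: {store_and_forward_penalty_us(1500, 40):.2f} us")
# Cut-through forwarding skips that frame-size-dependent penalty, which is why Cisco
# can quote roughly 1 microsecond per hop regardless of packet size.
```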

Like the Nexus 5000s, the Nexus 6000s can be used with the Nexus 2000 fabric extenders to stretch the network out into servers and their hypervisors over the Layer 2 transport, and in the case of the Nexus 6001, you can link to up to 24 FEX units per switch.

The Nexus 6001 switch will ship sometime in the first half of this year, and Cisco is not providing pricing on it. Presumably not all of those Algo Boost features that are on the Nexus 3548 switch from last year are turned on in the Nexus 6001, and similarly the latencies, while good, are not as low. And so we will guess that the Nexus 6001 will have a lower price tag than the Nexus 3548, which cost $41,000 – or between $640 and $854 per 10GE port, depending on how you carve up those 40GE uplinks.
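
For what it's worth, here is the per-port arithmetic behind that Nexus 3548 comparison. The $640 end of the range only works out if you count 64 10GE ports – the 48 fixed ports plus 40GE uplinks carved into four 10GE ports apiece – which is our reading of the figures, not something Cisco spells out:

```python
# Per-port price maths for the Nexus 3548 comparison above.
list_price = 41_000

fixed_ports = 48                 # 48 fixed 10GE ports
split_ports = 48 + 4 * 4         # assumption: four 40GE uplinks split four ways each

print(f"${list_price / fixed_ports:,.2f} per port across 48 ports")   # ~$854
print(f"${list_price / split_ports:,.2f} per port across 64 ports")   # ~$640
```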

The real new switch in the Cisco lineup, and one that has some overlap with the Nexus 5000 and Nexus 6001 top-of-rack units and the Nexus 7000 end-of-row aggregation and core switches, is the shiny new Nexus 6004. This is a big ol' 4U switch that can provide 96 ports running at 40GE speeds or 384 ports running at 10GE speeds across Layers 2 and 3 of the network stack.

The big bad Nexus 6004 40GE fixed-port switch

As you can see, the Nexus 6004 has eight columns of 40GE ports for a total of 24 or 48 ports on the left side of the chassis, and then another four modules, each with a dozen more 40GE ports, to expand that out by another 48 ports in baby steps.

Why doesn't Cisco just make the machine with modular port cards? Well, for one thing, it would not be considered a fixed-port switch, but a modular one, and for another, you might not buy a base machine with 48 ports installed from the get-go. Cisco has to get a base amount of money from the sale for the numbers to work.

If you want to use this Nexus 6004 beast as a 10GE switch, you get the cable splitters and voilà, you turn each 40GE port into four 10GE ports. This way, if you are just moving to 10GE now, you have a switch that, with a change of cabling, can run at 40GE speeds when you need them.

The Nexus 6004 has 7.68Tb/sec of switching bandwidth, which suggests that there are either eight of the Monticello chips running at the same speed as in the Nexus 3548 or six running at the speed of the Monticello chip used in the Nexus 6001. The bandwidth coming out of the Nexus 6004 is six times greater than that of the Nexus 6001, so it looks like there are six Monticello chips running at the higher speed in the Nexus 6004.
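
The chip-count guess is straightforward division on the quoted bandwidth figures – with the caveat that Cisco has confirmed neither the count nor the clock speeds:

```python
# Working out the possible Monticello chip counts in the Nexus 6004 from the
# switching bandwidths quoted in the article.
monticello_3548_gbps = 960       # original Monticello, per the Nexus 3548
monticello_6001_gbps = 1280      # faster variant implied by the Nexus 6001
nexus_6004_gbps = 7680           # 7.68 Tb/sec

print(nexus_6004_gbps / monticello_3548_gbps)   # 8.0 -> eight of the slower chips, or
print(nexus_6004_gbps / monticello_6001_gbps)   # 6.0 -> six of the faster ones
```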

The buffer configurations on the Nexus 6004 are the same as on the Nexus 6001 – 25MB for every three 40GE ports – and the port-to-port latency is the same 1 microsecond or so as on the Nexus 6001. Basically, the Nexus 6004 should have been called the Nexus 6006 because it is really six switches crammed into one, but was called the 6004 because it takes up four times the space of the 6001.

None of that is particularly important. What is, says Huitema, is the fact that with Nexus 2200 fabric extenders and virtual interface cards, a fully loaded Nexus 6004 switch can handle up to 75,000 virtual machines flitting around on a cluster of servers. And with 256,000 MAC addresses and 8,000 multicast routes through the switch, that is three times the port density, twice the MAC addresses, and four times the multicast table depth of competitive switches from Juniper Networks, Arista Networks, and Dell/Force 10.

How long these advantages hold remains to be seen. Probably not long, knowing the switching racket.

The Nexus 6004 switch is available now. You can buy a base machine with 24 ports for $90,000, which works out to $3,750 per 40GE port or $938 per 10GE port if you use line splitters. A base machine with 48 ports costs $195,000, or $4,063 per 40GE port, and that price is a little higher for reasons that Cisco did not explain.

The 12-port line card expansion module costs $40,000, or $3,333 per 40GE port. Fully loaded, a 96-port Nexus 6004 would cost $355,000, or just under $3,700 per 40GE port, which works out to $925 per 10GE port if you use the cable splitters. It's not clear why you can't buy a 24-port version and put six of the line card expanders in there and save yourself $25,000, but if it works like that, we can't think of a good reason to buy the 48-port version at all.
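
Here's the Nexus 6004 price-per-port maths in one place, using the list prices quoted above. The 24-ports-plus-six-expanders configuration assumes the chassis would actually accept six modules alongside a 24-port base, which Cisco has not confirmed:

```python
# Nexus 6004 pricing arithmetic from the list prices in the article.
base_24 = 90_000          # 24 x 40GE base machine
base_48 = 195_000         # 48 x 40GE base machine
expander_12 = 40_000      # 12-port 40GE line card expansion module

print(base_24 / 24, base_24 / 96)              # $3,750 per 40GE port, ~$938 per 10GE port
print(base_48 / 48)                            # ~$4,063 per 40GE port
full_from_48 = base_48 + 4 * expander_12       # $355,000 for 96 ports
full_from_24 = base_24 + 6 * expander_12       # $330,000 for 96 ports, if the chassis allows it
print(full_from_48 / 96, full_from_48 / 384)   # ~$3,698 per 40GE port, ~$925 per 10GE port
print(full_from_48 - full_from_24)             # the $25,000 in question
```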

Other stuff gets some tweaks

Cisco also made a bunch of related announcements on Monday in the Nexus product line.

First, there will be a 40GE expansion module for the Nexus 5500 switches, which is expected to be available sometime in the first half of this year. Pricing will be available when it comes out.

Cisco also announced a new Nexus 2248PQ 10GE fabric extender with 40GE uplinks, which is available now and costs $12,000. The feeds and speeds of this box were not available at press time.

As promised, Cisco is launching its first "services blade" for the Nexus 7000 core and aggregation switch. This particular one is the network analysis module, or NAM, and as the name suggests it will bring application awareness and performance analysis for Layers 4 through 7 in the network stack to the Nexus 7000 switches.

The NAM will be available as a plug-in card called the NAM-NX1 sometime in the first half of this year, and will also be available in a format compatible with VMware's ESXi hypervisor and deployed in a virtual machine as the vNAM. Pricing for the hard and soft versions of the NAM will not be given out until it ships. vNAM will enter customer trials in the second quarter.

Another new feature is called Nexus 1000V InterCloud – but before you jump to conclusions (like I did), this is not a funky version of the Nexus 1000V virtual switch created by Cisco that runs out on public clouds and is somehow magically synchronized with Nexus 1000V virtual switches running in your data center. Such a thing might be useful, in theory, but the Nexus 1000V is not the preferred virtual switch out there on the public clouds.

Open vSwitch and the embedded switches inside of ESXi and Hyper-V are popular, when virty switches are required, and heaven only knows what Amazon is using in conjunction with its homegrown variant of the Xen hypervisor that underpins its EC2 compute cloud.

The Nexus 1000V InterCloud tunnel out to private clouds

What the Nexus 1000V InterCloud program is, however, is a hypervisor-agnostic virtual private networking tunnel. It works with Cisco's physical Nexus switches, its Nexus 1000V, and other virtual switches running with hypervisors on servers, and it links out to Layer 2 virtual private networks running on public clouds to bring them into the same management domain as the internal cloudy networks.

This InterCloud tunnel has what Huitema called enterprise-grade cryptography and firewalling within the cloud and over the pipe out to the cloud from your data center. So all of the VLANs and policies that your network admins have set up for internal networks can now be extended out to the networks on the public cloud.

A tool called the Virtual Network Management Center, or VNMC, gives that single pane of management for the internal and external networks. You use Cisco's Virtual Security Gateway to encrypt the tunnel traffic and Cisco's Adaptive Security Appliance to give it firewall services.

The InterCloud VPN tunnel and VNMC software will be available – wait for it – sometime in the first half of 2013, and pricing will be announced at availability.

Block diagram of the Cisco ONE software-defined networking strategy

On the software-defined networking front, Cisco is cooking up a strategy and a set of products that it introduced back in June 2012 called the Open Network Environment, or ONE. It provided some milestones for the ONE effort along with the new Nexus 6000 switches.

The ONE architecture includes supporting OpenFlow and other unnamed SDN standards in the control and forwarding planes of switches and routers, but it is more than that. It also includes management and orchestration services, network services above OpenFlow and transport services below OpenFlow, all available through a common set of APIs that will be exposed in all switches and routers running Cisco's IOS, IOS-XR, and NX-OS operating systems.
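
For the curious, the OpenFlow part of that picture boils down to a standard wire protocol spoken between a controller and a switch's forwarding plane over TCP. Here is a minimal, generic sketch of the OpenFlow 1.0 hello-and-features handshake a controller performs – to be clear, this is not Cisco's ONE Controller or onePK code, and 6633 is simply the conventional OpenFlow controller port:

```python
# Minimal sketch of the OpenFlow 1.0 controller-side handshake (generic, not Cisco's).
import socket
import struct

OFP_VERSION = 0x01                     # OpenFlow 1.0
OFPT_HELLO = 0                         # message type codes from the 1.0 spec
OFPT_FEATURES_REQUEST = 5
OFP_HEADER = struct.Struct("!BBHI")    # version, type, length, xid (8 bytes)

def ofp_message(msg_type, xid, body=b""):
    """Build an OpenFlow message: 8-byte header plus optional body."""
    return OFP_HEADER.pack(OFP_VERSION, msg_type, OFP_HEADER.size + len(body), xid) + body

def recv_exact(conn, n):
    """Read exactly n bytes from the socket."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("switch hung up")
        data += chunk
    return data

def handshake_one_switch(listen_port=6633):
    """Accept one switch connection and run the HELLO / FEATURES_REQUEST exchange."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen(1)
    conn, _addr = srv.accept()
    # The switch opens with HELLO; the controller answers with its own HELLO,
    # then asks for the datapath's features (ports, tables, capabilities).
    _version, msg_type, length, _xid = OFP_HEADER.unpack(recv_exact(conn, 8))
    recv_exact(conn, length - 8)                     # discard any HELLO body
    assert msg_type == OFPT_HELLO
    conn.sendall(ofp_message(OFPT_HELLO, xid=1))
    conn.sendall(ofp_message(OFPT_FEATURES_REQUEST, xid=2))
    reply_header = recv_exact(conn, 8)               # header of the FEATURES_REPLY
    conn.close()
    srv.close()
    return reply_header
```

Everything else a controller does over OpenFlow – installing flow entries, pulling port statistics – rides over that same session; the management and orchestration layers Cisco describes sit above it.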

The ONE development kit, called onePK (with the capitalization all messed up on purpose by Cisco), will expose these APIs to both network admins and network application developers.

Here's how the ONE effort is coming along. The ONE Controller, which will implement OpenFlow controller protocols, and the Nexus 3000 switches with OpenFlow support will be available – again, wait for it – during the first half of 2013. Catalyst 3000 and 6500 switches, Nexus 7000 switches, and ASR 9000 routers are in customer trials, or soon will be, during the first half of this year. The onePK development tools are being tested on ISR G2 and ASR 1000 routers and Nexus 3000 switches now and will be available in the first half, and they will enter testing on ASR 9000 routers and Nexus 7000 switches in the same timeframe.

In addition, Cisco is working on Nexus 1000V virtual switch support for the Hyper-V hypervisor from Microsoft, due during the first half of this year, and says that support for the KVM hypervisor from Red Hat is in proof-of-concept with no publicly available timeline for commercial delivery. Support for the VXLAN gateway method of uniting public and private clouds, or multiple distinct data centers, is also due in the first half for the Nexus 1000V virtual switch. ®