Cisco revs up top-end Nexus switches with F3 chips to 100Gb/sec
More throughput and less power for end-of-row aggregators
Cisco has, like other switch makers, been gradually refreshing its product line to boost bandwidth from 10Gb/sec up to 40Gb/sec and 100Gb/sec speeds. At the Cisco Live extravaganza today in Orlando, Florida, the networking giant trotted out a new top-end Nexus 7700 lineup that sports new ASICs and offers a more compact design than the current Nexus 7000s, which only sport 10Gb/sec ports.
Well, at least until today, that is. In addition to rolling out new modular, end-of-row Nexus 7700 switches, Cisco is also creating a new set of line cards for the Nexus 7000s to give them fatter pipes based on the same F3 series ASICs.
The company is also touting a new cloud-friendly set of tools that will help automate the provisioning and management of network resources, all under the marketing umbrella of Application Centric Infrastructure.
First, let's look at the shiny new Nexus 7700 iron. There are two new models of these modular monsters. The first is the Nexus 7710, a ten-slot chassis with eight slots in the front for I/O modules plus two half-width supervisor slots in the center. (You can't see the line cards or the supervisor cards behind the closed doors in the picture below.)
The Nexus 7710 has an aggregate of 42Tb/sec of switching capacity across those eight line cards, and you can divvy it up in a number of ways. You can have 96 ports running at 100Gb/sec speeds, or 192 ports at 40Gb/sec. And if, for some reason, you want to load it up with 10Gb/sec line cards based on the earlier-generation F2 ASICs used in the original Nexus 7000 modular switches, you can cram eight I/O modules in there for a total of 384 ports in the 14U rack-mounted chassis.
The Nexus 7700 modular switches pack more oomph in a smaller package
The Nexus 7718 is the bigger, badder modular switch, and it comes in a 26U enclosure that sports sixteen I/O module slots plus the two supervisor modules for managing the switch. With double the I/O modules, the Nexus 7718 can cram twice as many ports into slightly less than twice the space used by the Nexus 7710. That's a whopping 768 10Gb/sec ports, 384 40Gb/sec ports, and 192 100Gb/sec ports, to save you doing the math.
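For the skeptical, the port counts above fall straight out of the per-card densities Cisco quotes. A quick sketch (all figures from this article; nothing here is an official Cisco tool):

```python
# Back-of-the-envelope port math for the new Nexus 7700 chassis,
# using the per-card densities quoted in the article:
# 48 x 10Gb/sec (F2e), 24 x 40Gb/sec (F3), 12 x 100Gb/sec (F3).
IO_SLOTS = {"Nexus 7710": 8, "Nexus 7718": 16}
PORTS_PER_CARD = {"10Gb/sec": 48, "40Gb/sec": 24, "100Gb/sec": 12}

for chassis, slots in IO_SLOTS.items():
    counts = {speed: slots * ports for speed, ports in PORTS_PER_CARD.items()}
    print(chassis, counts)
# Nexus 7710 {'10Gb/sec': 384, '40Gb/sec': 192, '100Gb/sec': 96}
# Nexus 7718 {'10Gb/sec': 768, '40Gb/sec': 384, '100Gb/sec': 192}
```

Doubling the I/O slots from eight to sixteen doubles every port count, which is exactly the 768/384/192 split quoted for the 7718.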
The new Nexus boxes sport front-to-back airflow, which has been a customer requirement. Data centers no longer have to mount the switches backwards in the racks, or cool a switch from the hot aisle and dump its exhaust into the cool aisles meant to keep servers and storage from overheating.
Shashi Kiran, senior director of data center, cloud and open networking at Cisco, tells El Reg that the new Nexus 7700 series of modular switches have about 2.5 times the throughput of the Nexus 7000 machines they replace and take up about 33 per cent less rack space. Significantly, the new F3 ASICs and chassis design also burn 60 per cent less power.
These are some big numbers, and ones that the more than 8,000 customers who have bought more than 40,000 Nexus 7000 enclosures are going to take a hard look at.
The Nexus 7700 F3 24-port 40Gb/sec line card
The Nexus 7710 and 7718 modular switches will be available in July, and will run the NX-OS 6.2 network operating system. They will initially ship with a 48-port 10Gb/sec F2e I/O module. The Nexus 7710 enclosure costs $30,000 buck naked, with the supervisor modules costing $40,000 a pop; a power supply (one per line card) costs $3,000 for AC and $8,000 for DC. The network fabric module costs $18,000 for the 7710 and $27,000 for the 7718. And finally, the 48-port 10Gb/sec line card will run you $40,000.
Two other I/O modules will ship in the fourth quarter. The 24-port 40Gb/sec F3 I/O module has 960Gb/sec of aggregate throughput and can process 1.44 billion packets per second of Layer 2 and 3 forwarding.
With a fully loaded Nexus 7718 chassis, these cards can pack 30.7Tb/sec of switching throughput and chew through 23 billion packets per second. The 7718 chassis has been designed to deliver as much as 83Tb/sec of switching capacity, and thus is future-proof and can easily take 200Gb/sec line cards should they come to fruition.
The 12-port F3 100G line card for the Nexus 7700 switches
The F3 100G line card aggregates the capacity on those ASICs to give a dozen big fat pipes with 100Gb/sec of bandwidth each, and also revs those ASICs up a bit so they can process 1.8 billion packets per second of Layer 2/3 forwarding and deliver 1.2Tb/sec of throughput. Cisco says that a Nexus 7718 loaded up with these grunting line cards has 28.8 billion packets per second of forwarding and 38.4Tb/sec of aggregate switching.
Both the 40Gb/sec and 100Gb/sec line cards support all the modern and necessary protocols for cloudy data centers, including Virtual Extensible LAN (VXLAN) and Locator/ID Separation Protocol (LISP). Customers can also use Overlay Transport Virtualization (OTV), Multiprotocol Label Switching (MPLS), and Virtual Private LAN Service (VPLS) to link their data centers to each other using Nexus 7700 switches. Data center linking is one big use of the Nexus 7000 series modular switches, and so is end-of-row aggregation.
Cisco expects to have these two F3 line cards for the Nexus 7700 switches available in the fourth quarter. Pricing was not available because it has not been set yet. A 12-port 40Gb/sec F3 I/O module that will slide into the existing Nexus 7000 series modular switches will also come out in the fourth quarter and Cisco will roll out a six-port 100Gb/sec F3 module early next year.
In conjunction with the new switches, Cisco's top brass are articulating a broader vision of network management that goes beyond software-defined networking and into what the company is calling Application Centric Infrastructure. (Heaven help us, another nebulous term to try to remember.)
The basic gist is that as Nexus switch fabrics grow and take on multi-tenancy at scale, you need management tools that think at the application level rather than down in the gobbledygook of network devices, because application performance, in the end, is what matters.
The first element of this Application Centric Infrastructure is a bit of software called Dynamic Fabric Automation, which scales up to 10,000 tenants or networks, provides mobility for virtual and physical networks, and uses a distributed control plane to deliver that scalability as well as network resiliency.
This software, which is coming out later this year, will be able to span various kinds of physical and virtual switches and server virtualization hypervisors and has APIs so it can hook into existing system and network management tools as well as cloud controllers.
The idea is to provide a single policy framework for automating the setting up of networks, security, and application layers in the network, much as Cisco has done inside of its Unified Computing System modular systems. ®