Force10 cranks rack and core switches to 40GbE
There's no such thing as too much bandwidth
Force10 Networks said back in the spring that it wanted to be in the pole position as the Ethernet switch racket ratcheted up to 40 gigabits per second. On Tuesday, the company makes good on that promise.
The new top-of-rack and core Ethernet switches buzzing along at 40GbE speeds should make bandwidth-constrained and latency-crazed customers happy, while also warming the hearts of a Wall Street that wants to make some dough on Force10 when it finally pulls the trigger and goes public. Back in March, the company announced it would be doing an initial public offering.
Force10 kicked out two 40GbE switches on Tuesday, one top-of-racker and the other a line card for its core switch. Like everyone else in the switch business that doesn't make its own ASICs, Force10 is being cagey about who its chip suppliers are. Fulcrum Microsystems, Broadcom, and Cisco Systems make their own Ethernet chips, while Mellanox Technologies and QLogic make their own InfiniBand ASICs. Mellanox made the news last week when server and switch wannabe Oracle took a 10.2 per cent stake in the company, which supplies the database and application software giant with the chips used in its Sun-branded InfiniBand switches.
The S-Series S4810 is the top-of-rack switch running at 40GbE speeds. The 1U rack-mounted device has 48 10GbE SFP+ ports that can step down to Gigabit Ethernet if your servers need that, plus four QSFP+ ports running at 40GbE speeds.
The switch has 1.28Tb/sec of full-duplex, non-blocking bandwidth, and features a cut-through switching architecture that radically cuts down on the port-to-port hop latency. The S-Series S55 low-latency switch that Force10 announced in the summer was rated at 5 microseconds, and this puppy comes in at a very low 650 nanoseconds.
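Those numbers hang together: 48 ports at 10Gb/sec plus four at 40Gb/sec comes to 640Gb/sec in each direction, and counting both directions of full duplex gives the quoted figure. A quick Python sketch of the arithmetic:

```python
# Sanity-checking the quoted 1.28Tb/sec figure from the S4810 port counts.
sfp_ports, sfp_gbps = 48, 10     # 48 SFP+ ports at 10GbE
qsfp_ports, qsfp_gbps = 4, 40    # four QSFP+ ports at 40GbE

one_way_gbps = sfp_ports * sfp_gbps + qsfp_ports * qsfp_gbps
full_duplex_gbps = 2 * one_way_gbps  # vendors count both directions

print(one_way_gbps)      # 640
print(full_duplex_gbps)  # 1280, i.e. 1.28Tb/sec
```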
The Force10 Networks S4810 10/40 Gigabit Ethernet switch
Cut-through switching reads the header data on the incoming packet and starts setting up the transfer to the destination before the remaining data in the packet arrives. That means the switch can start pushing the packet out the destination port while the tail end is still streaming in, rather than buffering the whole thing first and only then ripping it back out.
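A toy latency model shows why that matters. The figures below are illustrative assumptions, not Force10's measured internals:

```python
# Toy model -- hypothetical numbers -- contrasting store-and-forward
# switching with cut-through switching.

def store_and_forward_ns(packet_bits: int, link_gbps: float, fabric_ns: float) -> float:
    """The switch must buffer the entire packet before it can transmit."""
    serialization_ns = packet_bits / link_gbps  # bits / (Gb/s) == nanoseconds
    return serialization_ns + fabric_ns

def cut_through_ns(header_bits: int, link_gbps: float, fabric_ns: float) -> float:
    """The switch starts transmitting once the header has been read."""
    return header_bits / link_gbps + fabric_ns

# A 1,500-byte frame on a 10Gb/s link, 64-byte header, 300ns fabric delay
pkt_bits, hdr_bits = 1500 * 8, 64 * 8
print(store_and_forward_ns(pkt_bits, 10, 300))  # 1500.0 ns
print(cut_through_ns(hdr_bits, 10, 300))        # roughly 351 ns
```

The gap widens as packets get bigger, since cut-through latency depends only on the header size, not the frame size.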
"This is going to make the HPC guys think about how long they can continue to support InfiniBand," says Ken Won, director of product marketing at Force10.
Force10 supplies the switches used to lash together the server nodes in the 1.1 petaflops "Roadrunner" hybrid Opteron-Cell blade server at Los Alamos National Laboratory and the 825.5 teraflops "Jugene" BlueGene/P super at Forschungszentrum Juelich (FZJ) in Germany. IBM and Dell are the biggest resellers of Force10's switches among the top-tier server makers.
The S4810 offers line-rate speeds across all ports, including the 40GbE uplink ports, and has redundant power supplies. The S4810 supports Force10's VirtualStack stacking, allowing up to a dozen of these switches to be lashed together and managed as a single domain.
The S4810 will be the first switch from Force10 that supports data-center bridging, which provides the lossless characteristics of Fibre Channel storage and InfiniBand protocols to Ethernet. The switch has front-to-back or back-to-front airflow, which lets you better manage your hot and cold aisles in the data center and not have to turn your switches around.
The Force10 40 GE line card for ExaScale core switches.
The S4810 runs at 220 watts, a bit lower than the 330 watts IBM is getting with its RackSwitch G8264 10/40 GE switch, which Big Blue picked up through its acquisition of Blade Network Technologies, completed last week. (You can read about the G8264 switch, which was launched two weeks ago, here.) It looks like Force10 and Blade Network are using the same ASIC, but with different software and packaging.
The S4810 will be available this month. Pricing was not announced.
The other new switch coming out from Force10 is a line card for the ExaScale core switches, optimized to take the high bandwidth coming off the S4810 top-of-rack switch without choking on the bits. The line card has six ports — two CFP and four QSFP+ — but the card doesn't have enough oomph to run all six at 40GbE speeds.
Any two of the four QSFP+ ports can run at that 40GbE line rate, but light up three or four of them and you are oversubscribed and won't get the same latency. The half-rack ExaScale chassis supports 56 ports running in oversubscribed mode or 28 ports running full-out at 40GbE speeds.
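Worked out in a quick Python sketch, using only the figures quoted above, both the card and the chassis come out at a 2:1 oversubscription ratio:

```python
# Oversubscription ratios implied by the ExaScale 40GbE line card figures.
def oversubscription(active_ports: int, line_rate_ports: int) -> float:
    """How far demand exceeds capacity when every port is lit."""
    return active_ports / line_rate_ports

# Per card: four QSFP+ ports, but only two can run flat out.
print(oversubscription(4, 2))    # 2.0
# Per half-rack chassis: 56 ports oversubscribed vs 28 at line rate.
print(oversubscription(56, 28))  # 2.0
```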
When you marry multiple S4810 rack switches to one or more ExaScale core switches with the 40 GE modules, you can create a logically flat Layer 2 switch fabric with very low latency. Like Blade Network, Force10 is also selling optical breakout cables that split a 40GbE link into four 10GbE cables, which can in turn plug into 10GbE ports on core switches. This lets you invest in 40GbE uplinks at the rack today and stage the core switch upgrade for later.
The big question is when 10GbE will become standard on the servers of the world, and therefore when 40GbE uplinks and 40GbE core switches will become necessary. Force10 thinks the transition is coming, and faster than the glacial move to Gigabit Ethernet a decade ago. Dell'Oro Group, which tracks the switch market, expects the number of 10 Gigabit Ethernet ports on top-of-rack switches to increase from 325,000 in 2009 to 5.5 million in 2012.
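For what it's worth, that forecast implies a torrid compound growth rate, as a quick sketch shows:

```python
# The annual growth rate implied by the Dell'Oro forecast quoted above.
start_ports, end_ports, years = 325_000, 5_500_000, 3  # 2009 to 2012
cagr = (end_ports / start_ports) ** (1 / years) - 1
print(f"{cagr:.0%}")  # roughly 157 per cent a year
```

That is roughly a seventeen-fold jump in three years, which explains the land-rush positioning among the switch makers.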
"Next year, we expect to see 10 Gigabit ports on the server motherboards, and that will drive 40 Gigabit switch volumes next year," says Won. "We see a lot of people going from Gigabit in the server and 10 Gigabit in the rack to 10 Gigabit in the server and 40 Gigabit in the rack. And we will be riding that volume price curve down."
Force10 did not announce pricing for the ExaScale 40GbE line card, which will start shipping some time in the first half of 2011. ®