Life after Cisco: I've got 99 problems but a switch ain't one
Dell vs Supermicro - Trev smacks that switch up
Test lab

Pending network upgrades have reignited an old debate: what exactly makes a switch "good enough"? I have the opportunity to give two switches a truly thorough battering; my lab contains a Dell PowerConnect 8132F and a Supermicro SSE-X24S. Try as I might, I can't find fault with either unit.
Both are 24-port switches, with the Dell being able to add a variety of ports via an expansion module. The Dell can add 2x 40GbE QSFP ports or up to 8x additional 10GbE ports via breakout cables. The one in my lab has 4x 10Gbase-T ports populating the expansion module, allowing me to test 10Gbase-T, SFP+ and optical (via an Intel FTLX8571D3BCV-I3 transceiver) network cards against the switch.
The switches were provided by the respective manufacturers as a way to test the functionality of their related switch families. The network cards were provided by Intel for the same purpose. (I'll be looking at the NICs, and "converged versus non-converged", in future articles.)
I have enjoyed over a decade of success with simple unmanaged switches – or basic "smart" switches – in most deployments. (Sacrilegious, I know.) In those deployments where I have had cause to deploy proper managed switches, I've found units from Juniper, Netgear and D-Link adequate to the basic Layer-3 tasks I've had to accomplish. My next upgrade requires more switch-level features than I've ever deployed before, as well as 10GbE.
For many, only Cisco will do; the debate is merely which device with which firmware. I've been working with these switches for some time now and I'm not convinced that Cisco über alles applies any more.
I need a switch with a few basic features. Redundant, hot swappable PSUs and a dedicated management port are a must. Downtime is bad, and the UPSes can be flaky in some locations. I also need that management port to guard against "Trevor misconfigured the switch and locked himself out" moments. IPv4 routing is a given, but IPv6 routing is a must for any new gear I deploy.
Jumbo frames, QoS, spanning tree, link aggregation, port mirroring, VLANs and multicast snooping that doesn't suck are all also on the list. There are going to be many of these switches in play, so having to change everything with a GUI is out; I need to be able to script changes via a CLI. I also need my switches to have decent security – the ability to use RADIUS at least – and as it happens, both units offer TACACS+ as well.
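To give a flavour of why scriptability matters at this scale, here is a minimal sketch of the approach: render the same change for every switch in a rack, then push the results over SSH (the push step is omitted). The hostnames, VLAN IDs and IOS-like syntax are illustrative assumptions, not the exact dialect of either switch.

```python
# Sketch: generating per-switch CLI snippets for a scripted rollout.
# Switch names, VLANs and the IOS-style syntax below are hypothetical.

def vlan_config(hostname: str, vlans: dict[int, str]) -> str:
    """Render an IOS-style config fragment for one switch."""
    lines = [f"hostname {hostname}"]
    for vid, name in sorted(vlans.items()):
        lines += [f"vlan {vid}", f" name {name}"]
    lines.append("end")
    return "\n".join(lines)

# The same change, generated for a whole rack of switches at once.
vlans = {10: "servers", 20: "storage", 30: "vmotion"}
configs = {sw: vlan_config(sw, vlans) for sw in ("rack1-sw1", "rack1-sw2")}
print(configs["rack1-sw1"])
```

Feeding the rendered fragments to each switch – via SSH, or the Dell's USB deployment option – is then a loop rather than an evening of clicking through a GUI per device.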
The switches meet the above requirements with ease. Indeed, the CLI they offer is so similar to IOS that I was able to take to it right away. I wasn't expecting that. In my experience, multicast has traditionally been a weak point in LAN switches, yet both of these units held up against every type of multicast traffic I could throw at them.
There is a bit of a difference in multicast support: the Dell switch supports MLD snooping and PIM-SSM, which the Supermicro does not. Both switches support IGMP snooping, PIM-SM, PIM-DM and PIM-SMv6; good enough for everything I'll run over them.
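Part of why snooping matters at all comes down to how IPv4 multicast maps onto Ethernet. Per RFC 1112, only the low 23 bits of the group address survive into the destination MAC, so 32 different IP groups share each MAC – a dumb switch flooding on MAC alone can't tell them apart. A short sketch of the mapping:

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet destination MAC.

    Per RFC 1112, the low 23 bits of the group address are copied into
    the OUI 01:00:5e. Five bits are discarded, so 32 IPv4 groups share
    each MAC address.
    """
    ip = ipaddress.IPv4Address(group)
    if not ip.is_multicast:
        raise ValueError(f"{group} is not a multicast address")
    b = ip.packed
    return "01:00:5e:%02x:%02x:%02x" % (b[1] & 0x7F, b[2], b[3])

print(multicast_mac("239.1.2.3"))    # 01:00:5e:01:02:03
print(multicast_mac("224.129.2.3"))  # 01:00:5e:01:02:03 -- same MAC!
```

An IGMP/MLD-snooping switch inspects the IP headers rather than relying on the (ambiguous) frame addressing, which is what lets these units forward each stream only to the ports that asked for it.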
Power consumption on the units is better than expected. The Dell 8100 series can power down inactive ports and reduce power to ports with short cables – all of the SFP+ cables I am using fall into that category. We measured the Dell consuming 107W with no connected systems, 131W when fully populated and 142W under load. Supermicro claims its switch consumes 176W; we measured 113W with no connected systems, 125W when fully populated and 145W under load. We have not managed to fully load either switch yet.
Thermal performance of the switches is good. Both switches advertise the ability to run at high ambient temperature; 40°C for the Supermicro and 50°C for the Dell. I was encouraged by Supermicro to test the thermal performance of their equipment, and so I locked both of the units in a closet with the heater and a thermometer.
It took about 15 minutes to go from 45°C ambient to 55°C ambient. Both switches were fine at this point for a while; however, the Dell shut down after eight minutes at 55°C. I added another heater and drove the temperature all the way up to 65°C; it was at this point that the Supermicro shut down. Even stuffed in the back of a rack in a server room running at 30°C ambient, both of these switches should meet requirements.
Apples and oranges
The Dell has a deeper feature list than the Supermicro, which makes sense given the differences in their market targeting. The Dell is designed to stack with other switches, so it supports more link aggregation groups than a standalone unit like the Supermicro. It also has iSCSI auto-configuration for Dell EqualLogic or Compellent arrays – basically some QoS setups that make life easier if you buy into the wider Dell ecosystem.
The Dell has a "USB rapid deployment" option that I rather like: create a config, put it and the relevant firmware on a USB stick, and the switch picks them up at boot. There's no need to set the device up in a lab first – important, as you would otherwise have to connect to the switch via the console to enable remote access. The Supermicro comes with remote access pre-enabled.
The Dell costs a little more, and for that you get the expanded feature set and that nice modular expansion slot, which allows you to use 40 gig uplinks to connect to other switches in the stack. The Dell switches clearly want to compete on a slightly higher level – they are aiming at Cisco. Indeed, they even support Cisco Discovery Protocol.
Supermicro has a datacentre-friendly reverse airflow model of the SSE-X24S known as the SSE-X24SR that is apparently quite popular. (Switches are often installed at the back of a rack; in these configurations – especially where the hot/cold aisle paradigm is in use – "normal" airflow switches would cause problems.)
They also offer bigger brothers to these switches – the SSE-X3348S and SSE-X3348SR – that are 48-port 10GbE switches with 4x 40Gbit QSFP ports. Dell's PowerConnect 8100 family also comes in 48-port flavours.
The Supermicro has the edge on price. The Dell switch sells for $8,069 with small business support, or $12,880 with enterprise support. The Supermicro lists at $8,472, but is currently doing that "promotional rebate" thing and is rather easily found for under $7,500. Supermicro says support at these prices is aimed at taking a significant chunk out of the enterprise switching market.
I would really like to be able to test the switches more. I've flooded them with as much traffic as I can generate. I've set up iSCSI LUNs and QoS, and run tests with Iometer, SQLIO, vdbench, CrystalDiskMark, HD Tune Pro and PassMark's PerformanceTest while hammering the switches with other traffic. Neither switch gave me the slightest bit of grief.
As my final test, I simulated a few hundred VoIP calls while streaming several multicast IPTV streams and storage vMotioning 50 VMs while they were in the process of running OS-level image-based backups. The switches held up.
I am going to build a ØMQ test lab to test for jitter at the smallest detectable increments, using a real-time kernel setup to do so. It's the last thing I can think of; but there must be more tests worth trying.
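The analysis side of those planned ØMQ runs can be sketched now, even before the lab is built. A reasonable choice is the smoothed interarrival-jitter estimator from RFC 3550 (the RTP spec), fed with send/receive timestamps. The function name and the pyzmq PUSH/PULL timestamp source are my assumptions; only the estimator itself comes from the RFC.

```python
# Sketch of the jitter analysis for the planned ØMQ runs: the RFC 3550
# interarrival-jitter estimator over send/receive timestamps (ns).
# The timestamp capture (a pyzmq PUSH/PULL pair on a real-time kernel)
# is assumed and not shown here.

def rfc3550_jitter(send_ns: list[int], recv_ns: list[int]) -> float:
    """Smoothed interarrival jitter, in nanoseconds."""
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_ns, recv_ns):
        transit = r - s            # one-way transit for this message
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0   # RFC 3550 1/16 smoothing
        prev_transit = transit
    return jitter

# Perfectly even spacing on both ends -> zero jitter.
send = [i * 1_000_000 for i in range(100)]   # messages 1 ms apart
recv = [t + 50_000 for t in send]            # constant 50 µs transit
print(rfc3550_jitter(send, recv))  # 0.0
```

The appeal of the RFC 3550 form is that it only needs transit-time *differences*, so the sender's and receiver's clocks don't have to agree on absolute time – handy when the two ends are separate boxes either side of the switch under test.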
As it stands today, I can only say that I find both these switches excellent. Both Dell and Supermicro are to be commended on making great gear. The switches provided have proven to work beyond the specifications on the tin, leading me to believe that the detractors of all things non-Cisco are simply full of hot air.
This is where you come in, dear reader. If I can reasonably run requested tests with the equipment in my lab, I will include the results in the follow-up article along with the ØMQ runs. What tests should I run against the switches and what benchmarks and tool runs do you want to see? Answers in the comments below. ®