Red Hat and Novell duke it out in real time

There's money in them thar milliseconds

Low mean latency

According to the STAC report, this setup had the lowest mean latency of any RMDS configuration tested to date, with less than 1 millisecond of end-to-end infrastructure latency at a throughput of 700,000 updates per second. (This was on an RMDS benchmark run that was tuned to minimize latency, not one meant to show maximum throughput.)

On the P2PS Producer 50/50 fanout test - the extreme throughput workload in the Reuters test suite, which assumes most users share common data feeds that have to be updated all at once from the backend systems - Enterprise MRG was able to cope with 110,000 inbound updates per second on the RMDS workload and juggle 5.56 million outbound updates per second. With the MTU raised to 9,000 bytes, the six blades were able to handle 140,000 inbound updates per second and push 7.07 million outbound updates per second on the same workload.
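
As a quick sanity check on those figures, the outbound-to-inbound ratio can be recomputed directly, and it comes out almost identical in both configurations. The short Python script below is purely illustrative - it uses only the numbers reported above, and nothing from the STAC harness itself:

```python
# Back-of-the-envelope check of the fanout implied by the reported
# Enterprise MRG numbers. Figures come straight from the article.
results = {
    "standard MTU":  {"inbound": 110_000, "outbound": 5_560_000},
    "9,000-byte MTU": {"inbound": 140_000, "outbound": 7_070_000},
}

for config, r in results.items():
    ratio = r["outbound"] / r["inbound"]
    print(f"{config}: each inbound update fans out to ~{ratio:.1f} outbound updates")

# standard MTU: each inbound update fans out to ~50.5 outbound updates
# 9,000-byte MTU: each inbound update fans out to ~50.5 outbound updates
```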

A few days later, Novell announced its own results on the RMDS tests, also done by STAC. And instead of just kicking out a single number to stack up against the other Linux and Unix platforms run through the RMDS benchmarks, Novell went one better and did four iterations of the test: two using SLERT 10 and two using the regular SLES 10, each configured first with Gigabit Ethernet links and then with InfiniBand links. This allows prospective customers to see the benefit of using SLERT over SLES as well as the effect of faster networking.

Novell ran its RMDS tests at the STAC lab on Hewlett-Packard BladeSystem blade servers, specifically on four BL460c blades, which are two-socket machines using quad-core Xeon X5450 processors running at 3 GHz, each configured with 16 GB of main memory. Each blade had two integrated Gigabit Ethernet ports plus a dual-port 4x InfiniBand mezzanine adapter. The machines were configured first with SLES 10 SP2 and then with SLERT 10 SP2 Update 3, and in each case were tested with Gigabit Ethernet and then InfiniBand interconnects.

With Gigabit Ethernet links between the devices running the RMDS code, SLES 10 was able to process 200,000 updates per second with under 1 millisecond of mean latency, and switching to SLERT 10 pushed that to 500,000 updates per second. Moving from Gigabit Ethernet to InfiniBand, the SLES 10 setup was able to process 600,000 updates per second while keeping mean latency under 1 millisecond for RMDS transactions. Adding SLERT to the InfiniBand setup did not boost the performance of low-latency transactions by much, though, with throughput rising to only 750,000 updates per second.

On the P2PS Producer 50/50 fanout test, however, InfiniBand had a dramatic effect, with SLERT 10 pushing 10.1 million updates per second outbound and SLES 10 hitting 9.34 million updates per second. On the Gigabit Ethernet network, both SLES 10 and SLERT 10 topped out at 1.3 million updates per second. The two takeaways from the much smarter Novell tests are that InfiniBand can make a huge difference in raw throughput on the RMDS workload, and that SLERT plus InfiniBand managed to edge out Red Hat's Enterprise MRG in the latency-optimized portions of the test.
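
To put numbers on those two takeaways, the relative gains can be recomputed from the four configurations. Again, the figures are the ones reported above and the script is illustrative only:

```python
# Relative gains across Novell's four test configurations, using the
# article's figures. All throughput numbers are in updates per second.
latency_bound = {  # peak throughput with mean latency held under 1 ms
    ("SLES",  "GbE"): 200_000,  ("SLERT", "GbE"): 500_000,
    ("SLES",  "IB"):  600_000,  ("SLERT", "IB"):  750_000,
}
fanout_peak = {  # peak outbound rate on the P2PS Producer 50/50 test
    ("SLES",  "GbE"): 1_300_000,  ("SLERT", "GbE"): 1_300_000,
    ("SLES",  "IB"):  9_340_000,  ("SLERT", "IB"):  10_100_000,
}

# SLERT's edge over plain SLES shrinks once InfiniBand removes the
# network bottleneck: 2.5x on Gigabit Ethernet, only 1.25x on InfiniBand.
print(latency_bound[("SLERT", "GbE")] / latency_bound[("SLES", "GbE")])  # 2.5
print(latency_bound[("SLERT", "IB")] / latency_bound[("SLES", "IB")])    # 1.25

# On raw fanout throughput, the interconnect dominates the OS choice:
# swapping Gigabit Ethernet for InfiniBand buys roughly 7-8x either way.
print(fanout_peak[("SLERT", "IB")] / fanout_peak[("SLERT", "GbE")])  # ~7.8
print(fanout_peak[("SLES", "IB")] / fanout_peak[("SLES", "GbE")])    # ~7.2
```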

Incidentally, the Novell-HP setup made use of Voltaire's InfiniBand adapters as well as its messaging accelerator software, which was announced in June of this year and which was designed specifically to improve the performance of InfiniBand networks running multicast applications. Voltaire claims that this software, which runs in conjunction with Linux, can cut latency in such applications by as much as 50 per cent.
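
For readers who have not run into the pattern, the sketch below shows what a multicast application looks like at its plainest: one publisher sends each update once, and the network fans it out to every subscriber that has joined the group. This is generic IP multicast over UDP in Python - not Voltaire's software, and not the RMDS API - and the group address and port are arbitrary:

```python
import socket
import struct

# Arbitrary multicast group and port, for illustration only.
GROUP, PORT = "239.1.1.1", 5007

def publish(update: bytes) -> None:
    """Send one update; the network delivers it to every group member."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(update, (GROUP, PORT))

def subscribe() -> None:
    """Join the group and print updates as they arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _addr = sock.recvfrom(1500)
        print(data)
```

The attraction for market data distribution is that the publisher's send cost stays flat no matter how many consumers join the group - which is exactly the fanout pattern the RMDS tests hammer on, and the layer where Voltaire says its accelerator earns that 50 per cent. ®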
