My performance testing was quick and dirty, since a true switch performance test would require a Spirent SmartBits test platform or its equivalent, which I don't have access to. I used two computers with 2.4 GHz Pentium 4 and AMD Athlon 3000+ processors, both running Windows XP SP2. Yes, I know that the Windows TCP/IP stack isn't the fastest, but it's what I had handy. (Pummel me in the comments if that's what you enjoy...)
The gigabit adapters in each machine were Intel PRO/1000 MT Desktop PCI cards with 18.104.22.168 drivers downloaded from Intel's support website. I tried both 4K and 9K jumbo frame settings, but settled on 4K, since it seemed to work best with my setup. Short (under 6') CAT 5e cables connected each computer to the switch.
I ran separate transmit and receive tests using IxChariot, with the console and Endpoint 1 running on the P4 machine, so the data directions are with respect to the Intel system. I used the IxChariot high_performance_throughput script, running one-minute tests in each direction over TCP/IP. This script uses a 10,000,000 byte file size and 65,535 byte send and receive buffers, and blasts data as fast as it can.
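For readers who don't have IxChariot, the test above can be approximated with a plain TCP blast. The sketch below is a rough, simplified stand-in (not IxChariot's actual implementation): it pushes a 10,000,000-byte payload through a socket using 65,535-byte buffers and reports the achieved rate in Mbps. The loopback address, port number, and threading setup are my own illustrative assumptions.

```python
# Rough sketch of a one-way TCP throughput test, similar in spirit to
# IxChariot's high_performance_throughput script. All names here are
# illustrative; a real test would run sender and receiver on two machines.
import socket
import threading
import time

PORT = 5001             # arbitrary test port (assumption)
FILE_SIZE = 10_000_000  # bytes per transfer, as in the IxChariot script
BUF_SIZE = 65_535       # send/receive buffer size, as in the script

def receiver(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()                       # signal that we are accepting
    conn, _ = srv.accept()
    received = 0
    while received < FILE_SIZE:       # drain the full payload
        chunk = conn.recv(BUF_SIZE)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    srv.close()

def sender():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
    s.connect(("127.0.0.1", PORT))
    payload = b"\x00" * BUF_SIZE
    sent = 0
    start = time.perf_counter()
    while sent < FILE_SIZE:           # blast data as fast as possible
        n = min(BUF_SIZE, FILE_SIZE - sent)
        s.sendall(payload[:n])
        sent += n
    s.close()
    elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e6   # megabits per second

ready = threading.Event()
t = threading.Thread(target=receiver, args=(ready,))
t.start()
ready.wait()
mbps = sender()
t.join()
print(f"{mbps:.1f} Mbps")
```

Over loopback this measures the local TCP stack rather than a switch, which is exactly why OS and stack overhead matter so much at gigabit speeds.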
Figures 1 and 2 plot the average transmit and receive throughputs reported by IxChariot for the six switches. They also include reference runs made with the same two computers connected via a 100 Mbps switch, and with the same two CAT 5e cables joined by a generic (i.e. not bandwidth or Category rated) RJ45 inline coupler. The Gigabit Straight cable runs were done both without jumbo frames and with 4K jumbo frames.
Figure 1: Transmit Performance Comparison
Figure 1 clearly shows three things:
- There is a significant difference (~20%) between not using jumbo frames and 4K jumbo frames
- There is little difference among switches
- Best-case % throughput loss is significantly less for 100 Mbps than gigabit Ethernet (7% vs. 30%)
Table 2 summarizes the percent throughput loss for each switch and for the no-jumbo-frame case, using the straight-cable, 4K jumbo frame test as the reference. All transmit results are within 5% of each other, which I think can safely be said to be within measurement resolution.
| Product | Transmit (Mbps) | % Loss |
|---|---|---|
| Gigabit Straight cable - no jumbo | 551.7 | 20% |
| Gigabit Straight cable - 4K jumbo | 692.5 | - |
Table 2: Transmit throughput relative loss
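The "% Loss" column is simply each result's shortfall relative to the straight-cable, 4K jumbo reference run. A minimal helper showing the arithmetic, checked against the reference-run numbers in Tables 2 and 3:

```python
# Percent throughput loss relative to a reference run, as used in
# Tables 2 and 3 (straight cable, 4K jumbo frames = reference).
def pct_loss(measured_mbps: float, reference_mbps: float) -> int:
    """Return throughput loss as a whole percentage of the reference."""
    return round((reference_mbps - measured_mbps) / reference_mbps * 100)

# Transmit: no-jumbo run vs. 4K jumbo reference (Table 2)
print(pct_loss(551.7, 692.5))  # -> 20
# Receive: no-jumbo run vs. 4K jumbo reference (Table 3)
print(pct_loss(578.6, 755.8))  # -> 23
```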
Similar conclusions can be reached using Figure 2 and Table 3, which summarize receive test results.
Figure 2: Receive Performance Comparison
Receive loss is a bit of a puzzle, since the percentage throughput reduction for the Belkin and Linksys switches is twice that of the other switches. But I'm reluctant to attribute the loss to anything other than measurement error, since both Vitesse and Broadcom-based products show higher losses.
| Product | Receive (Mbps) | % Loss |
|---|---|---|
| Gigabit Straight cable - no jumbo | 578.6 | 23% |
| Gigabit Straight cable - 4K jumbo | 755.8 | - |
Table 3: Receive throughput relative loss
So there you have it. It's nice to see that manufacturers have gotten the word that jumbo frames count, even if only for bragging rights and to make a potential sale-killer go away. Every one of these switches automatically supports jumbo frames up to 9K, although it would be nice if Linksys and Belkin added this information to product boxes and the information posted on their websites.
Note that when considering the throughput loss data you have seen, your mileage may vary for both the jumbo frame performance improvement and the actual throughput obtained. Pumping bits along at gigabit speeds demands much more from your OS, computer bus architecture (and speed!) and TCP/IP stack. So those of you with speedier machines than mine might see faster speeds and lower throughput loss.
But what won't make a difference is the switch you choose. You can buy any of these products and feel confident that they aren't a throughput bottleneck.