But since the rtt_fair Flent test uses unlimited bitrate, I gave that a shot to see what would happen. All other traffic parameters stayed the same.
Average OFDMA on/off latency for downlink traffic isn't very promising. While latency is lower than with 50 Mbps per STA, the change with OFDMA enabled is in the wrong direction.
Pal6 AP - Average Latency CDF comparison - unlimited throughput - downlink
Uplink has higher latency in both cases, but it at least improves with OFDMA enabled.
Pal6 AP - Average Latency CDF comparison - unlimited throughput - uplink
Aggregate downlink throughput also moves in the wrong direction when OFDMA is enabled.
Pal6 AP - Average aggregate throughput comparison - unlimited throughput - downlink
As with limited bitrate, uplink throughput shows essentially no gain from OFDMA. But the run-to-run variation is much greater.
Pal6 AP - Average aggregate throughput comparison - unlimited throughput - uplink
Finally, downlink channel congestion again moves in the wrong direction with OFDMA enabled, more than doubling.
Pal6 AP - Channel congestion comparison - unlimited throughput - downlink
At least uplink channel congestion doesn't increase as much as downlink does with OFDMA enabled.
Pal6 AP - Channel congestion comparison - unlimited throughput - uplink
I also ran tests setting the per STA bitrate to 5 Mbps and 100 Mbps and saw some benchmarks improve and others degrade with OFDMA enabled. So it appears that this benchmark is not as robust as I'd hoped.
Among Flent's benchmarks are tests that use different traffic priorities, which is a good idea. Theoretically, traffic marked with voice and video should have lower latency than lower priority "best effort" traffic. But will OFDMA help or hurt traffic prioritization?
To find out, I assigned each of the four traffic pairs a different traffic priority. One STA got Best Effort CS0 (the default), one got "Excellent Effort" CS3, the third was tagged CS5 Video and the fourth got CS6 Voice.
Another thing Flent RRUL tests generally do not do is limit throughput. I already tried this earlier by setting unlimited bitrate. But setting buffer length also has the effect of capping throughput. So for the next test results, bitrate (iperf3 -b) was unlimited and buffer length (iperf3 -l) was unset.
The traffic priority key in the plots below is:
- Ping Pair 1: CS0 (Best Effort)
- Ping Pair 2: CS3 (Excellent Effort)
- Ping Pair 3: CS5 (Video)
- Ping Pair 4: CS6 (Voice)
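As a rough sketch of how the per-STA marking above maps to iperf3 invocations: iperf3's --tos flag takes the whole IP ToS byte, which is the DSCP value shifted left two bits. The server address, test duration, and exact flag ordering below are illustrative placeholders, not the actual test commands.

```python
# Hypothetical sketch: building one marked iperf3 command per STA.
# DSCP class selector values: CS0=0, CS3=24, CS5=40, CS6=48.
# The ToS byte passed to --tos is DSCP << 2.
CS = {
    "Ping Pair 1 (CS0 Best Effort)": 0,
    "Ping Pair 2 (CS3 Excellent Effort)": 24,
    "Ping Pair 3 (CS5 Video)": 40,
    "Ping Pair 4 (CS6 Voice)": 48,
}

SERVER = "192.168.1.10"  # placeholder server address

for name, dscp in CS.items():
    tos = dscp << 2
    # -b 0 = unlimited bitrate; -l deliberately unset (no buffer-length cap)
    print(f"{name}: iperf3 -c {SERVER} -b 0 --tos {tos} -t 60")
```

CS6 Voice, for example, comes out as `--tos 192` (DSCP 48 << 2).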
With the channel now fully loaded (~90-95% congestion), latencies increase. But the OFDMA off downlink plot shows a spread in latencies across the different traffic priorities. Oddly, the lowest priority Best Effort STA has the lowest latency and the highest priority Voice STA has the highest.
Pal6 AP - Latency per STA - downlink - OFDMA off

Enabling OFDMA doesn't produce a dramatic latency reduction. Instead, all STAs move in the wrong direction and the Video STA (Ping Pair 3) overtakes Voice to win the prize for worst latency.
Pal6 AP - Latency per STA - downlink - OFDMA on
OFDMA Off uplink results are even more interesting. The two lower priority CS0 and CS3 STAs clearly have much lower latencies and lower spread in latencies (the steeper the curve, the lower the value spread) than the "higher" priority CS5 and CS6 STAs.
Pal6 AP - Latency per STA - uplink - OFDMA off

Enabling OFDMA slightly reduces latency for the lower priority STAs, but increases latency for Voice and Video.
Pal6 AP - Latency per STA - uplink - OFDMA on
For reference, here's a downlink OFDMA off plot with a 50 Mbps bitrate and 256 byte buffer length. Channel congestion in this case averaged around 25%. Even with the horizontal scale expanded, you'd be hard-pressed to see a significant difference in latencies.
Pal6 AP - Latency per STA - downlink - OFDMA off - 50 Mbps bitrate - 256 byte buffer length
I think you can see why I'm well on the way to putting the latest and greatest hype that the Wi-Fi industry has foisted upon us into the same bin as MU-MIMO. Like MU-MIMO, which is also part of 802.11ax and may be having some effect on these results, OFDMA is great in theory, but is proving extremely difficult to implement in practice.
I can't say I'm surprised; airtime management is a hellaciously difficult task. And OFDMA only makes it more difficult. For each packet that hits an AP looking to be sent to a waiting STA, the AP must choose among:
- putting it right into the transmit queue
- holding it for aggregation with other packets for that STA
- holding it, with or without aggregation, to be sent with other STA traffic via MU-MIMO
- holding it, with or without aggregation, to be sent with other STA traffic via OFDMA
These decisions must take into account client type mix (a/b/g/n/ac/ax), signal level, physical position, traffic type mix (voice/video/browsing/file transfer/IoT) and load. Like MU-MIMO, OFDMA is likely to take years to get to the point where it actually adds value.
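To make the branching concrete, here is a very rough sketch of the per-packet choice described above. This is not any vendor's actual scheduler; the class names, thresholds, and trigger conditions are all invented for illustration.

```python
# Illustrative sketch (not a real AP scheduler) of the per-packet decision:
# transmit now, hold for aggregation, or hold for a MU-MIMO/OFDMA group.
from dataclasses import dataclass, field

@dataclass
class Packet:
    sta: str   # destination station
    size: int  # bytes

@dataclass
class APQueue:
    tx_queue: list = field(default_factory=list)      # send immediately
    aggregate: dict = field(default_factory=dict)     # per-STA aggregation buffers
    mu_candidates: list = field(default_factory=list) # held for MU-MIMO/OFDMA grouping

    def enqueue(self, pkt: Packet, peers_waiting: int, agg_depth: int):
        if peers_waiting >= 2:
            # Other STAs also have traffic pending: consider a grouped
            # MU-MIMO or OFDMA transmission instead of sending solo.
            self.mu_candidates.append(pkt)
        elif agg_depth > 0:
            # More frames for this STA are expected soon: hold for aggregation.
            self.aggregate.setdefault(pkt.sta, []).append(pkt)
        else:
            # Nothing to gain by waiting: straight into the transmit queue.
            self.tx_queue.append(pkt)

q = APQueue()
q.enqueue(Packet("STA1", 1500), peers_waiting=0, agg_depth=0)
q.enqueue(Packet("STA2", 1500), peers_waiting=3, agg_depth=0)
print(len(q.tx_queue), len(q.mu_candidates))  # prints: 1 1
```

The hard part, of course, is that the real inputs (how many peers are waiting, whether more frames are coming, client capabilities) are noisy and constantly changing, which is why getting this right is taking so long.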
In the meantime, I'm still trying to figure out what to do about OFDMA testing and will present results in Part 2. I've run some version of the benchmarks described here on a group of AX routers that I know have OFDMA enabled. And so far, depending on which benchmark settings are used, I've seen airtime congestion and latency get worse with OFDMA enabled. If anyone out there has any bright ideas, please share!