
Introduction

Updated 5 April 2018: Zyxel retested

Test floorplan

In our first round of testing, each access point was thoroughly disassembled, dissected, and put through its paces in SmallNetBuilder's octoScope test chamber. My approach is different. I set each access point up in a real physical environment designed to resemble a real-world deployment, and hit it with workloads similarly patterned after real-world traffic. My goal is to expose the strengths and weaknesses of each access point under more realistic conditions.

For performance tests, one AP is set up in the location shown on the floorplan above. Then four devices (STAs A, B, C, and D) are sited as shown above, with distances and obstructions as listed.

Station D is intended to have the highest throughput, sitting 10.5' away with no significant obstructions. Stations B and C are progressively a little farther away, with a little less clean a shot to the access point; and then there's Station A. Station A sits 19', four interior walls, and some miscellaneous cabinetry away from the AP. This is farther than you should deliberately plan to support with any access point in a multi-AP deployment. But despite your best planning, you always end up with one like this, so we test it.

Samsung Chromebook on its test stand at Station D

The stations themselves are four identical Samsung Chromebooks running GalliumOS, each equipped with a built-in Intel Dual Band Wireless-AC 7265 and a Linksys WUSB-6300 external wireless adapter. The WUSB-6300 is my reference adapter, used for most performance testing, both because there's less variation among the four external NICs than among the Chromebooks' internal NICs, and because the WUSB-6300s have exceptional TX performance.

The current Linux driver for the WUSB-6300 does not support 802.11k or 802.11v, though, so we shift back to the onboard Intel 7265 for testing roaming and band steering behavior. I have not been able to find definitive information from Intel regarding the technologies supported in their Linux driver. However, Microsoft says Windows 10 supports 802.11k, v, and r, and Intel says the Wireless-AC 7265 (Rev. D) also supports all three roaming assistance standards. Empirically, it is very clear that AP-assisted roaming works on the AC 7265 under Linux, given the dramatic differences in behavior when connected to different models of AP.

Performance Tests

Performance testing is done in two phases, both using netburn to load the stations with HTTP traffic. The single-station phase downloads a 1 MB file repeatedly, with no other traffic on the network. This is done on both 2.4 GHz and 5 GHz, with separate test runs using Station D (the nearest and best-sited) and Station A (the farthest and worst-sited). This single-client testing is roughly similar to an iperf3 run, but it's a little more heavily impacted by the TX performance of the station, because it must repeatedly issue HTTP requests.
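For readers who want the flavor of the single-client phase without netburn itself, here is a minimal Python sketch of the same idea: repeatedly fetch a fixed-size file over HTTP and derive throughput from the wall-clock time of each fetch. The server address and file name are hypothetical placeholders, and netburn's real options and output differ.

```python
#!/usr/bin/env python3
"""Sketch of the single-client phase: repeatedly fetch a 1 MB file
over HTTP and report throughput. Illustrative only; the URL is a
placeholder and netburn (the actual tool used) works differently."""
import time
import urllib.request

URL = "http://192.168.1.10/1MB.bin"   # hypothetical test server
DURATION = 300                        # seconds; a five-minute run

results = []
start = time.monotonic()
while time.monotonic() - start < DURATION:
    t0 = time.monotonic()
    with urllib.request.urlopen(URL) as resp:
        data = resp.read()
    elapsed = time.monotonic() - t0
    results.append((len(data) * 8 / 1e6) / elapsed)  # Mbps for this fetch

print(f"fetches completed: {len(results)}")
print(f"mean throughput: {sum(results) / len(results):.1f} Mbps")
```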

What is netburn?

netburn is part of a suite of open source tools, written in Perl, for testing network performance using HTTP-generated traffic. By varying file sizes and the number of concurrent downloads, netburn can simulate real-world network traffic more realistically than the single- or even multi-stream TCP or UDP transfers produced by tools like IxChariot or iperf.

netburn measures not only raw throughput, but also how a Wi-Fi system responds under multi-client load, by timing the interval between when a set of HTTP requests is issued and when the last of those requests completes. This method measures application latency, which in turn reveals weaknesses in how well a wireless router or access point schedules airtime.

If the AP does a poor job of managing airtime, stations will have to wait longer to issue their HTTP requests, which in turn delays request completion. The real-world effect of HTTP delay can vary from slow webpage loading to poor video streaming performance. Raw throughput also plays a part in performance, since a faster AP can get the same 2 MB total of "page" data delivered in less airtime than a slower one.

The second phase is multi-client testing, in which all four stations are active at once. Here the net-hydra controller kicks off a netburn session on each station simultaneously. netburn is configured to fetch sixteen 128 KB files in parallel via HTTP, representing a "page load", and is throttled to 8 Mbps per station overall by sleeping between "page" fetches. Up to 500 ms of random jitter is injected to avoid pathological pattern interactions between stations. The test runs for five minutes, after which application latency is plotted and analyzed.
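To make that "page load" concrete, here is a hedged Python sketch of what one station's workload looks like: sixteen 128 KB files fetched in parallel, the batch latency recorded, and sleeps between batches to hold the station near 8 Mbps overall, plus up to 500 ms of random jitter. The URL and constants are placeholders; this illustrates the test design, not netburn itself.

```python
#!/usr/bin/env python3
"""Sketch of one station's multi-client workload: sixteen parallel
128 KB HTTP fetches per "page", throttled to ~8 Mbps overall, with
up to 500 ms of random jitter between pages. Illustrative only;
netburn (Perl) is the actual tool used."""
import random
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://192.168.1.10/128KB.bin"   # hypothetical test server
PAGE_BITS = 16 * 128 * 1024 * 8         # one "page" = 16 x 128 KB
TARGET_BPS = 8_000_000                  # 8 Mbps per station
DURATION = 300                          # five-minute run

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        resp.read()

latencies = []
start = time.monotonic()
with ThreadPoolExecutor(max_workers=16) as pool:
    while time.monotonic() - start < DURATION:
        t0 = time.monotonic()
        # Issue all sixteen GETs at once; the "page" is done only
        # when the last of the sixteen completes.
        list(pool.map(fetch, [URL] * 16))
        latencies.append(time.monotonic() - t0)
        # Sleep long enough to average ~8 Mbps, plus random jitter
        # so the four stations don't fall into lockstep.
        budget = PAGE_BITS / TARGET_BPS
        time.sleep(max(0.0, budget - latencies[-1]) + random.uniform(0.0, 0.5))

print(f"pages fetched: {len(latencies)}")
```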

I have found that netburn-based multi-client testing is a much more reliable indicator of the actual experience of using a Wi-Fi system than a simpler iperf3 test, or even the single-client maximum-throughput tests done with netburn.

Instead of just looking at an arithmetic mean of the results returned by our five-minute test run, I prefer to look across the entire dataset. (Actually, that's not quite true—I focus on the worst half of the dataset.) Each individual HTTP fetch operation gives us an application latency in milliseconds; for this particular test payload, it's the latency between the time sixteen separate HTTP GETs for 128 KB files go out in parallel, and the time the last of the sixteen is delivered.

Wi-Fi—at least, omnidirectional Wi-Fi—is very much a best-effort service. You cannot expect the sort of well-behaved, tightly-clustered results you'd get from a wired Ethernet network. So we take a large set of datapoints, order them from best (fastest) to worst (slowest), and organize them into percentiles. We can then make a line graph displaying each STA's results as shown below. The 50% mark represents the median—half of the latencies measured for the STA were better than the value at the 50% mark, and half were worse. Similarly, at the 75th percentile, 75% were better and 25% were worse.

Sample set of 5 GHz multi-client test results
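As a sketch of how those percentile figures are computed: sort each station's per-page latencies from fastest to slowest, then read off values at the ranks of interest. The function and variable names below are mine, not netburn's, and the latency list is toy example data.

```python
#!/usr/bin/env python3
"""Sketch of the percentile analysis: sort a station's per-page
latencies from fastest to slowest and read off percentiles.
In a real run, `latencies` would hold five minutes of results."""

def percentile(sorted_vals, pct):
    # Nearest-rank percentile over an already-sorted list.
    idx = min(len(sorted_vals) - 1, int(len(sorted_vals) * pct / 100))
    return sorted_vals[idx]

# Toy example data, in seconds, standing in for a real test run.
latencies = sorted([0.41, 0.48, 0.52, 0.55, 0.60, 1.30, 2.10])
for pct in (50, 75, 90, 95, 99):
    print(f"{pct}th percentile: {percentile(latencies, pct) * 1000:.0f} ms")
```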

It's important to evaluate the performance of a network across the range of results, because that's how humans using it will actually perceive it. Nobody wants a network that's 25% faster on average, if it's also completely broken for 10% of all requests!

In our example above, we see an access point doing a pretty good job of servicing all STAs. The lines are tightly clustered, and they stay below 1500 ms all the way to the 95th percentile mark on the X axis. This means that all four STAs got their "webpage" in less than 1500 ms in 19 out of 20 attempts. 1500 ms (1.5 seconds) is a somewhat arbitrary threshold that we're declaring a "decent result" for this test.

We can see things go off the rails for STA D after the 95th percentile; its line has a "knee" and goes almost purely vertical at the 95th percentile mark. What this means is that at least one page load during the test run did not complete, timing out at 60 seconds. A few (fewer than 5%) page loads going astray in a relatively challenging test like this is not really a death sentence. But it does indicate that every now and again, when the network is really busy, you're going to have users needing to click "refresh" in a browser.

Roaming Tests

Roaming is the process of moving a wireless device's (STA's) established Wi-Fi association from one access point to another without losing the connection. To test roaming, I set up a second AP—in controller, cluster, or managed mode, if available—as shown in the floorplan below. I connect the Chromebook's onboard Intel AC 7265 to the WLAN and start a script that makes it loudly go "BING!" each time the connected BSSID or frequency changes.
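The actual "BING!" script isn't reproduced here, but a minimal stand-in (assuming a Linux station with the iw utility and an interface named wlan0) can poll the connected BSSID and frequency and sound the terminal bell when either changes:

```python
#!/usr/bin/env python3
"""Minimal stand-in for the "BING!" roaming monitor: poll the
connected BSSID and frequency via `iw` and beep when either one
changes. Assumes Linux, the iw utility, and an interface named
wlan0; the author's actual script may differ."""
import re
import subprocess
import sys
import time

IFACE = "wlan0"

def link_state():
    out = subprocess.run(["iw", "dev", IFACE, "link"],
                         capture_output=True, text=True).stdout
    bssid = re.search(r"Connected to ([0-9a-f:]{17})", out)
    freq = re.search(r"freq: (\d+)", out)
    return (bssid.group(1) if bssid else None,
            freq.group(1) if freq else None)

last = link_state()
while True:
    cur = link_state()
    if cur != last:
        sys.stdout.write("\a")              # \a rings the terminal bell
        print(f"BING! {last} -> {cur}")
        last = cur
    time.sleep(0.5)
```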

I then walk a predetermined path through the house. Starting from Station D, 10.5' from the first access point, I walk around the living room island to the farthest wall, about 10' behind Station B. At this point, the second access point is a good twenty feet closer than the first. If a roaming event doesn't occur within ten seconds or so, I'll start doing some iperf3 runs—sometimes access points don't trigger roaming events on idle stations.

Test floorplan - larger view

After the STA does (or doesn't) move from AP1 to AP2, I walk downstairs to the farthest corner of the basement floor. This should ideally trigger a band steering event from 5 GHz to 2.4 GHz on AP2, since this spot is far enough from both access points that 2.4 GHz is substantially more effective. After giving the system time and prompting to switch APs and/or bands at both stops, I walk back in reverse order and watch how quickly the STA switches first back to 5 GHz on AP2 at the top of the stairs, and then to 5 GHz on AP1 as I return to Station D.

This roaming evaluation isn't a perfect science, since I have yet to figure out how to capture an 802.11k neighbor report or 802.11v BSS transition management frame in flight. And not all APs (or STAs) use 802.11k, v, or r to roam anyway. With that said, it's extremely clear that the APs themselves make a difference—roaming is extremely "sticky" and slow to occur with some kits, and so rapid and hyperactive as to be annoying with others.

While I have multiple APs set up, I also evaluate ease of deployment and management, as well as the quality of roaming and band steering and how well the system distributes stations among the available access points.

To the tests!
