It is the nature of ranking tools that use raw performance data that they can create a false sense of performance difference. Put another way, is there actually a real-world difference between a product with 109.7 MB/s write throughput and one with 109.6 MB/s? Of course not, but the Ranker results would lead you to think so.
So we have made two changes to Ranker logic. First, for each benchmark that goes into a ranking, we have added the ability to cap the benchmark value before it is evaluated. Second, we can now set a ranking tolerance for each benchmark, so that a range of values is treated as equivalent when benchmark values are compared during ranking.
We have used the limit feature to set a 125 MB/s cap on [NASPT] File Copy To NAS, [NASPT] RAID 1 File Copy To NAS, [NASPT] RAID 5 File Copy To NAS and [NASPT] RAID 10 File Copy To NAS benchmark results. This takes write cache effects out of the ranking process, since anything above the transfer limit of a Gigabit Ethernet connection reflects cache effects rather than sustained throughput.
We have also set the ranking tolerance for all benchmarks used for ranking at 5%. This means that benchmark results must differ by more than 5% to be ranked differently. We chose 5% because it's a reasonable reflection of the margin of error of our test process. And quite frankly, is a product with 100 MB/s write throughput really better than one with 95 MB/s?
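To make the two mechanisms concrete, here is a minimal sketch of how a cap and a tolerance could work together. This is purely illustrative; the Ranker's actual implementation is not published, and the function names, the choice of the larger value as the tolerance base, and the three-way comparison convention are all our assumptions.

```python
def capped(value, cap=None):
    """Apply an optional cap before ranking.

    Hypothetical helper: e.g. a 125 MB/s cap on File Copy To NAS
    results, so write-cache-inflated numbers don't win rankings.
    """
    return value if cap is None else min(value, cap)


def compare(a, b, tolerance=0.05):
    """Three-way comparison with a ranking tolerance.

    Returns 0 (treated as a tie) when the two results are within
    `tolerance` of each other; otherwise -1 or +1. Using the larger
    value as the tolerance base is an assumption for illustration.
    """
    if abs(a - b) <= tolerance * max(a, b):
        return 0
    return -1 if a < b else 1


# 109.7 vs 109.6 MB/s: well within 5%, so ranked as a tie
print(compare(109.7, 109.6))  # 0

# A cache-inflated 140 MB/s result is capped to 125 before comparison,
# but still beats 118 MB/s because the gap exceeds the 5% tolerance
print(compare(capped(140.0, cap=125.0), capped(118.0, cap=125.0)))  # 1
```

With this scheme, products whose capped results fall inside the tolerance band would share the same rank rather than being ordered by a meaningless decimal.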
If a benchmark has a limit applied for ranking, you'll see the capped value in the Ranker Performance Summary. You won't see any difference there for benchmarks with tolerances applied, since the measured values themselves are not changed by the ranking calculation.