RAID Fail Test
I test all RAID-capable NASes to see how they perform when a drive "dies". In this case, I had already set the Box up for RAID 1, so I should be protected against a single disk failure. To see how it worked, I yanked out a drive while everything was up and running.
The first thing I noticed was... nothing. I heard no warning beeps. I saw no LED change. Since RAID 1 transparently handles disk failure, all of my data was available just as before. But when I logged in as an administrator, I saw no warnings in the internal notes display, either.
If the Box had a logging feature, which it doesn't, it might have shown me something. And since my email setup failed, I received no email alert. Even when I dug down into the disk status menus, everything was shown as normal (Figure 12).
Figure 12: Disk Status
I almost went back to the Box to see if I really did pull the drive! But when I finally hit the "Update" button shown in Figure 12, only then did I get an indication that something was wrong. It took a while, but eventually I got an error indicating that the display couldn't be updated. It didn't tell me that a disk had failed; it only said it couldn't update the display. Very odd. So the only option I had was to reboot.
When the Box came back up, once again I had no indication that anything was wrong; I just had one fewer disk. To see if I could recover, I shut down, replaced the drive and booted back up. This time, the "Status" display from Figure 12 indicated that a recovery was underway, with a percentage-done display. At least that was automatic.
I let the recovery run overnight, and in the morning I logged in to find the "percentage-done" display still showing 2%; I thought the rebuild was stuck. But then I found that the user is responsible for updating the display by hitting the "Update" button. Just reloading the page or re-logging in doesn't do it. The display is static until you manually hit "Update". Odd.
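If the Box runs Linux md software RAID under the hood (an assumption on my part; Mvix doesn't document the internals), the rebuild progress the web UI shows is typically available in `/proc/mdstat`, no "Update" button required. A minimal sketch that pulls the percentage out of that text, using made-up device names and sizes:

```python
import re

# Sample /proc/mdstat output during a RAID 1 rebuild. The format follows
# the Linux md driver; the device names and block counts are invented.
SAMPLE_MDSTAT = """\
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      78123456 blocks [2/2] [UU]
      [==>..................]  recovery = 12.5% (9765432/78123456) finish=93.4min speed=12200K/sec

unused devices: <none>
"""

def rebuild_progress(mdstat_text):
    """Return the recovery percentage as a float, or None if no rebuild is running."""
    m = re.search(r"recovery\s*=\s*([\d.]+)%", mdstat_text)
    return float(m.group(1)) if m else None

print(rebuild_progress(SAMPLE_MDSTAT))  # prints 12.5
```

On a real box you would read the live file with `open("/proc/mdstat").read()` instead of the sample string.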
The recovery is also quite slow. With my little 80 GB drives, after running all night, I was only 57% complete. Who knows how long it would take with a pair of 1 TB drives? Mvix has some work to do with the whole RAID-recovery process.
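A back-of-the-envelope extrapolation gives a rough idea, assuming the rebuild rate stays constant. "Overnight" is taken as 8 hours here, which is an assumption; the review only says the rebuild ran overnight to reach 57%:

```python
# Rebuild-rate extrapolation from the observed 80 GB / 57% overnight result.
# hours_elapsed = 8 is an assumed value for "overnight".
drive_gb = 80
done_fraction = 0.57
hours_elapsed = 8.0

rate_gb_per_hour = drive_gb * done_fraction / hours_elapsed  # ~5.7 GB/h
full_rebuild_hours = drive_gb / rate_gb_per_hour             # ~14 h for 80 GB
tb_rebuild_hours = 1000 / rate_gb_per_hour                   # ~175 h for 1 TB

print(f"{rate_gb_per_hour:.1f} GB/h -> 1 TB rebuild ≈ {tb_rebuild_hours/24:.1f} days")
```

At that pace a 1 TB mirror would take on the order of a week to rebuild, which is why the slow recovery matters more than it first appears.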
We used IOzone to test the file system performance on the Box (the full testing setup and methodology are described on this page). I tested with 188.8.131.52 firmware in RAID 0 ("Extension") and RAID 1, with 100 Mbps and 1000 Mbps LAN connections.
Figure 13 shows a comparison of the Box's write benchmarks. It's nice to see that performance is higher when using a Gigabit Ethernet connection and that you're not giving up any speed by using RAID 1. But average write throughput for the large file sizes from 32 MB to 1 GB is only 7.4 MB/s, which ranks the Box at the very bottom of the 1000 Mbps Write charts when you filter for dual-drive products.
Figure 13: Write benchmark comparison
Figure 14 shows the read benchmarks compared, which track the write results closely. Average read throughput for the large file sizes from 32 MB to 1 GB is 8.2 MB/s, which, while a bit better than write, still ranks the Box at the very bottom of the 1000 Mbps Read charts when you filter for dual-drive products.
Figure 14: Read benchmark comparison
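For readers unfamiliar with how the chart averages are derived: they are a simple mean of the IOzone results over the 32 MB to 1 GB file sizes. The per-size numbers below are hypothetical placeholders, not the Box's actual IOzone output:

```python
# Averaging large-file throughput the way the NAS Charts do: a simple mean
# over the 32 MB - 1 GB file sizes. These per-size values are illustrative
# placeholders only.
write_mbps = {
    "32MB": 7.9, "64MB": 7.6, "128MB": 7.4,
    "256MB": 7.3, "512MB": 7.2, "1GB": 7.0,
}

avg = sum(write_mbps.values()) / len(write_mbps)
print(f"Average large-file write throughput: {avg:.1f} MB/s")  # prints 7.4
```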
You can choose your own products to compare using the NAS Charts, but I tried out a couple of other lower-end devices to give you a head start. I chose the D-Link DNS-323 and the Synology DS207. Note that both of these other NASes support jumbo frames, which the MvixBOX doesn't. So the following charts are without jumbo frames.
In the write test (Figure 15), you can see that the MvixBOX gets badly beaten, with the D-Link DNS-323 more than twice as fast in most cases.
Figure 15: Comparative write test - 1000 Mbps LAN
For the read comparison shown in Figure 16, the relative rankings remain the same, but performance is much more closely matched.