As for RAID configuration, Figure 8 shows the setup screen where I have selected RAID level 5, which reduces usable capacity by about 25% (one drive's worth of parity across the four drives), but provides the ability to recover from any single disk failure.
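The 25% figure follows directly from how RAID 5 spreads parity. A minimal sketch of the math, assuming the four-bay configuration reviewed here (the function name is my own, not anything in the SmartStor's firmware):

```python
# Illustrative only: RAID 5 devotes one drive's worth of space to
# parity, so with n equal-sized drives the usable fraction is (n-1)/n.
def raid5_usable_fraction(num_drives: int) -> float:
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) / num_drives

# Four drives: 3/4 of raw capacity is usable, i.e. ~25% overhead.
print(raid5_usable_fraction(4))  # 0.75
```

With only three drives the overhead would rise to about 33%, which is why larger arrays make RAID 5 more space-efficient.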
Figure 8: RAID setup
The initial RAID format took around 20 minutes. Note the warning in the lower left of the screen, which erroneously indicates that my browser (Safari) is not "DHTML capable".
Promise has been making RAID controllers for quite a while, and it shows: there are a couple of options here that I don't normally see in a consumer-level NAS device. Figure 9 shows the menu where the RAID level can be selected when creating an array.
Figure 9: RAID creation
The first unusual option is the ability to designate a drive as unused, or a "spare". The other unusual option is RAID level 10, which is a combination of RAID levels 0 and 1. This nested RAID level provides the performance benefit of RAID level 0, striping, with the fault-tolerance of RAID level 1, mirroring.
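The nesting is easy to picture: drives are first grouped into RAID 1 mirror pairs, and data is then striped RAID 0-style across those pairs. A hedged sketch of the layout (my own illustration, not Promise's implementation):

```python
# Illustrative RAID 10 grouping: mirror pairs first, then stripe
# across the pairs. Each pair holds identical data, so the array
# survives one drive failure per pair.
def raid10_layout(drives):
    if len(drives) < 4 or len(drives) % 2:
        raise ValueError("RAID 10 needs an even number of drives (>= 4)")
    return [tuple(drives[i:i + 2]) for i in range(0, len(drives), 2)]

pairs = raid10_layout(["disk0", "disk1", "disk2", "disk3"])
print(pairs)  # [('disk0', 'disk1'), ('disk2', 'disk3')]
```

Note the trade-off against RAID 5: with four drives, RAID 10 gives you only half the raw capacity as usable space, versus three-quarters for RAID 5.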
Speaking of fault-tolerance, I wanted to see how the SmartStor performed when a disk failed. To simulate a failure, I yanked a drive out of my RAID 5 array while the device was running.
The first thing I noticed was the LED for the drive going dark. Shortly thereafter, I heard two short beeps that repeated every 15-20 seconds. I appreciated the notification, but if I had to run in this mode for a while, the beeping would get quite annoying. Fortunately, there is a menu option to completely turn off the beeper. Checking out the status screen (Figure 10) showed that my RAID array was in a "critical" state.
Figure 10: RAID status
A quick test showed that although the array was in a critical state, all my data was still available as normal.
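That data stays readable with a drive missing is the whole point of RAID 5's parity: any lost block is just the XOR of the surviving blocks in its stripe. A toy demonstration of the idea (again my own illustration, nothing to do with the SmartStor's actual code):

```python
from functools import reduce

# Toy RAID 5 parity: the parity block is the byte-wise XOR of the
# data blocks in a stripe.
def parity(blocks):
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # data blocks in one stripe
p = parity(data)                    # parity block, stored on a 4th drive

# "Yank" drive 1: its block is rebuilt from the survivors plus parity.
recovered = parity([data[0], data[2], p])
print(recovered == data[1])  # True
```

This is also why a rebuild takes so long: reconstructing the replaced drive means reading every stripe on every surviving drive and XORing the results back together.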
To check out recovery, I then hot-plugged the drive back in. When I did this, the beeping stopped, the LED went red, and the recovery process automatically started. A check of the status screen (Figure 11) now showed that the array was rebuilding, with a little progress bar telling me how far along the process was.
Figure 11: RAID Rebuild
But this rebuild took a long time. I checked on it on and off for a while, but when I had to leave the house after about 14 hours, it was only 73% complete. Fortunately, the rebuild was entirely automatic and ran in the background. My data remained available throughout the process, though I would expect performance to be degraded while it ran.