With our volume in hand, we can now create a network share. Go to the Shares tab. Clicking on our volume, NetArray, brings up a dialog (Figure 23) that allows us to create a directory. Name the directory Shares.
Figure 23: Share creation
Clicking the Shares directory will give us a different dialog (Figure 24); this is the doorway to the Edit Shares page, where we can set a NetBIOS name and set up permissions for our directory.
Figure 24: Share edit gateway
Just click Make Share and the edit page (Figure 25) will come up.
Each item is updated separately on this page. First we want to override the default name, which would look like a path specifier (nas.netarray.shares), with just Shares. Once entered, click Change.
We want Public guest access, i.e. we don't have any kind of authentication set up at this point. Check it and click Update. You'll notice that the bottom of the page has changed; you can now configure the permissions for our WoofPack network.
Figure 25: Share edit page
We want to grant read and write permissions to everyone on the WoofPack network, and by extension everyone in our Windows workgroup. Under SMB/CIFS, select the RW radio button. Click Update, which restarts Samba with the new permissions. Our job here is finished.
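Under the hood, Openfiler is just writing Samba configuration for us. A share with public guest access and RW permissions corresponds roughly to an smb.conf stanza like the following sketch; the share name matches our setup, but the path is an assumption, and Openfiler generates and manages this file itself, so there is no need to edit it by hand:

```ini
; Hypothetical fragment approximating what Openfiler writes for our share.
; The path below is an assumption; Openfiler manages the real file.
[Shares]
    path = /mnt/netarray/shares
    guest ok = yes     ; Public guest access, no authentication
    read only = no     ; RW for everyone in the workgroup
    browseable = yes
```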
As you can see we’ve taken the path of least resistance: one volume group, the whole singular array, one volume, one share, and read/write permissions for everyone. This makes sense because in the next installment of this article we are going to tear all this up. We took a low risk approach of creating our SAN, and this NAS is the first step. Creating the NAS allowed us to set up our initial disk array, familiarize ourselves with Openfiler, and create a performance baseline.
Now travel over to Windows, and you should be able to mount Shares as a drive (Figure 26).
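If you prefer the command line to the Map Network Drive wizard, the same mount can be done from a Windows command prompt. This is a hedged example; "nas" stands in for whatever hostname or IP address your Openfiler box answers to:

```
:: Map the guest share to drive Z:, reconnecting at logon.
:: Replace "nas" with your Openfiler host name or IP address.
net use Z: \\nas\Shares /persistent:yes
```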
Figure 26: Share mounted from Windows
Our build, Old Shuck, delivers 14 TB. All that is left is to measure its performance strictly as a NAS and sum up, so we can get to the cool part: the SAN configuration.
Using Intel's NAS Performance Toolkit (NASPT), we are running the same tests that new entrants to SNB's NAS chart go through. This will let us determine whether we hit our performance goals. We are not going for the gold here; we just want a feel for the kind of performance we can expect in our upcoming NAS-to-SAN tests, and a baseline against which to measure the improvement that moving to a SAN offers.
Figure 27: NAS Performance test configuration
This is the first of three tests: NAS performance. The second will be SAN performance, and the last, SAN-as-NAS performance. All NAS performance tests are going to be done on a dual-core 3 GHz Pentium with 3 GB of memory running Windows 7, over a Gigabit Ethernet backbone. The SAN performance tests will be done from the DAS server.
Each test will be run three times, with the best of the three presented: a slight advantage, but you'll get to see a capture of the actual results instead of a calculated average. Figure 28 shows the results of the plain old NAS test:
Figure 28: NAS Performance test results
You may remember that when we formatted the 3Ware RAID array, we selected a stripe size of 256K, largely because the array is going to store compressed backups and media files, in other words, large files. You can see the performance hit we take here. Media performance is outstanding, but the benchmarks built around small files (Content Creation, Office Productivity) suffered. The oddest result was File Copy To NAS, which varied from 59 to 38 in our tests.
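One simplified way to see why a large stripe favors big files is to look at how much of each chunk-sized I/O is actually useful data. The sketch below is a deliberately crude model, not how any particular controller behaves: it assumes the array transfers whole 256 KB chunks, and the file sizes are illustrative, not taken from NASPT:

```python
STRIPE_KB = 256  # the chunk size we chose when creating the array


def stripe_efficiency(file_kb: int, stripe_kb: int = STRIPE_KB) -> float:
    """Fraction of the chunk-aligned I/O that is useful file data.

    Crude model: assumes the array moves data in whole chunks, so a
    small file still costs a full 256 KB chunk of transfer.
    """
    io_kb = max(file_kb, stripe_kb)
    # Round partial chunks up to whole chunks.
    io_kb = -(-io_kb // stripe_kb) * stripe_kb
    return file_kb / io_kb


# A 64 KB office-sized file uses only a quarter of the chunk moved...
small = stripe_efficiency(64)          # 0.25
# ...while a 10 MB media file wastes essentially nothing.
large = stripe_efficiency(10 * 1024)   # 1.0
print(f"64 KB file: {small:.0%} useful; 10 MB file: {large:.0%} useful")
```

In this toy model, small-file workloads like Office Productivity pay for data they never use, while large sequential media transfers run at full efficiency, which matches the shape of our results.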
Let’s see how these numbers compare to some current SNB chart leaders in Figure 29.
Figure 29: NAS Performance comparison
Other than the odd File Copy To NAS results and poor performance on the small-file-centric Office Productivity benchmark, we stayed with the pack throughout, besting the charts in directory copy from our NAS and in the media benchmarks, which is wholly expected.
Other than some disappointments with Openfiler, which we’ll cover in our conclusion, the NAS build was very straightforward. All of our components went together without a hitch and performance was more than acceptable – especially given the fact that the next closest consumer NAS in capacity is $1700 and delivers less performance. I don’t know about you, but I’m looking forward to the next set of tests.
In the next part of our series, we’ll buy and install our fiber HBAs, and configure Old Shuck as a SAN. Will it work? Will we be able to hit our price and performance goals? Stay tuned…