Putting It All Together
Figure 3: Old Shuck with refreshment
Now that we have our components, we only have three steps to convert a bunch of brown boxes into a full-fledged NAS:
- Assemble: Build out the server, putting the hardware pieces together
- Install: Set up the RAID controller and install Openfiler
- Configure: Define our volumes, set up the network and shares
Assembly
The cobbling-together part went really smoothly, with only a couple of unexpected gotchas.
Following the advice over at AVSForum for the Norco 4020, we removed the fan brace first, installed the motherboard, and screwed the power supply into place. While wiring the power supply to the motherboard, we ran into our first gotcha.
Supermicro server boards of this generation require the 24-pin ATX connector, an 8-pin P8 connector, and a 4-pin ATX12V P4 connector. Our new Corsair power supply, which came in a weird black velveteen bag, could only handle two of the three: there is no modular connector specifically for the P4 jack. So we had to scramble to find an obscure Molex-to-P4 adaptor. Advice to Corsair: drop the bag and add a P4 connector. (Want to keep the bag? Drop the two Molex-to-floppy connectors.)
Figure 4: Rear view of drive bay
There is no documentation for the Norco chassis, which doesn't help when you're faced with the bank of power connectors on the SATA backplane. You only need to fill five; the rest are redundant.
Figure 5: SATA backplane ready for wiring
The other gotcha was the CMOS battery, which I discovered was limping along on its last legs and required replacement. This is a common problem when a motherboard has been sitting fallow for a number of months, as many swapped-out servers have. With a dead CMOS battery, the system wouldn't even blink the power LED.
Additionally, take care to exit the BIOS before powering down. It is tempting to do a power-up test of your just-arrived parts even though you don't yet have everything you need for the build, but powering off while still in the BIOS may require a CMOS reset.
Figure 6: Inside view with Supermicro motherboard mounted
To avoid this, take my advice and wait for all the components to arrive before embarking on the build. I'm sure there is some suitable coital metaphor for the anxiety that ensues while you are left waiting for that last critical thing to come together. Just put the boxes aside and listen to some Blues.
Fair warning about SATA cables: buy simple, flat drive cables. Odd angles, latches, and oversized heads often don't play well with connector-dense RAID cards. When connecting your cables, threading them through the fan brace to the RAID card before mounting either the card or the brace eases things along.
Figure 7: Inside view of full assembly
You are going to want to put the RAID card in slot #1, which runs at 100 MHz, leaving slot #3 for the fiber card. If you put the card in the faster slot, you are probably going to see a blast from the past: an IRQ conflict error. Disabling the onboard SCSI resolves this, and it is probably a good idea anyway, since it is an extra driver layer we won't be using.
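If you want to double-check for sharing after shuffling cards around, a quick look at /proc/interrupts from any Linux live CD will do it. Here is a minimal sketch, assuming a Linux environment; the `shared_irqs` helper is mine, not part of Openfiler, and the parse is rough but good enough for a sanity check:

```python
#!/usr/bin/env python3
# Minimal sketch, assuming a Linux environment: list IRQ lines that
# are claimed by more than one device in /proc/interrupts, the kind
# of sharing behind the conflict described above.

def shared_irqs(path="/proc/interrupts"):
    conflicts = {}
    with open(path) as f:
        f.readline()  # skip the per-CPU header row
        for line in f:
            head, _, rest = line.partition(":")
            if not head.strip().isdigit():
                continue  # skip NMI/LOC and other summary rows
            # Device names are comma-separated at the end of the line;
            # the first chunk also carries counts, chip, and trigger
            # type, so keep only its last whitespace-separated token.
            devices = [d.strip() for d in rest.split(",")]
            devices[0] = devices[0].split()[-1]
            if len(devices) > 1:
                conflicts[int(head)] = devices
    return conflicts

if __name__ == "__main__":
    for irq, devs in sorted(shared_irqs().items()):
        print(f"IRQ {irq:3d} shared by: {', '.join(devs)}")
```

Seeing the RAID card and another device listed on the same IRQ line is the tell-tale sign that it's time to move cards or disable onboard hardware.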
On the case side, take care with the Molex connectors; they are a bit flimsy. When connecting them, treat them gingerly and maybe tie them down. I had a near meltdown when a loose connection took out the fan brace.
The drive caddies are not the tool-less tension type, and putting in the four eyeglass-sized screws needed to mount each drive is a little time-consuming.
Figure 8: Old Shuck with nine drives
Important! Even if you have a fiber card already, do not install it yet. It will cause conflicts at this point.