Updated 5/3/2007 - Added cost data
When I started a project to make a collection of over 2000 CDs network-available, I was using a NETGEAR ND520 drive, but it turned out to be incredibly slow and rather unreliable. I was faced with a "buy-or-build" decision for a more reliable server with higher capacity. The lack of a good 1 RU (1.75") OEM chassis that could hold four drives turned me toward pre-built solutions and I selected the Snap Server 4100. (Snap has gone through many incarnations, first as Meridian Data, then a part of Quantum, then an independent company again, and now they are owned by Adaptec. Maybe Quantum was upset that they wouldn't use Quantum-branded drives in their products, instead preferring IBM drives.)
The 4100 is a 1 RU chassis which came with four drives of varying sizes - the smallest model I've seen with "factory" drives had 30GB drives, and the largest had 120GB drives. They can be used in a variety of RAID/JBOD configurations. In a RAID 5 configuration, usable capacity is about 3/4 of the total drive capacity, minus about 1GB for overhead.
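The usable-capacity rule above is easy to sanity-check. This is just a sketch of the arithmetic (the function name and 1GB overhead figure are my own, taken from the estimate in the text, not anything Snap publishes):

```python
def raid5_usable_gb(drive_gb, num_drives, overhead_gb=1.0):
    """RAID 5 dedicates one drive's worth of space to parity,
    striped across all members, so usable space is (N-1) drives
    minus a small amount of filesystem/metadata overhead."""
    return drive_gb * (num_drives - 1) - overhead_gb

# Four 120GB drives in RAID 5:
print(raid5_usable_gb(120, 4))  # 359.0 GB, i.e. roughly 3/4 of the raw 480GB
```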
Since then, I've upgraded the 4100s to larger and larger drives, with the most recent incarnation holding four 120GB drives, for about 350GB of usable storage. These units are quite common on the used market, particularly with smaller capacity drives installed. The Dell 705N is an OEM version of the 4100 which can also be upgraded.
Unfortunately, Snap decided not to support 48-bit addressing on the 4100, which means that drives larger than approximately 137GB can't be used at their full capacity. Despite a number of requests to have this feature implemented on the 4100, Snap finally decided not to do so. I can't really blame them: the 4100 is a nearly-discontinued product, and adding a feature to let customers put larger drives in "on the cheap" really isn't in their best interest.
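That 137GB figure isn't arbitrary: it falls out of the older 28-bit LBA scheme, where a drive address is 28 bits selecting one 512-byte sector. A quick check of the arithmetic:

```python
# 28-bit LBA: 2**28 addressable sectors, 512 bytes per sector
max_bytes = (2 ** 28) * 512
print(max_bytes)          # 137438953472 bytes
print(max_bytes / 10**9)  # ~137.4 decimal gigabytes -- the familiar "137GB limit"
```

48-bit LBA (introduced in ATA/ATAPI-6) raises the sector address to 48 bits, pushing the ceiling far beyond any drive of that era, which is why newer hardware was needed for 250GB and 400GB drives.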
Snap's decision meant that I would either need to add more 4100s to my already large collection (by then I had 13 of the 4100 units) or change to newer hardware which would support larger drives.
I decided to build my own servers this time, for a number of reasons:
- More Snap 4100 units would just take up more space, and I'd need to manually balance usage between the servers.
- Snap 4100 performance isn't that great: about 35Mbit/sec when reading natively (Unix NFS) and about 12Mbit/sec when reading using Windows file sharing.
- Good OEM cases were now available in a variety of sizes.
- By building the system myself, I'd be aware of any hardware or software limitations and could address them myself.
- I could save money by building my own.
When I started planning for this project, the Western Digital WD2500 was the largest single drive available, at 250GB. Western Digital called these drives "Drivezilla", so calling a chassis with 24 of them "RAIDzilla" was an obvious choice.
However, when I actually started pricing parts for the system, Seagate had just announced a 400GB drive with a 5-year warranty. So I changed my plans and decided to build a server with 16 of the 400GB Seagate Barracuda 7200.8 SATA drives (ST3400832AS) instead of 24 of the Western Digital 250GB drives.
I also planned on using FreeBSD as the operating system for the server, but I needed features which were only available in the 5.x release family, which was still in testing at the time. Between that and other delays, the complete system didn't get integrated (that's computer geek for "put together") until early January, 2005. Fortunately, pieces of it had been up and running for six months or so by then, so I had a lot of experience with what would work and what wouldn't when the time came to build the first production server.