Why ZFS?: Introduction


Tim Higgins

Editor’s Note: Infortrend is the only manufacturer we know of using ZFS in a commercially-produced NAS. So we asked them if they would like to make the case for their selection of ZFS, especially since our testing showed the performance penalty that this filesystem is known for.

We realize that this is providing Infortrend a “bully pulpit” that could be viewed as self-promotion. So we welcome opposing fact-based opinions in the Forums or even as its own article(s).


The debut of ZFS in 2004 introduced a new approach to managing data systems. Originally referred to as the “Zettabyte File System”, the 128-bit ZFS offers a theoretical storage capacity of up to 256 quadrillion zettabytes. Not only is ZFS highly scalable, but its built-in data protection features offer benefits exclusive to ZFS.

ZFS does away with the concept of disk volumes, partitions and disk provisioning by adopting pooled storage, where all available hard drives in a system are essentially joined together. The combined bandwidth of the pooled devices is available to ZFS, which effectively maximizes storage space, speed and availability.

Instead of pre-allocating metadata like other file systems, ZFS allocates metadata dynamically as needed, with no space reserved at initialization and no fixed limit on the files or directories the file system can hold. Because of this, the total number of files ZFS can store depends only on the amount of available space in the storage pool.

ZFS vs. Silent Data Corruption

As storage devices get larger and faster, one of the key issues is silent data corruption. “Silent data corruption” refers to errors that corrupt stored data without being detected or reported. This type of corruption can be caused by bit rot, power-supply spikes, firmware bugs, ghost writes or even a faulty cable. Because silent data corruption affects the data and files themselves rather than the drives, neither the disk firmware nor the operating system is able to detect the corrupted data blocks.

Silent data corruption has the potential to damage the file system and thus impacts businesses and organizations. For instance, it has been reported that an e-commerce company was forced to shut down temporarily because of bugs in its file system manager that caused faulty data to be written into its database.

Furthermore, CERN discovered in a series of experiments that of 14.6 petabytes of data generated, approximately 38,000 files were silently corrupted. With single-drive capacities at 4 TB and climbing, and NASes supporting eight or more drives widely available, silent data corruption poses a major threat to data storage unless it is carefully prevented.

ZFS is designed with end-to-end data integrity as its top priority. A checksum stored with every block enables a ZFS-based system to verify data as it is read and to regularly scrub the storage pool. Provided the pool has redundancy via ZFS mirroring or RAID, data affected by silent corruption can be self-healed automatically.
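The checksum-verify-and-heal read path can be sketched in a few lines of Python. Everything here is illustrative, not the ZFS implementation: `MirroredPool` and its methods are invented names, the two in-memory `bytearray`s stand in for mirrored disks, and SHA-256 stands in for the checksum (ZFS defaults to fletcher4 and keeps checksums in parent block pointers, separate from the data):

```python
import hashlib

def checksum(block: bytes) -> str:
    # SHA-256 stands in for ZFS's block checksum here.
    return hashlib.sha256(block).hexdigest()

class MirroredPool:
    """Toy two-way mirror: two copies of one block plus its checksum."""

    def __init__(self, block: bytes):
        # Write the block to both "disks" and record its checksum
        # in metadata kept apart from the data itself.
        self.mirrors = [bytearray(block), bytearray(block)]
        self.expected = checksum(block)

    def read(self) -> bytes:
        for copy in self.mirrors:
            if checksum(bytes(copy)) == self.expected:
                # Good copy found: rewrite any mirror whose checksum
                # fails, i.e. self-heal the silently corrupted copy.
                for other in self.mirrors:
                    if checksum(bytes(other)) != self.expected:
                        other[:] = copy
                return bytes(copy)
        raise IOError("unrecoverable: every copy failed its checksum")

pool = MirroredPool(b"important data")
pool.mirrors[0][0] ^= 0xFF                 # simulate a silently flipped bit on disk 0
assert pool.read() == b"important data"    # read served from the intact mirror
assert checksum(bytes(pool.mirrors[0])) == pool.expected  # disk 0 healed
```

Because the checksum lives outside the block it protects, a corrupted block cannot vouch for itself; that separation is what lets the read path tell a good copy from a bad one.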

ZFS vs. Other File Systems

As mentioned above, ZFS is specifically designed to ensure data integrity by detecting and healing “silent” data corruption. It is also designed “to reduce complexity and ease administration” since it does not need a separate volume manager. Other file systems may have to rely on other software or tools, which can add to management burden.

The “Elephant in the Room”

For all its benefits, ZFS’ main disadvantages are its CPU and memory requirements. For a given CPU and RAM configuration, a ZFS system will be slower than one running a lighter-weight file system. RAM, in particular, is singled out as being key to ZFS deduplication and caching performance; ZFS will actively use all the RAM it can get in order to cache data.

ZFS & SSD Caching

This article by Constantin Gonzalez describes how SSDs can be used to improve ZFS performance by hosting the ZFS intent log and the Adaptive Replacement Cache. The excerpts below capture the key points:

ZFS uses the ZFS intent log (ZIL) as a logging mechanism to store synchronous writes until they are safely written to the main data structure on the storage pool. The speed at which data can be written to the ZIL determines the speed at which synchronous write requests can be serviced. By using a fast SSD as a ZFS log device, you accelerate the ZIL and improve synchronous write performance.
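The idea behind the intent log can be sketched as follows. This is a hedged, simplified model, not ZFS code: `Pool`, `sync_write`, `flush` and `replay_after_crash` are invented names, and real ZFS commits logged writes to the pool in transaction groups rather than on demand. The point it illustrates is that a synchronous write is acknowledged as soon as it is durable on the (fast) log device, with the slower main-pool update deferred:

```python
class Pool:
    """Toy model of a pool with an intent log in front of the main store."""

    def __init__(self):
        self.log = []     # fast log device (the SSD in the article's scenario)
        self.main = {}    # slow main data structure on the pool

    def sync_write(self, key, value):
        # The write is durable once it lands on the log device,
        # so the caller can be unblocked immediately.
        self.log.append((key, value))
        return "ack"

    def flush(self):
        # Later, batched commit of logged writes to the main pool.
        for key, value in self.log:
            self.main[key] = value
        self.log.clear()

    def replay_after_crash(self):
        # If the system crashed before flushing, the log is replayed
        # on the next import, so no acknowledged write is lost.
        self.flush()

pool = Pool()
pool.sync_write("a", 1)     # acknowledged before the main pool is touched
assert pool.main == {}      # main pool not yet updated
pool.replay_after_crash()
assert pool.main == {"a": 1}
```

Since every synchronous write blocks on the log rather than the main pool, the log device’s write latency sets the ceiling on synchronous write performance, which is exactly why moving the ZIL to a fast SSD helps.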

ZFS also has a sophisticated cache called the “Adaptive Replacement Cache” (ARC), where it stores both the most frequently used and the most recently used blocks of data. Blocks that cannot fit in the RAM-based ARC can be stored on SSDs instead, so they can still be delivered quickly to the application when needed. An SSD used as a second-level ARC is therefore called an L2ARC, Level-2 ARC or a “cache device”.
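A two-tier cache of this shape can be sketched in Python. Again this is an illustration under stated assumptions, not the ARC algorithm: the real ARC adaptively balances recency and frequency lists, while this `TwoTierCache` uses plain LRU for the RAM tier. What it demonstrates is the L2ARC idea itself, namely that blocks evicted from RAM are demoted to a larger SSD tier instead of being discarded:

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy RAM (ARC stand-in) + SSD (L2ARC stand-in) block cache."""

    def __init__(self, ram_blocks: int):
        self.ram_blocks = ram_blocks
        self.ram = OrderedDict()   # tier 1: small and fast; LRU order
        self.ssd = {}              # tier 2: larger, slower than RAM

    def put(self, key, block):
        self.ram[key] = block
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_blocks:
            # Evict the least-recently-used block to the SSD tier
            # rather than dropping it outright.
            old_key, old_block = self.ram.popitem(last=False)
            self.ssd[old_key] = old_block

    def get(self, key):
        if key in self.ram:
            self.ram.move_to_end(key)    # refresh recency on a RAM hit
            return self.ram[key]
        if key in self.ssd:
            # SSD-tier hit: still far faster than going to spinning disk;
            # promote the block back into the RAM tier.
            self.put(key, self.ssd.pop(key))
            return self.ram[key]
        return None                      # miss: would be read from the pool

cache = TwoTierCache(ram_blocks=2)
cache.put("a", b"A"); cache.put("b", b"B"); cache.put("c", b"C")
assert "a" in cache.ssd           # "a" was demoted to the SSD tier, not lost
assert cache.get("a") == b"A"     # served from SSD and promoted back to RAM
```

The design choice the L2ARC embodies is a latency ladder: a miss in RAM falls through to the SSD before it ever costs a mechanical disk seek.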

Next time, we’ll be talking about storage pools and data deduplication. See you then!

William Chen is Director of SMB Product Strategy for Infortrend, a Taiwan-based manufacturer of high-performance networked storage systems.
