ZFS explained

ZFS
Variants: Oracle ZFS, OpenZFS
Developer: Sun Microsystems originally, Oracle Corporation since 2010, OpenZFS since 2013
Introduced with: OpenSolaris
Directory structure: Extensible hash table
Max filename size: 255 ASCII characters (fewer for multibyte character standards such as Unicode)
Max volume size: 256 trillion yobibytes (2^128 bytes)
Max capacity: 2^128 bytes
Max file size: 16 exbibytes (2^64 bytes)
Forks/streams: Yes (called "extended attributes", but they are full-fledged streams)
Attributes: POSIX, extended attributes
File system permissions: Unix permissions, NFSv4 ACLs
Compression: Yes
Data deduplication: Yes
Encryption: Yes
Copy-on-write: Yes

ZFS (previously Zettabyte File System) is a file system with volume management capabilities. It began as part of the Sun Microsystems Solaris operating system in 2001. Large parts of Solaris, including ZFS, were published under an open source license as OpenSolaris for around 5 years from 2005, before being placed under a closed source license when Oracle Corporation acquired Sun in 2009–2010. During 2005 to 2010, the open source version of ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD. In 2010, the illumos project forked a recent version of OpenSolaris, including ZFS, to continue its development as an open source project. In 2013, OpenZFS was founded to coordinate the development of open source ZFS.[2] [3] [4] OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems.[5] [6] [7]

Overview

The management of stored data generally involves two aspects: the physical volume management of one or more block storage devices (such as hard drives and SD cards), including their organization into logical block devices as VDEVs (ZFS Virtual Device)[8] as seen by the operating system (often involving a volume manager, RAID controller, array manager, or suitable device driver); and the management of data and files that are stored on these logical block devices (a file system or other data storage).

Example: A RAID array of 2 hard drives and an SSD caching disk is controlled by Intel's RST system, part of the chipset and firmware built into a desktop computer. The Windows user sees this as a single volume, containing an NTFS-formatted drive of their data, and NTFS is not necessarily aware of the manipulations that may be required (such as reading from/writing to the cache drive or rebuilding the RAID array if a disk fails). The management of the individual devices and their presentation as a single device is distinct from the management of the files held on that apparent device.

ZFS is unusual because, unlike most other storage systems, it unifies both of these roles and acts as both the volume manager and the file system. Therefore, it has complete knowledge of both the physical disks and volumes (including their status, condition, and logical arrangement into volumes) as well as of all the files stored on them. ZFS is designed to ensure (subject to suitable hardware) that data stored on disks cannot be lost due to physical errors, misprocessing by the hardware or operating system, or bit rot events and data corruption that may happen over time. Its complete control of the storage system is used to ensure that every step, whether related to file management or disk management, is verified, confirmed, corrected if needed, and optimized, in a way that the storage controller cards and separate volume and file managers cannot achieve.

ZFS also includes a mechanism for dataset and pool-level snapshots and replication, including snapshot cloning, which is described by the FreeBSD documentation as one of its "most powerful features" with functionality that "even other file systems with snapshot functionality lack".[9] Very large numbers of snapshots can be taken without degrading performance, allowing snapshots to be used prior to risky system operations and software changes, or an entire production ("live") file system to be fully snapshotted several times an hour in order to mitigate data loss due to user error or malicious activity. Snapshots can be rolled back "live" or previous file system states can be viewed, even on very large file systems, leading to savings in comparison to formal backup and restore processes. Snapshots can also be cloned to form new independent file systems. ZFS also has the ability to take a pool level snapshot (known as a "checkpoint"), which allows rollback of operations that may affect the entire pool's structure or that add or remove entire datasets.

History

See also: Solaris (operating system), OpenSolaris, OpenIndiana, illumos and Sun Microsystems.

2004-2010: Development at Sun Microsystems

In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: Berkeley Software Distribution, UNIX System V, and Xenix. This became Unix System V Release 4 (SVR4).[10] The project was released under the name Solaris, which became the successor to SunOS 4 (although SunOS 4.1.x micro releases were retroactively named Solaris 1).[11]

ZFS was designed and implemented by a team at Sun led by Jeff Bonwick, Bill Moore,[12] and Matthew Ahrens. It was announced on September 14, 2004,[13] but development started in 2001.[14] Source code for ZFS was integrated into the main trunk of Solaris development on October 31, 2005[15] and released for developers as part of build 27 of OpenSolaris on November 16, 2005. In June 2006, Sun announced that ZFS was included in the mainstream 6/06 update to Solaris 10.[16]

Solaris was originally developed as proprietary software, but Sun Microsystems was an early commercial proponent of open source software and in June 2005 released most of the Solaris codebase under the CDDL license and founded the OpenSolaris open-source project.[17] In Solaris 10 6/06 ("U2"), Sun added the ZFS file system and frequently updated ZFS with new features during the next 5 years. ZFS was ported to Linux, Mac OS X (continued as MacZFS), and FreeBSD, under this open source license.

The name at one point was said to stand for "Zettabyte File System",[18] but by 2006, the name was no longer considered to be an abbreviation.[19] A ZFS file system can store up to 256 quadrillion zettabytes (ZB).

In September 2007, NetApp sued Sun, claiming that ZFS infringed some of NetApp's patents on Write Anywhere File Layout. Sun counter-sued in October the same year claiming the opposite. The lawsuits were ended in 2010 with an undisclosed settlement.[20]

2010-current: Development at Oracle, OpenZFS

Ported versions of ZFS began to appear in 2005. After the Sun acquisition by Oracle in 2010, Oracle's version of ZFS became closed source, and the development of open-source versions proceeded independently, coordinated by OpenZFS from 2013.

Features

Summary

Examples of features specific to ZFS, described in the sections below, include end-to-end data integrity checksumming, RAID-Z, resilvering and scrub, native encryption, copy-on-write snapshots and clones, snapshot send/receive, and deduplication.

Data integrity

One major feature that distinguishes ZFS from other file systems is that it is designed with a focus on data integrity by protecting the user's data on disk against silent data corruption caused by data degradation, power surges (voltage spikes), bugs in disk firmware, phantom writes (the previous write did not make it to disk), misdirected reads/writes (the disk accesses the wrong block), DMA parity errors between the array and server memory or from the driver (since the checksum validates data inside the array), driver errors (data winds up in the wrong buffer inside the kernel), accidental overwrites (such as swapping to a live file system), etc.

A 1999 study showed that neither the then-major and widespread filesystems (such as UFS, Ext,[21] XFS, JFS, or NTFS) nor hardware RAID (which has some issues with data integrity) provided sufficient protection against data corruption problems.[22] [23] [24] [25] Initial research indicates that ZFS protects data better than earlier efforts.[26] [27] It is also faster than UFS[28] [29] and can be seen as its replacement.

Within ZFS, data integrity is achieved by using a Fletcher-based checksum or a SHA-256 hash throughout the file system tree.[30] Each block of data is checksummed and the checksum value is then saved in the pointer to that block—rather than at the actual block itself. Next, the block pointer is checksummed, with the value being saved at its pointer. This checksumming continues all the way up the file system's data hierarchy to the root node, which is also checksummed, thus creating a Merkle tree. In-flight data corruption or phantom reads/writes (the data written/read checksums correctly but is actually wrong) are undetectable by most filesystems as they store the checksum with the data. ZFS stores the checksum of each block in its parent block pointer so that the entire pool self-validates.
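The checksum-in-parent scheme can be sketched in a few lines of Python. This is an illustrative model only (SHA-256 over concatenated child checksums, not the actual ZFS block-pointer layout), and the name `merkle_root` is invented for the example:

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # SHA-256 stands in for ZFS's selectable checksums (Fletcher/SHA-256).
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Checksums live with the parent pointer, not the data, so hashing
    pairs of child checksums upward yields a single root value."""
    level = [checksum(b) for b in blocks]
    while len(level) > 1:
        level = [checksum(b"".join(level[i:i + 2]))
                 for i in range(0, len(level), 2)]
    return level[0]

good = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(good)

# A single changed byte in any leaf changes the root checksum, which is
# how the entire pool "self-validates" from the top down.
corrupted = [b"block-0", b"block-1", b"Block-2", b"block-3"]
assert merkle_root(corrupted) != root
```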

When a block is accessed, regardless of whether it is data or meta-data, its checksum is calculated and compared with the stored checksum value of what it "should" be. If the checksums match, the data are passed up the programming stack to the process that asked for it; if the values do not match, then ZFS can heal the data if the storage pool provides data redundancy (such as with internal mirroring), assuming that the copy of data is undamaged and with matching checksums.[31] It is optionally possible to provide additional in-pool redundancy by specifying copies=2 (or copies=3), which means that data will be stored twice (or three times) on the disk, effectively halving (or, for copies=3, reducing to one-third) the storage capacity of the disk.[32] Additionally, some kinds of data used by ZFS to manage the pool are stored multiple times by default for safety, even with the default copies=1 setting.

If other copies of the damaged data exist or can be reconstructed from checksums and parity data, ZFS will use a copy of the data (or recreate it via a RAID recovery mechanism) and recalculate the checksum—ideally resulting in the reproduction of the originally expected value. If the data passes this integrity check, the system can then update all faulty copies with known-good data and redundancy will be restored.
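A minimal sketch of this verify-and-heal path, assuming a simple mirrored pair of replicas and a checksum held in the parent block pointer (the function and variable names are illustrative, not ZFS internals):

```python
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def read_with_self_heal(copies, expected):
    """Return a copy whose checksum matches the parent pointer's value,
    and repair any replica that does not match."""
    good = next((c for c in copies if checksum(c) == expected), None)
    if good is None:
        # No valid replica left: ZFS would report unrecoverable corruption.
        raise IOError("no valid copy of block")
    for i, c in enumerate(copies):
        if checksum(c) != expected:
            copies[i] = good        # rewrite the damaged replica
    return good

replicas = [b"payload", b"payXoad"]    # second replica silently corrupted
stored = checksum(b"payload")          # checksum held in the parent pointer

data = read_with_self_heal(replicas, stored)
assert data == b"payload"
assert replicas[1] == b"payload"       # the bad copy has been healed
```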

If there are no copies of the damaged data, ZFS puts the pool in a faulted state,[33] preventing its future use and providing no documented ways to recover pool contents.

Consistency of data held in memory, such as cached data in the ARC, is not checked by default, as ZFS is expected to run on enterprise-quality hardware with error correcting RAM. However, the capability to check in-memory data exists and can be enabled using "debug flags".[34]

RAID ("RAID-Z")

For ZFS to be able to guarantee data integrity, it needs multiple copies of the data, usually spread across multiple disks. This is typically achieved by using either a RAID controller or so-called "soft" RAID (built into a file system).

Avoidance of hardware RAID controllers

While ZFS can work with hardware RAID devices, it will usually work more efficiently and with greater data protection if it has raw access to all storage devices. ZFS relies on the disk for an honest view to determine the moment data is confirmed as safely written and has numerous algorithms designed to optimize its use of caching, cache flushing, and disk handling.

Disks connected to the system using a hardware, firmware, or other "soft" RAID, or any other controller that modifies the ZFS-to-disk I/O path, will affect ZFS performance and data integrity. If a third-party device performs caching or presents drives to ZFS as a single system without the low-level view ZFS relies upon, there is a much greater chance of the system performing less optimally, of ZFS failing to prevent failures, of slower recovery from failures, or of data loss due to a write failure. For example, if a hardware RAID card is used, ZFS may not be able to determine the condition of disks, determine if the RAID array is degraded or rebuilding, detect all data corruption, place data optimally across the disks, make selective repairs, control how repairs are balanced with ongoing use, or make repairs that ZFS could usually undertake. The hardware RAID card will interfere with ZFS's algorithms. RAID controllers also usually add controller-dependent data to the drives, which prevents software RAID from accessing the user data. In the case of a hardware RAID controller failure, it may be possible to read the data with another compatible controller, but this is not always possible and a replacement may not be available. Alternate hardware RAID controllers may not understand the original manufacturer's custom data required to manage and restore an array.

Unlike most other systems where RAID cards or similar hardware can offload resources and processing to enhance performance and reliability, with ZFS it is strongly recommended that these methods not be used as they typically reduce the system's performance and reliability.

If disks must be attached through a RAID or other controller, it is recommended to minimize the amount of processing done in the controller by using a plain HBA (host bus adapter) or a simple fanout card, or by configuring the card in JBOD mode (i.e. turning off RAID and caching functions), to allow devices to be attached with minimal changes to the ZFS-to-disk I/O pathway. A RAID card in JBOD mode may still interfere if it has a cache or, depending upon its design, may detach drives that do not respond in time (as has been seen with many energy-efficient consumer-grade hard drives); such setups may require Time-Limited Error Recovery (TLER)/CCTL/ERC-enabled drives to prevent drive dropouts, so not all cards are suitable even with RAID functions disabled.[35]

ZFS's approach: RAID-Z and mirroring

Instead of hardware RAID, ZFS employs "soft" RAID, offering RAID-Z (parity based like RAID 5 and similar) and disk mirroring (similar to RAID 1). The schemes are highly flexible.

RAID-Z is a data/parity distribution scheme like RAID-5, but uses dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read-modify-write sequence.[36]

As all stripes are of different sizes, RAID-Z reconstruction has to traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this.

In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering "self-healing data": when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor.
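The self-healing read can be illustrated with single-parity XOR in the spirit of RAID-Z1. The block checksum identifies which disk returned bad data by testing each candidate reconstruction; this is a toy model, not the actual RAID-Z on-disk layout:

```python
import hashlib
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One logical block striped over three data disks plus XOR parity
# (single parity, RAID-Z1 style; a toy layout for illustration).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor, data)
block_checksum = hashlib.sha256(b"".join(data)).digest()

# A disk silently returns bad data: the block checksum fails, so try
# reconstructing each disk from parity until the checksum matches.
returned = [b"AAAA", b"XXXX", b"CCCC"]
assert hashlib.sha256(b"".join(returned)).digest() != block_checksum

repaired = None
for i in range(len(returned)):
    others = [d for j, d in enumerate(returned) if j != i]
    rebuilt = reduce(xor, others + [parity])
    candidate = returned[:i] + [rebuilt] + returned[i + 1:]
    if hashlib.sha256(b"".join(candidate)).digest() == block_checksum:
        repaired = candidate     # disk i was the one returning bad data
        break

assert repaired == data          # good data returned to the requestor
```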

RAID-Z and mirroring do not require any special hardware: they do not need NVRAM for reliability, and they do not need write buffering for good performance or data protection. With RAID-Z, ZFS provides fast, reliable storage using cheap, commodity disks.

There are five different RAID-Z modes: striping (similar to RAID 0, offers no redundancy), RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), RAID-Z3 (a RAID 7 configuration, allows three disks to fail), and mirroring (similar to RAID 1, allows all but one disk to fail).[37]

The need for RAID-Z3 arose in the early 2000s as multi-terabyte capacity drives became more common. This increase in capacity—without a corresponding increase in throughput speeds—meant that rebuilding an array due to a failed drive could "easily take weeks or months" to complete. During this time, the older disks in the array would be stressed by the additional workload, which could result in data corruption or drive failure. By increasing parity, RAID-Z3 reduces the chance of data loss by simply increasing redundancy.[38]

Resilvering and scrub (array syncing and integrity checking)

ZFS has no tool equivalent to fsck (the standard Unix and Linux data checking and repair tool for file systems).[39] Instead, ZFS has a built-in scrub function which regularly examines all data and repairs silent corruption and other problems. Unlike fsck, scrub runs on a mounted, in-use pool and verifies the checksums of all data, not just filesystem metadata.

The official recommendation from Sun/Oracle is to scrub enterprise-level disks once a month, and cheaper commodity disks once a week.[40] [41]

Capacity

ZFS is a 128-bit file system,[42] so it can address 1.84 × 10^19 times more data than 64-bit systems such as Btrfs. The maximum limits of ZFS are designed to be so large that they should never be encountered in practice. For instance, fully populating a single zpool with 2^128 bits of data would require 3 × 10^24 TB hard disk drives.[43]
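The quoted factor follows directly from the address widths:

```python
# 128-bit vs 64-bit addressing: the ratio of addressable data.
ratio = 2**128 // 2**64
assert ratio == 2**64                          # the factor is exactly 2^64
assert abs(ratio - 1.84e19) / 1.84e19 < 0.01   # ~1.84 x 10^19, as quoted
```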

Some theoretical limits in ZFS are a maximum file size of 16 exbibytes (2^64 bytes), a maximum volume size of 2^128 bytes, and a maximum filename length of 255 ASCII characters.

Encryption

With Oracle Solaris, the encryption capability in ZFS[45] is embedded into the I/O pipeline. During writes, a block may be compressed, encrypted, checksummed and then deduplicated, in that order. The policy for encryption is set at the dataset level when datasets (file systems or ZVOLs) are created. The wrapping keys provided by the user/administrator can be changed at any time without taking the file system offline. The default behaviour is for the wrapping key to be inherited by any child data sets. The data encryption keys are randomly generated at dataset creation time. Only descendant datasets (snapshots and clones) share data encryption keys.[46] A command to switch to a new data encryption key for the clone or at any time is provided—this does not re-encrypt already existing data, instead utilising an encrypted master-key mechanism.

The encryption feature is also fully integrated into OpenZFS 0.8.0, available for Debian and Ubuntu Linux distributions.[47]

There have been anecdotal end-user reports of failures when using ZFS native encryption. An exact cause has not been established.[48] [49]

Read/write efficiency

ZFS will automatically allocate data storage across all vdevs in a pool (and all devices in each vdev) in a way that generally maximises the performance of the pool. ZFS also updates its write strategy to take account of new disks as they are added to a pool.

As a general rule, ZFS allocates writes across vdevs based on the free space in each vdev. This ensures that vdevs which already hold proportionately less data are given more writes when new data is to be stored, so that as the pool fills, some vdevs do not become full and force writes onto a limited number of devices. It also means that when data is read (and reads are much more frequent than writes in most uses), different parts of the data can be read from as many disks as possible at the same time, giving much higher read performance. Therefore, as a general rule, pools and vdevs should be managed, and new storage added, so that the situation does not arise that some vdevs in a pool are almost full while others are almost empty, as this makes the pool less efficient.
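A toy model of this allocation bias, assuming a simple pick-the-emptiest-vdev rule (the real allocator weighs vdevs proportionally to free space and considers other factors):

```python
def pick_vdev(vdevs):
    # Bias writes toward the vdev with the most free space (a toy rule;
    # the real allocator weights vdevs proportionally to free space).
    return max(vdevs, key=lambda v: v["free"])

def allocate(vdevs, blocks):
    for _ in range(blocks):
        v = pick_vdev(vdevs)
        v["free"] -= 1
        v["used"] += 1

# A nearly full vdev alongside a freshly added, mostly empty one.
vdevs = [{"name": "vdev0", "free": 10, "used": 90},
         {"name": "vdev1", "free": 100, "used": 0}]
allocate(vdevs, 60)

assert vdevs[1]["used"] == 60   # new writes all landed on the emptier vdev
assert vdevs[0]["used"] == 90   # the nearly full vdev was spared
```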

Free space in ZFS tends to become fragmented with usage. ZFS does not have a mechanism for defragmenting free space. There are anecdotal end-user reports of diminished performance when high free-space fragmentation is coupled with disk space over-utilization.[50] [51]

Other features

Storage devices, spares, and quotas

Pools can have hot spares to compensate for failing disks. When mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the case of the failure of an entire chassis.

Storage pool composition is not limited to similar devices, but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space to datasets (file system instances or ZVOLs) as needed. Arbitrary storage device types can be added to existing pools to expand their size.[52]

The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.

Caching mechanisms: ARC, L2ARC, Transaction groups, ZIL, SLOG, Special VDEV

ZFS uses different layers of disk cache to speed up read and write operations. Ideally, all data should be stored in RAM, but that is usually too expensive. Therefore, data is automatically cached in a hierarchy to optimize performance versus cost;[53] these are often called "hybrid storage pools".[54] Frequently accessed data will be stored in RAM, and less frequently accessed data can be stored on slower media, such as solid-state drives (SSDs). Data that is not often accessed is not cached and left on the slow hard drives. If old data is suddenly read a lot, ZFS will automatically move it to SSDs or to RAM.

ZFS caching mechanisms include one each for reads and writes, and in each case, two levels of caching can exist, one in computer memory (RAM) and one on fast storage (usually solid-state drives (SSDs)), for a total of four caches.

First-level cache (in RAM):
Read cache: Known as ARC, due to its use of a variant of the adaptive replacement cache (ARC) algorithm. RAM is always used for caching, so this level is always present. The efficiency of the ARC algorithm means that disks will often not need to be accessed, provided the ARC size is sufficiently large. If RAM is too small there will hardly be any ARC at all; in this case, ZFS always needs to access the underlying disks, which impacts performance considerably.
Write cache: Handled by means of "transaction groups" – writes are collated over a short period (typically 5–30 seconds) up to a given limit, with each group being written to disk ideally while the next group is being collated. This allows writes to be organized more efficiently for the underlying disks, at the risk of minor data loss of the most recent transactions upon power interruption or hardware fault. In practice the power-loss risk is avoided by ZFS write journaling and by the SLOG/ZIL second-tier write cache pool (see below), so writes will only be lost if a write failure happens at the same time as a total loss of the second-tier SLOG pool, and then only when settings related to synchronous writing and SLOG use are set in a way that would allow such a situation to arise. If data is received faster than it can be written, data receipt is paused until the disks can catch up.

Second-level cache and intent log (on fast storage devices, which can be added or removed from a "live" system without disruption in current versions of ZFS, although not always in older versions):
Read cache: Known as L2ARC ("Level 2 ARC"), optional. ZFS will cache as much data in L2ARC as it can, which can be tens or hundreds of gigabytes in many cases. L2ARC will also considerably speed up deduplication if the entire deduplication table can be cached in L2ARC. It can take several hours to fully populate the L2ARC from empty (before ZFS has decided which data are "hot" and should be cached). If the L2ARC device is lost, all reads will go out to the disks, which slows down performance, but nothing else will happen (no data will be lost).
Write cache: Known as SLOG or ZIL ("ZFS Intent Log") – the terms are often used incorrectly. A SLOG (secondary log device) is an optional dedicated cache on a separate device for recording writes in the event of a system issue. If a SLOG device exists, it will be used for the ZFS Intent Log as a second-level log, and if no separate cache device is provided, the ZIL will be created on the main storage devices instead. The SLOG thus, technically, refers to the dedicated disk to which the ZIL is offloaded in order to speed up the pool. Strictly speaking, ZFS does not use the SLOG device to cache its disk writes. Rather, it uses the SLOG to ensure writes are captured to a permanent storage medium as quickly as possible, so that in the event of power loss or write failure, no data which was acknowledged as written will be lost. The SLOG device allows ZFS to speedily store writes and quickly report them as written, even for storage devices such as HDDs that are much slower. In the normal course of activity, the SLOG is never referred to or read, and it does not act as a cache; its purpose is to safeguard data in flight during the few seconds taken for collation and "writing out", in case the eventual write were to fail.

If all goes well, the storage pool will be updated at some point within the next 5 to 60 seconds, when the current transaction group is written out to disk (see above), at which point the saved writes on the SLOG will simply be ignored and overwritten. If the write eventually fails, or the system suffers a crash or fault preventing its writing, then ZFS can identify all the writes that it has confirmed were written by reading back the SLOG (the only time it is read from), and use this to completely repair the data loss.

This becomes crucial if a large number of synchronous writes take place (such as with ESXi, NFS and some databases),[55] where the client requires confirmation of successful writing before continuing its activity; the SLOG allows ZFS to confirm writing is successful much more quickly than if it had to write to the main store every time, without the risk involved in misleading the client as to the state of data storage. If there is no SLOG device then part of the main data pool will be used for the same purpose, although this is slower.

If the log device itself is lost, it is possible to lose the latest writes, therefore the log device should be mirrored. In earlier versions of ZFS, loss of the log device could result in loss of the entire zpool, although this is no longer the case. Therefore, one should upgrade ZFS if planning to use a separate log device.

A number of other caches, cache divisions, and queues also exist within ZFS. For example, each VDEV has its own data cache, and the ARC cache is divided between data stored by the user and metadata used by ZFS, with control over the balance between these.

Special VDEV Class

In OpenZFS 0.8 and later, it is possible to configure a Special VDEV class to preferentially store filesystem metadata, and optionally the Data Deduplication Table (DDT) and small filesystem blocks. This makes it possible, for example, to create a Special VDEV on fast solid-state storage to hold the metadata, while the regular file data is stored on spinning disks. This speeds up metadata-intensive operations such as filesystem traversal, scrub, and resilver, without the expense of storing the entire filesystem on solid-state storage.

Copy-on-write transactional model

ZFS uses a copy-on-write transactional object model. All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256)[56] of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and ZIL (intent log) write cache is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (see Merkle signature scheme).
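The copy-on-write update path can be sketched as follows; `CowStore` and its flat, single-level "tree" are inventions for illustration, not ZFS's actual object layout:

```python
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class CowStore:
    """Copy-on-write sketch: blocks are never overwritten in place; an
    update allocates a new block and rewrites the (here, single-level)
    pointer structure. Old roots remain valid, which is what makes
    snapshots cheap."""
    def __init__(self, blocks):
        self.storage = {}        # address -> immutable block contents
        self.next_addr = 0
        self.root = [self._alloc(b) for b in blocks]   # (addr, checksum) pairs

    def _alloc(self, data):
        addr = self.next_addr
        self.next_addr += 1
        self.storage[addr] = data
        return (addr, checksum(data))

    def write(self, index, data):
        old_root = self.root
        new_root = list(old_root)
        new_root[index] = self._alloc(data)   # the old block is left intact
        self.root = new_root
        return old_root                        # usable as a snapshot root

store = CowStore([b"old-0", b"old-1"])
snapshot = store.write(1, b"new-1")

# The live tree sees the new data; the snapshot still reaches the old.
assert store.storage[store.root[1][0]] == b"new-1"
assert store.storage[snapshot[1][0]] == b"old-1"
```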

Snapshots and clones

An advantage of copy-on-write is that, when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. ZFS snapshots are consistent (they reflect the entire data as it existed at a single point in time) and can be created extremely quickly, since all the data composing the snapshot is already stored; the entire storage pool is often snapshotted several times per hour. They are also space efficient, since any unchanged data is shared among the file system and its snapshots. Snapshots are inherently read-only, ensuring they will not be modified after creation, although they should not be relied on as a sole means of backup. Entire snapshots can be restored, as can individual files and directories within them.

Writeable snapshots ("clones") can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist. This is an implementation of the Copy-on-write principle.
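Block sharing between an origin filesystem and its clone can be modelled with a shared address map (all names here are illustrative, not ZFS structures):

```python
# Shared block map: address -> contents.
storage = {0: b"shared-a", 1: b"shared-b"}
next_addr = 2

origin = {"file": [0, 1]}
clone = {"file": list(origin["file"])}   # a clone copies pointers, not data

def cow_write(fs, index, data):
    """Writing through either filesystem allocates a private block;
    untouched blocks stay shared no matter how many clones exist."""
    global next_addr
    storage[next_addr] = data
    fs["file"][index] = next_addr
    next_addr += 1

cow_write(clone, 0, b"clone-a")          # the clone diverges at block 0 only

assert storage[origin["file"][0]] == b"shared-a"   # origin unaffected
assert storage[clone["file"][0]] == b"clone-a"
assert origin["file"][1] == clone["file"][1]       # block 1 still shared
```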

Sending and receiving snapshots

ZFS file systems can be moved to other pools, including on remote hosts over the network, as the send command creates a stream representation of the file system's state. This stream can either describe the complete contents of the file system at a given snapshot, or it can be a delta between snapshots. Computing the delta stream is very efficient, and its size depends on the number of blocks changed between the snapshots. This provides an efficient strategy, e.g., for synchronizing offsite backups or high-availability mirrors of a pool.
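The delta-stream idea reduces to comparing two snapshot block maps. A toy sketch (deletions and the actual `zfs send` stream format are omitted):

```python
def delta(snap_old, snap_new):
    """Blocks changed or added between two snapshots. The stream size
    scales with the change set, not with the filesystem size."""
    return {addr: data for addr, data in snap_new.items()
            if snap_old.get(addr) != data}

def receive(target, stream):
    target.update(stream)

snap1 = {0: b"a", 1: b"b", 2: b"c"}
snap2 = {0: b"a", 1: b"b2", 2: b"c", 3: b"d"}

stream = delta(snap1, snap2)
assert set(stream) == {1, 3}     # only changed/new blocks travel

remote = dict(snap1)             # the offsite copy already holds snap1
receive(remote, stream)
assert remote == snap2           # incremental receive reproduces snap2
```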

Dynamic striping

Dynamic striping across all devices to maximize throughput means that as additional devices are added to the zpool, the stripe width automatically expands to include them; thus, all disks in a pool are used, which balances the write load across them.[57]

Variable block sizes

ZFS uses variable-sized blocks, with 128 KB as the default size. Available features allow the administrator to tune the maximum block size which is used, as certain workloads do not perform well with large blocks. If data compression is enabled, variable block sizes are used. If a block can be compressed to fit into a smaller block size, the smaller size is used on the disk to use less storage and improve IO throughput (though at the cost of increased CPU use for the compression and decompression operations).[58]
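A sketch of the store-smaller-if-it-compresses rule, assuming zlib compression and a 512-byte allocation granularity (both are assumptions for this example; ZFS supports several compression algorithms and sector sizes):

```python
import os
import zlib

SECTOR = 512  # assumed allocation granularity for this sketch

def stored_size(block: bytes) -> int:
    """On-disk size for a block: keep the compressed form only if it is
    smaller, then round up to sector granularity (a simplification of
    how ZFS stores a compressed block in a smaller allocation)."""
    compressed = zlib.compress(block)
    size = min(len(compressed), len(block))
    return -(-size // SECTOR) * SECTOR   # ceiling to a sector multiple

record = 128 * 1024                              # ZFS's default 128 KB recordsize
assert stored_size(b"A" * record) < record       # redundant data shrinks on disk
assert stored_size(os.urandom(4096)) == 4096     # incompressible data does not
```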

Lightweight filesystem creation

In ZFS, filesystem manipulation within a storage pool is easier than volume manipulation within a traditional filesystem; the time and effort required to create or expand a ZFS filesystem is closer to that of making a new directory than it is to volume manipulation in some other systems.

Adaptive endianness

Pools and their associated ZFS file systems can be moved between different platform architectures, including systems implementing different byte orders. The ZFS block pointer format stores filesystem metadata in an endian-adaptive way; individual metadata blocks are written with the native byte order of the system writing the block. When reading, if the stored endianness does not match the endianness of the system, the metadata is byte-swapped in memory.
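The endian-adaptive idea, reduced to a sketch: write in native order, record which order was used, and byte-swap on read only if needed. The tag byte here is an invention of this example; ZFS records endianness in the block pointer itself:

```python
import struct

def write_record(value: int, big_endian: bool) -> bytes:
    """Write a metadata word in the writer's native order, tagging the
    record with which order was used (the tag byte is an invention of
    this sketch; ZFS keeps an endianness bit in the block pointer)."""
    fmt = ">Q" if big_endian else "<Q"
    flag = b"B" if big_endian else b"L"
    return flag + struct.pack(fmt, value)

def read_record(raw: bytes) -> int:
    # Byte-swapping happens in memory only when the stored order
    # differs from the reader's native order.
    fmt = ">Q" if raw[:1] == b"B" else "<Q"
    return struct.unpack(fmt, raw[1:])[0]

# A record written on a big-endian system reads back correctly anywhere.
assert read_record(write_record(0x1122334455667788, True)) == 0x1122334455667788
assert read_record(write_record(42, False)) == 42
```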

This does not affect the stored data; as is usual in POSIX systems, files appear to applications as simple arrays of bytes, so applications creating and reading data remain responsible for doing so in a way independent of the underlying system's endianness.

Deduplication

Data deduplication capabilities were added to the ZFS source repository at the end of October 2009,[59] and relevant OpenSolaris ZFS development packages have been available since December 3, 2009 (build 128).

Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage.[60] [61] [62] An accurate assessment of the memory required for deduplication is made by referring to the number of unique blocks in the pool, and the number of bytes on disk and in RAM ("core") required to store each record—these figures are reported by inbuilt commands such as zpool and zdb. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can cause performance to plummet, or result in complete memory starvation. Because deduplication occurs at write-time, it is also very CPU-intensive and this can also significantly slow down a system.
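A back-of-envelope sizing consistent with the figures above, assuming roughly 320 bytes of core memory per unique block (an assumed, commonly quoted round figure; the authoritative numbers for a given pool come from zdb and zpool, as noted):

```python
def ddt_ram_bytes(pool_bytes, avg_block_bytes, bytes_per_entry=320):
    """Estimate deduplication-table RAM: one in-core entry per unique
    block. 320 bytes/entry is an assumed round figure for this sketch."""
    unique_blocks = pool_bytes // avg_block_bytes
    return unique_blocks * bytes_per_entry

TB = 10**12
ram = ddt_ram_bytes(1 * TB, 64 * 1024)   # 1 TB pool, 64 KB average blocks
assert 1e9 <= ram <= 5.1e9               # within the 1-5 GB/TB range cited
```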

Other storage vendors use modified versions of ZFS to achieve very high data compression ratios. Two examples in 2012 were GreenBytes[63] and Tegile.[64] In May 2014, Oracle bought GreenBytes for its ZFS deduplication and replication technology.[65]

As described above, deduplication is usually not recommended because of its heavy resource requirements (especially RAM) and its impact on performance (especially when writing); it is best reserved for the specific circumstances in which the system and data are well suited to this space-saving technique.

Additional capabilities

Limitations

Data recovery

ZFS does not ship with a tool such as fsck, because the file system itself was designed to self-repair. So long as a storage pool was built with sufficient attention to the design of storage and the redundancy of data, a basic tool like fsck was never required. However, if the pool was compromised by poor hardware, inadequate design or redundancy, or misadventure, to the point that ZFS was unable to mount it, there were traditionally no more advanced tools allowing an end user to attempt partial salvage of the stored data from a badly corrupted pool.

Modern ZFS has improved considerably on this situation, and continues to do so.

OpenZFS and ZFS

Oracle Corporation ceased public development of both ZFS and OpenSolaris after its acquisition of Sun in 2010. Some developers forked the last public release of OpenSolaris as the illumos project. Because of its significant advantages, ZFS has been ported to several different platforms, each with differing features and commands. To coordinate the development efforts and avoid fragmentation, OpenZFS was founded in 2013.

According to Matt Ahrens, one of the main architects of ZFS, over 50% of the original OpenSolaris ZFS code has been replaced in OpenZFS with community contributions as of 2019, making “Oracle ZFS” and “OpenZFS” politically and technologically incompatible.[85]

Commercial and open source products

Oracle Corporation, closed source, and forking (from 2010)

In January 2010, Oracle Corporation acquired Sun Microsystems, and quickly discontinued the OpenSolaris distribution and the open source development model.[93] [94] In August 2010, Oracle discontinued providing public updates to the source code of the Solaris OS/Networking repository, effectively turning Solaris 11 back into a closed source proprietary operating system.[95]

In response to the changing landscape of Solaris and OpenSolaris, the illumos project was launched via webinar[96] on August 3, 2010, as a community effort by some core Solaris engineers to continue developing the open source version of Solaris and to complete the open-sourcing of those parts not already open-sourced by Sun.[97] illumos was organized as a foundation, the Illumos Foundation, incorporated in the State of California as a 501(c)(6) trade association. The original plan explicitly stated that illumos would not be a distribution or a fork. However, after Oracle announced the discontinuation of OpenSolaris, plans were made to fork the final version of Solaris ON (the OS/Networking consolidation), allowing illumos to evolve into an operating system of its own.[98] As part of OpenSolaris, an open source version of ZFS was therefore integral to illumos.

ZFS was by then widely used on numerous platforms besides Solaris. Therefore, in 2013, coordination of development work on the open source version of ZFS was passed to an umbrella project, OpenZFS. The OpenZFS framework allows any interested party to collaboratively develop the core ZFS codebase in common, while individually maintaining whatever extra code ZFS requires to function and integrate within its own systems.

Version history

ZFS Filesystem Version | Release | Significant changes
1 | OpenSolaris Nevada[99] build 36 | First release
2 | OpenSolaris Nevada b69 | Enhanced directory entries. In particular, directory entries now store the object type (for example, file, directory, named pipe, and so on) in addition to the object number.
3 | OpenSolaris Nevada b77 | Support for sharing ZFS file systems over SMB. Case insensitivity support. System attribute support. Integrated anti-virus support.
4 | OpenSolaris Nevada b114 | Properties: userquota, groupquota, userused and groupused
5 | OpenSolaris Nevada b137 | System attributes; symlinks now their own object type
ZFS Pool Version | Release | Significant changes
1 | OpenSolaris Nevada b36 | First release
2 | OpenSolaris Nevada b38 | Ditto blocks
3 | OpenSolaris Nevada b42 | Hot spares, double-parity RAID-Z (raidz2), improved RAID-Z accounting
4 | OpenSolaris Nevada b62 | zpool history
5 | OpenSolaris Nevada b62 | gzip compression for ZFS datasets
6 | OpenSolaris Nevada b62 | "bootfs" pool property
7 | OpenSolaris Nevada b68 | ZIL: adds the capability to specify a separate Intent Log device or devices
8 | OpenSolaris Nevada b69 | Ability to delegate zfs(1M) administrative tasks to ordinary users
9 | OpenSolaris Nevada b77 | CIFS server support, dataset quotas
10 | OpenSolaris Nevada b77 | Devices can be added to a storage pool as "cache devices"
11 | OpenSolaris Nevada b94 | Improved zpool scrub / resilver performance
12 | OpenSolaris Nevada b96 | Snapshot properties
13 | OpenSolaris Nevada b98 | Properties: usedbysnapshots, usedbychildren, usedbyrefreservation, and usedbydataset
14 | OpenSolaris Nevada b103 | passthrough-x aclinherit property support
15 | OpenSolaris Nevada b114 | Properties: userquota, groupquota, userused and groupused; also requires FS v4
16 | OpenSolaris Nevada b116 | STMF property support
17 | OpenSolaris Nevada b120 | Triple-parity RAID-Z
18 | OpenSolaris Nevada b121 | ZFS snapshot holds
19 | OpenSolaris Nevada b125 | ZFS log device removal
20 | OpenSolaris Nevada b128 | zle compression algorithm, needed to support the ZFS deduplication properties in ZFS pool version 21, which were released concurrently
21 | OpenSolaris Nevada b128 | Deduplication
22 | OpenSolaris Nevada b128 | zfs receive properties
23 | OpenSolaris Nevada b135 | Slim ZIL
24 | OpenSolaris Nevada b137 | System attributes. Symlinks now their own object type. Also requires FS v5.
25 | OpenSolaris Nevada b140 | Improved pool scrubbing and resilvering statistics
26 | OpenSolaris Nevada b141 | Improved snapshot deletion performance
27 | OpenSolaris Nevada b145 | Improved snapshot creation performance (particularly recursive snapshots)
28 | OpenSolaris Nevada b147 | Multiple virtual device replacements

Note: The Solaris version under development by Sun since the release of Solaris 10 in 2005 was codenamed 'Nevada' and was derived from the OpenSolaris codebase. 'Solaris Nevada' is the codename for the next-generation Solaris OS that was eventually to succeed Solaris 10, and this new code was pulled successively into new OpenSolaris 'Nevada' snapshot builds.[99] OpenSolaris has since been discontinued, and OpenIndiana forked from it.[100] [101] A final build (b134) of OpenSolaris was published by Oracle (2010-Nov-12) as an upgrade path to Solaris 11 Express.

Operating system support

List of operating systems, distributions, and add-ons that support ZFS, the zpool version they support, and the Solaris build they are based on (if any):

OS | Zpool version | Sun/Oracle build # | Comments
Oracle Solaris 11.4 | 49 | 11.4.51 (11.4 SRU 51)[102] |
Oracle Solaris 11.3 | 37 | 0.5.11-0.175.3.1.0.5.0 |
Oracle Solaris 10 1/13 (U11) | 32 | |
Oracle Solaris 11.2 | 35 | 0.5.11-0.175.2.0.0.42.0 |
Oracle Solaris 11 2011.11 | 34 | b175 |
Oracle Solaris Express 11 2010.11 | 31 | b151a | licensed for testing only
OpenSolaris 2009.06 | 14 | b111b |
OpenSolaris (last dev) | 22 | b134 |
OpenIndiana | 5000 | b147 | distribution based on illumos; creates a name clash naming their build code 'b151a'
Nexenta Core 3.0.1 | 26 | b134+ | GNU userland
NexentaStor Community 3.0.1 | 26 | b134+ | up to 18 TB, web admin
NexentaStor Community 3.1.0 | 28 | b134+ | GNU userland
NexentaStor Community 4.0 | 5000 | b134+ | up to 18 TB, web admin
NexentaStor Enterprise | 28 | b134+ | not free, web admin
GNU/kFreeBSD "Squeeze" (unsupported) | 14 | | requires package "zfsutils"
GNU/kFreeBSD "Wheezy-9" (unsupported) | 28 | | requires package "zfsutils"
FreeBSD | 5000 | |
zfs-fuse 0.7.2 | 23 | | suffered from performance issues; defunct
ZFS on Linux 0.6.5.8 | 5000 | | 0.6.0 release candidate has POSIX layer
KQ Infotech's ZFS on Linux | 28 | | defunct; code integrated into LLNL-supported ZFS on Linux
BeleniX 0.8b1 | 14 | b111 | small-size live-CD distribution; once based on OpenSolaris
Schillix 0.7.2 | 28 | b147 | small-size live-CD distribution; as SchilliX-ON 0.8.0 based on OpenSolaris
StormOS "hail" | | | distribution once based on Nexenta Core 2.0+, Debian Linux; superseded by Dyson OS
Jaris | | | Japanese Solaris distribution; once based on OpenSolaris
MilaX 0.5 | 20 | b128a | small-size live-CD distribution; once based on OpenSolaris
FreeNAS 8.0.2 / 8.2 | 15 | |
FreeNAS 8.3.0 | 28 | | based on FreeBSD 8.3
FreeNAS 9.1.0+ | 5000 | | based on FreeBSD 9.1+
XigmaNAS 11.4.0.4/12.2.0.4 | 5000 | | based on FreeBSD 11.4/12.2
Korona 4.5.0 | 22 | b134 | KDE
EON NAS (v0.6) | 22 | b130 | embedded NAS
EON NAS (v1.0beta) | 28 | b151a | embedded NAS
napp-it | 28/5000 | Illumos/Solaris | storage appliance; OpenIndiana (Hipster), OmniOS, Solaris 11, Linux (ZFS management)
OmniOS CE | 28/5000 | illumos-OmniOS branch | minimal stable/LTS storage server distribution based on Illumos, community driven
SmartOS | 28/5000 | Illumos b151+ | minimal live distribution based on Illumos (USB/CD boot); cloud and hypervisor use (KVM)
macOS 10.5, 10.6, 10.7, 10.8, 10.9 | 5000 | | via MacZFS; superseded by OpenZFS on OS X
macOS 10.6, 10.7, 10.8 | 28 | | via ZEVO; superseded by OpenZFS on OS X
NetBSD | 22 | |
MidnightBSD | 6 | |
Proxmox VE | 5000 | | native support since 2014, pve.proxmox.com/wiki/ZFS_on_Linux
Ubuntu Linux 16.04 LTS+ | 5000 | | native support via installable binary module, wiki.ubuntu.com/ZFS
ZFSGuru 10.1.100 | 5000 | |

See also

Bibliography

External links

Notes and References

  1. Web site: ZFS on Linux Licensing . . 2020-05-17.
  2. Web site: The OpenZFS project launches . September 17, 2013 . October 1, 2013 . . https://web.archive.org/web/20131004215341/http://lwn.net/Articles/567090/ . October 4, 2013 . live.
  3. Web site: OpenZFS Announcement . 2013-09-17 . 2013-09-19 . . https://web.archive.org/web/20180402091425/http://open-zfs.org/wiki/Announcement . April 2, 2018 . live.
  4. Web site: open-zfs.org/History . "OpenZFS is the truly open source successor to the ZFS project [...] Effects of the fork (2010 to date)".
  5. Web site: LinuxCon: OpenZFS moves Open Source Storage Forward . 2013-09-18 . 2013-10-09 . Sean Michael Kerner . infostor.com . https://web.archive.org/web/20140314145457/http://www.infostor.com/storage-management/linuxcon-openzfs-moves-open-source-storage-forward.html . March 14, 2014 . live.
  6. Web site: The OpenZFS project launches . 2013-09-17 . 2013-10-01 . . https://web.archive.org/web/20161011141200/https://lwn.net/Articles/567090/ . October 11, 2016 . live.
  7. Web site: OpenZFS – Communities co-operating on ZFS code and features . 2013-09-23 . 2014-03-14 . freebsdnews.net . https://web.archive.org/web/20131014000145/http://www.freebsdnews.net/2013/09/23/openzfs-communities-co-operating-on-zfs-code-and-features/ . October 14, 2013 . live.
  8. Web site: The Starline ZFS FAQ . Starline . 20 July 2024.
  9. Web site: 19.4. zfs Administration. www.freebsd.org. February 22, 2017. https://web.archive.org/web/20170223045940/https://www.freebsd.org/doc/handbook/zfs-zfs.html. February 23, 2017. live.
  10. Book: Salus , Peter . A Quarter Century of Unix . Addison-Wesley . 1994 . 0-201-54777-5 . 199–200.
  11. Web site: What are SunOS and Solaris? . November 10, 2014 . May 20, 2013 . Knowledge Base . Indiana University Technology Services.
  12. Web site: Brown. David. A Conversation with Jeff Bonwick and Bill Moore. ACM Queue. Association for Computing Machinery. November 17, 2015. July 16, 2011. https://web.archive.org/web/20110716221142/http://queue.acm.org/detail.cfm?id=1317400. live.
  13. Web site: ZFS: the last word in file systems . April 30, 2006 . Sun Microsystems . September 14, 2004 . https://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ . April 28, 2006.
  14. Web site: ZFS 10 year anniversary. Matthew Ahrens. November 1, 2011. July 24, 2012. https://web.archive.org/web/20160628084029/http://blog.delphix.com/matt/2011/11/01/zfs-10-year-anniversary/. June 28, 2016. dead.
  15. Web site: ZFS: The Last Word in Filesystems . Jeff . Bonwick . blogs.oracle.com . October 31, 2005 . June 22, 2013 . https://web.archive.org/web/20130619165135/https://blogs.oracle.com/bonwick/en_US/entry/zfs_the_last_word_in . June 19, 2013 . dead.
  16. Web site: Sun Celebrates Successful One-Year Anniversary of OpenSolaris . Sun Microsystems . June 20, 2006 . April 30, 2018 . September 28, 2008 . https://web.archive.org/web/20080928001733/http://www.sun.com/smi/Press/sunflash/2006-06/sunflash.20060620.1.xml . live .
  17. Web site: Michael Singer . January 25, 2005 . Sun Cracks Open Solaris . InternetNews.com . April 12, 2010.
  18. Web site: ZFS FAQ at OpenSolaris.org . Sun Microsystems . May 18, 2011 . The largest SI prefix we liked was 'zetta' ('yotta' was out of the question) . dead . https://web.archive.org/web/20110515061128/http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq/ . May 15, 2011.
  19. Web site: You say zeta, I say zetta . Jeff Bonwick . Jeff Bonwick's Blog . May 3, 2006 . April 21, 2017 . So we finally decided to unpimp the name back to ZFS, which doesn't stand for anything. . dead . https://web.archive.org/web/20170223222515/https://blogs.oracle.com/bonwick/en_US/entry/you_say_zeta_i_say . February 23, 2017.
  20. Web site: Oracle and NetApp dismiss ZFS lawsuits . 2010-09-09 . 2013-12-24 . theregister.co.uk . September 9, 2017 . https://web.archive.org/web/20170909065736/http://www.theregister.co.uk/2010/09/09/oracle_netapp_zfs_dismiss/ . live .
  21. The Extended file system (Ext) has metadata structure copied from UFS.Web site: Rémy Card (Interview, April 1998) . April 19, 1999 . April Association . 2012-02-08 . https://web.archive.org/web/20120204082557/http://www.april.org/groupes/entretiens/remy_card.html . February 4, 2012 . dead . (In French)
  22. Web site: IRON FILE SYSTEMS . Doctor of Philosophy in Computer Sciences . University of Wisconsin-Madison . 9 June 2012 . Vijayan Prabhakaran . 2006 . https://web.archive.org/web/20110429011617/http://pages.cs.wisc.edu/~vijayan/vijayan-thesis.pdf . April 29, 2011 . live .
  23. Web site: Parity Lost and Parity Regained. November 29, 2010. https://web.archive.org/web/20100615101314/http://www.cs.wisc.edu/adsl/Publications/parity-fast08.html. June 15, 2010. live.
  24. Web site: An Analysis of Data Corruption in the Storage Stack . November 29, 2010 . https://web.archive.org/web/20100615111630/http://www.cs.wisc.edu/adsl/Publications/corruption-fast08.pdf . June 15, 2010 . live .
  25. Web site: Impact of Disk Corruption on Open-Source DBMS. November 29, 2010. https://web.archive.org/web/20100615090935/http://www.cs.wisc.edu/adsl/Publications/corrupt-mysql-icde10.pdf. June 15, 2010. live.
  26. Web site: Reliability Analysis of ZFS. Asim. Kadav. Abhishek. Rajimwale. September 19, 2013. https://web.archive.org/web/20130921054610/http://pages.cs.wisc.edu/~kadav/zfs/zfsrel.pdf. September 21, 2013. live.
  27. 2010-12-06.
  28. Web site: Larabel. Michael. Benchmarking ZFS and UFS On FreeBSD vs. EXT4 & Btrfs On Linux. Phoronix Media 2012. 21 November 2012. https://web.archive.org/web/20161129093628/https://www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=2. November 29, 2016. live.
  29. Web site: Larabel. Michael. Can DragonFlyBSD's HAMMER Compete With Btrfs, ZFS?. Phoronix Media 2012. 21 November 2012. https://web.archive.org/web/20161129033518/https://www.phoronix.com/scan.php?page=article&item=dragonfly_hammer&num=3. November 29, 2016. live.
  30. Web site: ZFS End-to-End Data Integrity . 2005-12-08 . 2013-09-19 . Jeff . Bonwick . blogs.oracle.com . https://web.archive.org/web/20120403015447/https://blogs.oracle.com/bonwick/entry/zfs_end_to_end_data . April 3, 2012 . live .
  31. Web site: Demonstrating ZFS Self-Healing . November 16, 2009 . 2015-02-01 . Tim . Cook . blogs.oracle.com . https://web.archive.org/web/20110812031213/http://blogs.oracle.com/timc/entry/demonstrating_zfs_self_healing . August 12, 2011 . live .
  32. Web site: ZFS, copies, and data protection . 2007-05-04 . 2015-02-02 . Richard . Ranch . blogs.oracle.com . https://web.archive.org/web/20160818143115/https://blogs.oracle.com/relling/entry/zfs_copies_and_data_protection . August 18, 2016 . dead .
  33. Web site: zpoolconcepts.7 — OpenZFS documentation . 2023-04-05 . openzfs.github.io.
  34. Web site: Dec 2015. ZFS Without Tears: Using ZFS without ECC memory. live. https://web.archive.org/web/20210113202506/https://www.csparks.com/ZFS%20Without%20Tears.md#toc_using-zfs-without-ecc-memory. January 13, 2021. 2020-06-16. www.csparks.com.
  35. Web site: Difference between Desktop edition and RAID (Enterprise) edition drives . wdc.custhelp.com . September 8, 2011 . https://web.archive.org/web/20150105040018/http://wdc.custhelp.com/app/answers/detail/a_id/1397/~/difference-between-desktop-edition-and-raid-(enterprise)-edition-drives . January 5, 2015 . live .
  36. Web site: RAID-Z . Jeff Bonwick's Blog . Oracle Blogs . 2005-11-17 . 2015-02-01 . Jeff . Bonwick . https://web.archive.org/web/20141216015058/https://blogs.oracle.com/bonwick/en_US/entry/raid_z . December 16, 2014 . dead .
  37. Web site: ZFS Raidz Performance, Capacity and integrity. calomel.org. 23 June 2017. https://web.archive.org/web/20171127225445/https://calomel.org/zfs_raid_speed_capacity.html. November 27, 2017. dead.
  38. Web site: Why RAID 6 stops working in 2019 . February 22, 2010 . . October 26, 2014 . https://web.archive.org/web/20141031164950/http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805 . October 31, 2014 . dead .
  39. "No fsck utility equivalent exists for ZFS. This utility has traditionally served two purposes, those of file system repair and file system validation." Web site: Checking ZFS File System Integrity . Oracle . 25 November 2012 . https://web.archive.org/web/20130131040337/http://docs.oracle.com/cd/E23823_01/html/819-5461/gbbwa.html . January 31, 2013 . live .
  40. Web site: ZFS Scrubs . freenas.org . 25 November 2012 . dead . https://web.archive.org/web/20121127160745/http://doc.freenas.org/index.php/ZFS_Scrubs . November 27, 2012 .
  41. "You should also run a scrub prior to replacing devices or temporarily reducing a pool's redundancy to ensure that all devices are currently operational." Web site: ZFS Best Practices Guide . solarisinternals.com . 25 November 2012 . dead . https://web.archive.org/web/20150905142644/http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide . September 5, 2015.
  42. Web site: 128-bit storage: are you high? . Jeff Bonwick . oracle.com . May 29, 2015 . https://web.archive.org/web/20150529160107/https://blogs.oracle.com/bonwick/entry/128_bit_storage_are_you . May 29, 2015 . live .
  43. Web site: ZFS: Boils the Ocean, Consumes the Moon (Dave Brillhart's Blog). December 19, 2015. https://web.archive.org/web/20151208192725/https://blogs.oracle.com/dcb/entry/zfs_boils_the_ocean_consumes. December 8, 2015. dead.
  44. Web site: Solaris ZFS Administration Guide . Oracle Corporation . February 11, 2011 . January 13, 2021 . https://web.archive.org/web/20210113202613/https://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html . live .
  45. Web site: Encrypting ZFS File Systems . May 2, 2011 . https://web.archive.org/web/20110623190612/http://download.oracle.com/docs/cd/E19963-01/html/821-1448/gkkih.html . June 23, 2011 . live .
  46. Web site: Having my secured cake and Cloning it too (aka Encryption + Dedup with ZFS) . October 9, 2012 . https://web.archive.org/web/20130529061709/https://blogs.oracle.com/darren/entry/compress_encrypt_checksum_deduplicate_with . May 29, 2013 . live .
  47. Web site: ZFS – Debian Wiki. wiki.debian.org. 2019-12-10. https://web.archive.org/web/20190908104724/https://wiki.debian.org/ZFS#Encryption. September 8, 2019. live.
  48. Web site: Proposal: Consider adding warnings against using zfs native encryption along with send/recv in production . Github . Github . 15 August 2024.
  49. Web site: PSA: ZFS has a data corruption bug when using native encryption and send/recv . Reddit . Reddit . 15 August 2024.
  50. Web site: ZFS Fragmentation: Long-term Solutions . Github . Github . 15 August 2024.
  51. Web site: What are the best practices to keep ZFS from being too fragmented . Lawrence Systems . Lawrence Systems . 15 August 2024.
  52. Web site: Solaris ZFS Enables Hybrid Storage Pools—Shatters Economic and Performance Barriers . Sun.com . September 7, 2010 . November 4, 2011 . https://web.archive.org/web/20111017204544/http://download.intel.com/design/flash/nand/SolarisZFS_SolutionBrief.pdf . October 17, 2011 . live .
  53. Web site: ZFS L2ARC . Brendan . Gregg . Brendan's blog . Dtrace.org . 2012-10-05 . https://web.archive.org/web/20111106031228/http://dtrace.org/blogs/brendan/2008/07/22/zfs-l2arc/ . November 6, 2011 . live .
  54. Web site: Hybrid Storage Pool: Top Speeds . 2009-10-08 . Brendan . Gregg . Brendan's blog . Dtrace.org . August 15, 2017 . https://web.archive.org/web/20160405120351/http://dtrace.org/blogs/brendan/2009/10/08/hybrid-storage-pool-top-speeds/ . April 5, 2016 . live .
  55. Web site: Solaris ZFS Performance Tuning: Synchronous Writes and the ZIL . Constantin.glez.de . 2010-07-20 . 2012-10-05 . https://web.archive.org/web/20120623100347/http://constantin.glez.de/blog/2010/07/solaris-zfs-synchronous-writes-and-zil-explained . June 23, 2012 . live .
  56. Web site: ZFS On-Disk Specification . Sun Microsystems, Inc. . 2006 . dead . https://web.archive.org/web/20081230170058/http://www.opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf . December 30, 2008 . See section 2.4.
  57. Web site: RAIDZ — OpenZFS documentation . 2023-02-09 . openzfs.github.io.
  58. Web site: ZFS Nuts and Bolts . 2009-05-21 . 2014-06-08 . Eric Sproul . slideshare.net . 30–31 . https://web.archive.org/web/20140622215818/http://www.slideshare.net/esproul/zfs-nuts-and-bolts . June 22, 2014 . live .
  59. Web site: ZFS Deduplication . blogs.oracle.com . November 25, 2019 . https://web.archive.org/web/20191224020451/https://blogs.oracle.com/bonwick/zfs-deduplication-v2 . December 24, 2019 . live .
  60. Web site: Building ZFS Based Network Attached Storage Using FreeNAS 8. TrainSignal Training. TrainSignal, Inc. 9 June 2012. Gary Sims. Blog. 4 January 2012. https://web.archive.org/web/20120507220120/http://www.trainsignal.com/blog/zfs-nas-setup-guide. May 7, 2012. dead.
  61. Web site: [zfs-discuss] Summary: Deduplication Memory Requirements . https://web.archive.org/web/20120425142508/http://mail.opensolaris.org/pipermail/zfs-discuss/2011-May/048159.html . Ray Van Dolson . May 2011 . 2012-04-25 . zfs-discuss mailing list .
  62. Web site: ZFSTuningGuide . January 3, 2012 . https://web.archive.org/web/20120116113648/http://wiki.freebsd.org/ZFSTuningGuide#Deduplication . January 16, 2012 . live .
  63. News: GreenBytes brandishes full-fat clone VDI pumper . Chris Mellor . October 12, 2012 . The Register . August 29, 2013 . https://web.archive.org/web/20130324085407/http://www.theregister.co.uk/2012/10/12/greenbytes_chairman/ . March 24, 2013 . live .
  64. News: Newcomer gets out its box, plans to sell it cheaply to all comers . Chris Mellor . June 1, 2012 . The Register . August 29, 2013 . https://web.archive.org/web/20130812033031/http://www.theregister.co.uk/2012/06/01/tegile_zebi/ . August 12, 2013 . live .
  65. Web site: Dedupe, dedupe... dedupe, dedupe, dedupe: Oracle polishes ZFS diamond . 2014-12-11 . 2014-12-17 . Chris Mellor . . https://web.archive.org/web/20170707155821/https://www.theregister.co.uk/2014/12/11/oracle_improving_zfs_dedupe/ . July 7, 2017 . live .
  66. Web site: Checksums and Their Use in ZFS . github.com . Sep 2, 2018 . July 11, 2019 . https://web.archive.org/web/20190719225739/https://github.com/zfsonlinux/zfs/wiki/Checksums . July 19, 2019 . live .
  67. Web site: Solaris ZFS Administration Guide . Chapter 6 Managing ZFS File Systems . March 17, 2009 . dead . https://web.archive.org/web/20110205111337/http://download.oracle.com/docs/cd/E19963-01/821-1448/gavwq/index.html . February 5, 2011 .
  68. Web site: Smokin' Mirrors . blogs.oracle.com . May 2, 2006 . February 13, 2012 . https://web.archive.org/web/20111216163425/http://blogs.oracle.com/bonwick/entry/smokin_mirrors . December 16, 2011 . live .
  69. Web site: ZFS Block Allocation . Jeff Bonwick's Weblog . November 4, 2006 . February 23, 2007 . https://web.archive.org/web/20121102073644/https://blogs.oracle.com/bonwick/entry/zfs_block_allocation . November 2, 2012 . live .
  70. Web site: Ditto Blocks — The Amazing Tape Repellent . Flippin' off bits Weblog . May 12, 2006 . March 1, 2007 . https://web.archive.org/web/20130526084314/https://blogs.oracle.com/bill/entry/ditto_blocks_the_amazing_tape . May 26, 2013 . dead .
  71. Web site: Adding new disks and ditto block behaviour . October 19, 2009 . dead . https://web.archive.org/web/20110823190119/http://opensolaris.org/jive/thread.jspa?messageID=417776 . August 23, 2011 .
  72. Web site: OpenSolaris.org . Sun Microsystems . May 22, 2009 . dead . https://web.archive.org/web/20090508081240/http://www.opensolaris.org/os/community/zfs/version/15/ . May 8, 2009 .
  73. Web site: What's new in Solaris 11 Express 2010.11 . Oracle . November 17, 2010 . https://web.archive.org/web/20101116073641/http://www.oracle.com/technetwork/server-storage/solaris11/documentation/solaris-express-whatsnew-201011-175308.pdf . November 16, 2010 . live .
  74. Web site: Release zfs-0.8.0 . GitHub . OpenZFS . 2021-07-03 . 2019-05-23.
  75. Web site: 10. Sharing — FreeNAS User Guide 9.3 Table of Contents. doc.freenas.org. February 23, 2017. https://web.archive.org/web/20170107211538/http://doc.freenas.org/9.3/freenas_sharing.html. January 7, 2017. live.
  76. Web site: Bug ID 4852783: reduce pool capacity . OpenSolaris Project . March 28, 2009 . dead . https://web.archive.org/web/20090629081219/http://bugs.opensolaris.org/view_bug.do?bug_id=4852783 . June 29, 2009 .
  77. Permanently removing vdevs from a pool . zfs-discuss . April 19, 2007 . Mario . Goebbels . archive link
  78. Chris Siebenmann Information on future vdev removal, Univ Toronto, blog, quote: informal Twitter announcement by Alex Reece
  79. Web site: Data Management Features – What's New in Oracle® Solaris 11.4 . October 9, 2019 . https://web.archive.org/web/20190924101556/https://docs.oracle.com/cd/E37838_01/html/E60974/dmgmt.html#scrolltoc . September 24, 2019 . live .
  80. Web site: Expand-O-Matic RAID Z . Adam Leventhal . April 7, 2008 . April 16, 2012 . https://web.archive.org/web/20111228072550/http://blogs.oracle.com/ahl/entry/expand_o_matic_raid_z . December 28, 2011 . live .
  81. Web site: ZFS Toy . SourceForge.net . 2022-04-12.
  82. Web site: zpoolconcepts(7) . OpenZFS documentation . OpenZFS . 2021-04-12 . 2021-06-02 . Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed..
  83. Web site: zpool(1M) . Download.oracle.com . June 11, 2010 . November 4, 2011 . January 13, 2021 . https://web.archive.org/web/20210113202512/https://docs.oracle.com/cd/E19253-01/816-5166/zpool-1m/?l=en&n=1&a=view . live .
  84. Web site: Turbocharging ZFS Data Recovery . November 29, 2018 . https://web.archive.org/web/20181129054344/https://www.delphix.com/blog/openzfs-pool-import-recovery . November 29, 2018 . live .
  85. Web site: ZFS and OpenZFS . iXSystems . 2020-05-18 .
  86. Web site: Sun rolls out its own storage appliances . 2008-11-11 . 2013-11-13 . techworld.com.au . https://web.archive.org/web/20131113194325/http://www.techworld.com.au/article/266682/sun_rolls_its_own_storage_appliances/ . November 13, 2013 . live.
  87. Web site: Oracle muscles way into seat atop the benchmark with hefty ZFS filer . 2013-10-02 . 2014-07-07 . Chris Mellor . theregister.co.uk . https://web.archive.org/web/20170707160152/https://www.theregister.co.uk/2013/10/02/oracle_zs3/ . July 7, 2017 . live.
  88. Web site: Unified ZFS Storage Appliance built in Silicon Valley by iXsystem . 2014-07-07 . ixsystems.com . https://web.archive.org/web/20140703151518/http://www.ixsystems.com/storage/truenas/ . July 3, 2014 . live.
  89. Web site: TrueNAS 12 & TrueNAS SCALE are officially here! . 2021-01-02 . ixsystems.com .
  90. Web site: ReadyDATA 516 – Unified Network Storage . 2014-07-07 . netgear.com . https://web.archive.org/web/20140715002129/http://www.netgear.com/images/pdf/ReadyDATA_516_DS.pdf . July 15, 2014 . live.
  91. Web site: rsync.net: ZFS Replication to the cloud is finally here—and it's fast. 2017-08-21. 2015-12-17. Jim Salter. arstechnica.com. https://web.archive.org/web/20170822052447/https://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/. August 22, 2017. live.
  92. Web site: Cloud Storage with ZFS send and receive over SSH. 2017-08-21. rsync.net, Inc.. rsync.net. https://web.archive.org/web/20170721090348/http://www.rsync.net/products/zfsintro.html. July 21, 2017. live.
  93. Web site: August 13, 2010 . Steven Stallion / Oracle . Update on SXCE . Iconoclastic Tendencies . April 30, 2018 . November 9, 2020 . https://web.archive.org/web/20201109033546/http://sstallion.blogspot.com/2010/08/opensolaris-is-dead.html . dead .
  94. OpenSolaris cancelled, to be replaced with Solaris 11 Express. osol-discuss. https://web.archive.org/web/20100816225601/http://mail.opensolaris.org/pipermail/opensolaris-discuss/2010-August/059310.html . Alasdair Lumsden. August 16, 2010. November 24, 2014.
  95. https://arstechnica.com/information-technology/2010/08/solaris-still-sorta-open-but-opensolaris-distro-is-dead/ Solaris still sorta open, but OpenSolaris distro is dead
  96. Web site: Garrett D'Amore . Illumos - Hope and Light Springs Anew - Presented by Garrett D'Amore . August 3, 2010 . illumos.org . August 3, 2010.
  97. Web site: Whither OpenSolaris? Illumos Takes Up the Mantle. https://web.archive.org/web/20150926053916/http://www.linuxinsider.com/story/76669.html . September 26, 2015.
  98. Web site: The Hand May Be Forced . Garrett D'Amore . August 13, 2010 . November 14, 2013.
  99. "While under Sun Microsystems' control, there were bi-weekly snapshots of Solaris Nevada (the codename for the next-generation Solaris OS to eventually succeed Solaris 10) and this new code was then pulled into new OpenSolaris preview snapshots available at Genunix.org. The stable releases of OpenSolaris are based off of these Nevada builds."Web site: Larabel . Michael . It Looks Like Oracle Will Stand Behind OpenSolaris . Phoronix Media . November 21, 2012 . https://web.archive.org/web/20161129011453/https://www.phoronix.com/scan.php?page=news_item&px=ODQyOQ . November 29, 2016 . live.
  100. Web site: OpenIndiana — there's still hope . Ljubuncic . Igor . May 23, 2011 . . November 21, 2012 . https://web.archive.org/web/20121027220918/http://distrowatch.com/weekly.php?issue=20110523#feature . October 27, 2012 . live.
  101. Web site: Welcome to Project OpenIndiana! . September 10, 2010 . Project OpenIndiana . September 14, 2010 . https://web.archive.org/web/20121127012553/http://openindiana.org/ . November 27, 2012 . live.
  102. Web site: ZFS Pool Versions . Jan 1, 2023 . Oracle Corporation . 2022 . https://web.archive.org/web/20221221174928/https://docs.oracle.com/en/operating-systems/solaris/oracle-solaris/11.4/manage-zfs/zfs-pool-versions.html . Dec 21, 2022 . live.