ZFS was an attempt to rearchitect the filesystem and the volume manager into a single, easily administered and flexible file system. As a result, it introduces new concepts and terminology.
|boot environment||later abbreviated as BE, boot environments are complete operating system images on the hard disk, capable of booting the system. When the operating system is installed with ZFS as the root filesystem, multiple boot environments can be maintained with the use of ZFS clones. On systems that have this feature enabled (illumos-based systems and FreeBSD installed specifically with root on ZFS) they are administered with the beadm command|
|checksum||a hash computed over a data block or metadata, ensuring the integrity of the data. The checksum algorithm can be chosen administratively|
|clone||a writable filesystem created from a read-only snapshot. It is read-write and, for all intents and purposes, can be treated as a normal ZFS filesystem; a clone can later be promoted to become independent of its origin snapshot|
|copy on write||a method of writing blocks of data to physical media in which a changed block is written to an entirely new location on the media instead of overwriting the existing data block|
|dataset||one of the following ZFS components: zpool, filesystem, snapshot, clone or volume. It is uniquely identified by zpool/path[@snapshot], where zpool is the ZFS disk pool the filesystem lives in, e.g. RPOOL, path is the complete path to the filesystem, e.g. export/home, and the optional @snapshot element denotes that the dataset is a snapshot of the filesystem. Example full dataset identifier: RPOOL/export/home@backup|
|filesystem||a dataset that is a filesystem, living within a zpool and mounted into the standard filesystem hierarchy. It can be treated like any other filesystem within the operating system|
|l2arc||Level 2 ARC (Adaptive Replacement Cache), a secondary cache. The hot cache is kept in memory; SSD devices can be placed between RAM and spinning disks, speeding up fetches of warm cache data|
|resilver||the rough equivalent of mirror resynchronization in traditional RAID solutions. After a broken disk is replaced, data is copied over to the new disk in the pool, in a process called resilvering. It applies to all redundancy levels|
|scrub||a process that touches each data block, generates a checksum and compares it with the stored checksum. This ensures that any data corruption is discovered. If the zpool is configured with redundancy, corrupted data is repaired from a source holding the correct version, on the fly and without manual intervention|
|snapshot||a read-only copy of a filesystem, initially an exact copy of the filesystem being snapshotted. Due to the copy-on-write nature of ZFS, creating a snapshot does not require any change to the original filesystem (no remounting read-only, etc.). Note that filesystem integrity does not mean data integrity: if an application is in the process of modifying its data on disk, the snapshot will capture the files mid-change|
|vdev||a virtual device within a zpool, which can consist of files, physical disks or groups of disks. A vdev can also be created on SCSI and iSCSI LUNs, which is sometimes done in storage solutions|
|volume||a dataset that imitates a block device, e.g. for use as a swap device. Another use is to create an encrypted filesystem with the help of lofi|
|ZIL||the ZFS Intent Log, an area of memory that holds data before it is flushed to the pool. The ZIL can be moved onto dedicated devices, preferably small battery-backed RAM disks, ensuring that data in this write cache is not lost after a power failure|
|zpool||a pool of disks grouped together to create one contiguous space for use by ZFS filesystems. A zpool can have no redundancy, or one of the following levels of redundancy: RAIDZ, RAIDZ2, mirror, triple mirror, or mirrors with even more levels of redundancy. All of them are briefly touched on in the preface and will be explained in later chapters|
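Several of the terms above map directly onto administration commands. The following is an illustrative sketch, not runnable as-is: it assumes a pool named rpool with a filesystem rpool/export/home already exists, and the snapshot and clone names are hypothetical.

```shell
# Create a read-only snapshot; its dataset name follows pool/path@snapshot
zfs snapshot rpool/export/home@before-upgrade

# Create a writable clone (a new filesystem) from that snapshot
zfs clone rpool/export/home@before-upgrade rpool/home-clone

# List all datasets, including snapshots and volumes
zfs list -t all

# Start a scrub of the whole pool, then check its progress and any errors found
zpool scrub rpool
zpool status rpool
```

These commands must be run with sufficient privileges; `zpool status` also reports resilvering progress after a disk replacement.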