ZFS file system
- Last updated: May 26, 2022
Intro
According to Wikipedia, ZFS stands for Z File System; it is an open-source file system and logical volume manager licensed under the Common Development and Distribution License.
It was designed at Sun Microsystems (acquired by Oracle in 2009) and developed by Jeff Bonwick.
Main features
- High storage capacity
- Volume management
- Data integrity
- Snapshots and clones (see the example after this list)
- Deduplication
- Compression
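A quick taste of the snapshot and clone features (the pool and dataset names below are only illustrative):
root@host:~# zfs snapshot raid0_01/dataset01@before_upgrade
root@host:~# zfs rollback raid0_01/dataset01@before_upgrade
root@host:~# zfs clone raid0_01/dataset01@before_upgrade raid0_01/dataset01_copy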
Installing
Follow the instructions for Debian on the zfsonlinux website: https://github.com
Commands
Create a ZFS pool
- Get disk IDs:
root@host:~# ls -lah /dev/disk/by-id/
- Create a mount point:
root@host:~# mkdir /zfs
- Create a pool (the example below builds a three-way mirror, i.e. RAID 1):
root@host:~# zpool create -f -m <mount_point> <pool_name> <type> <ids>
root@host:~# zpool create -f -o ashift=12 -m /zfs raid0_01 mirror scsi-SATA_WDC_WD20EARS-07_WD-WCAZA796741 scsi-SATA_WDC_WD20EARS-07_WD-WCAZB7569258 scsi-SATA_WDC_WD20EARS-07_WD-WCPZB7464217
create
: Create a pool.
-f
: Force pool creation (avoids the "EFI label error").
-o ashift=12
: Align the zpool on the disks' sector size (current disks use 4K or 8K sectors, not 512 bytes).
-m
: Mount point. / by default.
pool_name
: Pool name.
type
: Pool type. Examples: mirror (RAID 1), raidz, raidz2, raidz3, etc.
ids
: Names of the disks to include in the pool (see the /dev/disk/by-id directory).
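For example, a double-parity raidz2 pool could be created like this (pool name and disk IDs are placeholders):
root@host:~# zpool create -f -o ashift=12 -m /zfs raidz2_01 raidz2 <id1> <id2> <id3> <id4>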
- Add a hot spare disk:
root@host:~# zpool add tank spare ada12
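- Remove an unused hot spare (same illustrative names as above):
root@host:~# zpool remove tank ada12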
- List pools:
root@host:~# zpool list
- Print the pool's command history:
root@host:~# zpool history raid0_01
- Change the pool mount point:
root@host:~# zfs set mountpoint=/zfs raid0_01
- If the ZFS volume doesn't mount on a new system:
root@host:~# zpool export raid0_01
root@host:~# zpool import -a
- Or
root@host:~# zpool import -f raid0_01
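- If the pool was created with short device names (/dev/sdX), importing by ID may help (a common pattern, not specific to this setup):
root@host:~# zpool import -d /dev/disk/by-id -a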
- Disable atime:
root@host:~# zfs set atime=off <pool>
- Enable lz4 compression:
root@host:~# zfs set compression=lz4 <pool>
- Enable zstd compression:
root@host:~# zfs set compression=zstd <pool>
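- Check the space actually saved by compression afterwards:
root@host:~# zfs get compressratio <pool>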
- Destroy a storage pool:
⚠️ this command destroys all data contained in the pool ⚠️
root@host:~# zpool destroy <pool>
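zfs destroy, in contrast, works on individual datasets and snapshots (names illustrative):
root@host:~# zfs destroy <pool>/<dataset>
root@host:~# zfs destroy <pool>/<dataset>@<snapshot>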
Maintenance
File system check (fsck/chkdsk equivalent)
- Run a file system check (scrub):
root@host:~# zpool scrub raid0_01
- Print the scrub status:
root@host:~# zpool status
- Clear the pool's error counters:
root@host:~# zpool clear raid0_01
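Scrubs are usually run periodically; a line like this in root's crontab (crontab -e) runs one every Sunday at 3am (the binary path may differ per distro):
0 3 * * 0 /usr/sbin/zpool scrub raid0_01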
Print options
root@host:~# zfs get all <pool>
Replace a drive in a RAID:
root@host:~# zpool replace pool1 ata-WDC_WD80EFZX-18ZW3NB_Z3K44HMB ata-ST8000DM002-1ZW212_ZX41X5MS
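Resilvering progress can then be followed with:
root@host:~# zpool status -v pool1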
Monitoring
- Print all pool information:
root@host:~# zpool get all raid0_01
- Print pool and device state:
root@host:~# zpool status -v raid0_01
- Print I/O statistics:
root@host:~# zpool iostat <interval in seconds> <count>
root@host:~# zpool iostat 5 10
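- Add -v for a per-device breakdown (here refreshed every 5 seconds):
root@host:~# zpool iostat -v 5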
Encryption
Encryption with dm-crypt
Disk encryption
root@host:~# cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha1 --iter-time 1000 --use-urandom -v luksFormat /dev/sdb1
root@host:~# cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha1 --iter-time 1000 --use-urandom -v luksFormat /dev/sdc1
root@host:~# cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha1 --iter-time 1000 --use-urandom -v luksFormat /dev/sdd1
Open encrypted disks
root@host:~# cryptsetup luksOpen /dev/sdb1 zfs01
root@host:~# cryptsetup luksOpen /dev/sdc1 zfs02
root@host:~# cryptsetup luksOpen /dev/sdd1 zfs03
ZFS pool creation
root@host:~# zpool create -f -m /zfs raid0_01 mirror /dev/mapper/zfs01 /dev/mapper/zfs02 /dev/mapper/zfs03
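To get the LUKS mappings back at boot (so the pool can be imported), entries along these lines can go in /etc/crypttab; using UUID= instead of /dev/sdX names is more robust (values below are placeholders):
# <target> <source device> <key file> <options>
zfs01 /dev/sdb1 none luks
zfs02 /dev/sdc1 none luks
zfs03 /dev/sdd1 none luks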
Native encryption
ZFS pool creation
root@host:~# zpool create -f -m /zfs raid0_01 pci-0000:03:00.0-scsi-0:0:1:0-part1
Create a dataset with native encryption
root@host:~# zfs create -o encryption=on -o keyformat=passphrase raid0_01/dataset01
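After a reboot the encrypted dataset stays locked until its key is loaded; with the passphrase keyformat above it can be unlocked and mounted like this (zfs load-key -a loads every key at once):
root@host:~# zfs load-key raid0_01/dataset01
root@host:~# zfs mount raid0_01/dataset01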
source: linuxfr.org