ZFS
Revision as of 22:52, 22 April 2019 by imported>Jeremy-busk
ZFS on Ubuntu 18.04
sudo apt-get install zfsutils-linux
vim /etc/default/zfs
ZFS_INITRD_POST_MODPROBE_SLEEP='15' # adjust between 5 and 15 seconds as needed
sudo update-initramfs -u
sudo update-grub
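The same edit can be made non-interactively with sed instead of vim. A sketch on a temp copy (whether the variable ships commented out is an assumption; point the path at /etc/default/zfs on a real system):

```shell
# Sketch: set the initrd sleep non-interactively. Works on a temp copy here
# so it is safe to run; on a real system, CONF would be /etc/default/zfs.
CONF="$(mktemp)"
echo "#ZFS_INITRD_POST_MODPROBE_SLEEP='0'" > "$CONF"
# Uncomment the variable if needed and force the value to 15.
sed -i "s/^#\?ZFS_INITRD_POST_MODPROBE_SLEEP=.*/ZFS_INITRD_POST_MODPROBE_SLEEP='15'/" "$CONF"
cat "$CONF"
rm -f "$CONF"
```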
Set up a zpool with log and cache vdevs
sudo zpool create -f tank mirror sdd sde mirror sdf sdg mirror sdh sdi mirror sdj sdk
sudo zpool add tank log mirror nvme0n1 nvme1n1
sudo zpool add tank cache nvme2n1 nvme3n1
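The create command above pairs eight disks into four two-way mirrors. That pairing can be sketched in plain sh; the disk names are this example's, not real devices, and the function only builds the argument string (it runs nothing):

```shell
# Sketch: pair a flat disk list into two-way mirror vdev arguments for
# "zpool create". Purely string-building; no pool is touched.
build_mirrors() {
    out=""
    while [ "$#" -ge 2 ]; do
        out="$out mirror $1 $2"
        shift 2
    done
    echo "$out" | sed 's/^ //'
}

build_mirrors sdd sde sdf sdg sdh sdi sdj sdk
# -> mirror sdd sde mirror sdf sdg mirror sdh sdi mirror sdj sdk
```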
System prep: put the default Docker, LXD, and libvirt directories on ZFS datasets
sudo systemctl stop lxd lxd.socket
sudo rm -Rf /var/lib/lxd
sudo zfs create tank/lxd
sudo zfs set mountpoint=/var/lib/lxd tank/lxd
sudo zfs create tank/libvirt
sudo zfs set mountpoint=/var/lib/libvirt tank/libvirt
sudo zfs create tank/docker
sudo zfs set mountpoint=/var/lib/docker tank/docker
sudo zfs mount -a
sudo apt-get install docker-ce # after setting up the repository from the Docker site
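The dataset/mountpoint pattern is identical for each service, so it can be expressed as a loop. A dry-run sketch that only prints the commands (swap echo for sudo zfs to apply; dataset names match the commands above):

```shell
# Dry run: print one "zfs create" + "zfs set mountpoint" pair per service.
# echo keeps this side-effect free; nothing is created.
for svc in lxd libvirt docker; do
    echo "zfs create tank/$svc"
    echo "zfs set mountpoint=/var/lib/$svc tank/$svc"
done
```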
Install QEMU/KVM and libvirt
sudo apt-get install libguestfs-tools qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager lintian curl wget git
Other
sudo zpool import tank
sudo zfs mount -a
sudo apt-get -y install docker-ce
ZFS Troubleshooting
nvme smart-log /dev/nvme0n1
nvme smart-log /dev/nvme3n1
Both drives reported no errors.
https://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch09s06.html
zpool clear ... didn't do anything
zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
    invalid.  Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: none requested
config:
    NAME                     STATE     READ WRITE CKSUM
    tank                     DEGRADED     0     0     0
      mirror-0               ONLINE       0     0     0
        sdc                  ONLINE       0     0     0
        sdd                  ONLINE       0     0     0
      mirror-1               ONLINE       0     0     0
        sde                  ONLINE       0     0     0
        sdf                  ONLINE       0     0     0
      mirror-2               ONLINE       0     0     0
        sdg                  ONLINE       0     0     0
        sdh                  ONLINE       0     0     0
      mirror-3               ONLINE       0     0     0
        sdi                  ONLINE       0     0     0
        sdj                  ONLINE       0     0     0
    logs
      mirror-4               DEGRADED     0     0     0
        4221213393078321817  FAULTED      0     0     0  was /dev/nvme3n1
        nvme1n1              ONLINE       0     0     0
    cache
      nvme2n1                ONLINE       0     0     0
      nvme0n1                FAULTED      0     0     0  corrupted data
errors: No known data errors
Use with caution. In this example the failed log device is part of a mirror, so you can remove a disk from the cache and reuse it as the replacement:
zpool remove tank nvme0n1
Replace the drive using a single device name (not two):
zpool replace tank nvme3n1
ZFS automatically used the freed nvme0n1 from the pool; if no free device is available, it may just replace the device with itself.
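A dry-run sketch of that recovery sequence, printing the commands instead of executing them (device names are this example's and will differ on other systems):

```shell
# Dry run of the recovery sequence: echo the commands instead of running them.
# Replace "run" with "sudo" (and verify device names) on a real system.
run() { echo "$@"; }
run zpool remove tank nvme0n1    # free the cache disk for reuse
run zpool replace tank nvme3n1   # single device name; ZFS picks the replacement
```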
Failed Drives
ZFS Intent Log & ARC/L2ARC
- https://pthree.org/2013/04/19/zfs-administration-appendix-a-visualizing-the-zfs-intent-log/
- https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/
- https://www.ixsystems.com/blog/zfs-zil-and-slog-demystified/
- https://www.45drives.com/wiki/index.php?title=FreeNAS_-What_is_ZIL%26_L2ARC
- https://en.wikipedia.org/wiki/Adaptive_replacement_cache