LXD Install Script
Latest revision as of 20:53, 15 December 2021
Existing ZFS data store called tank/lxd
```
#!/usr/bin/env bash
set -eux

import_zfs(){
  sudo apt-get update && sudo apt-get install -y zfsutils-linux
  # ls /dev/disk/by-id/
  sudo zpool import <id>
  sudo zpool status
  sudo zfs list
}

install_lxd(){
  # sudo apt-get update && sudo apt-get install -y zfsutils-linux nftables wipe
  sudo apt-get update && sudo apt-get install -y zfsutils-linux nftables
  # sudo snap install lxd --channel=latest/stable
  cat <<EOF | sudo lxd init --preseed
# Daemon settings
config:
  core.https_address: 0.0.0.0:9999
  core.trust_password: sekret
  images.auto_update_interval: 6

# Storage pools
# Importing our pools

# Network devices
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: 172.16.0.1/23
    ipv6.address: none

# Profiles
profiles:
- name: default
  config: {}
  description: "Default profile"
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
- name: test-profile
  description: "Test profile"
  config:
    limits.memory: 2GB
  devices:
    test0:
      name: test0
      nictype: bridged
      parent: lxd-my-bridge
      type: nic
EOF
  sudo lxd recover  # <answer yes on all you want to recover>
}

list_nft(){
  sudo nft list tables
  sudo nft list ruleset
}

test_lxd(){
  sudo lxc launch ubuntu:20.04 u1
  sudo lxc list u1
  sleep 10
  sudo lxc exec u1 -- host google.com
}

# destroy
install_lxd
# test_lxd
```
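The preseed YAML above is delivered to `lxd init --preseed` over stdin via a heredoc, which is why the pipe is written `cat <<EOF | sudo lxd init --preseed` rather than `sudo cat`. A minimal sketch of the same pattern, piping into `wc -l` as a stand-in consumer so it can run without LXD installed:

```shell
# The text between <<EOF and EOF becomes the piped command's stdin.
# wc -l here stands in for `sudo lxd init --preseed`.
cat <<EOF | wc -l
config:
  core.https_address: 0.0.0.0:9999
networks:
- name: lxdbr0
EOF
# prints the heredoc's line count (4)
```

The quoting-free `<<EOF` form also expands shell variables inside the body, which is handy if you later parameterize values such as the bridge address.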
New ZFS data store
```
#!/usr/bin/env bash
set -eux

destroy(){
  # sudo snap remove lxd || true
  sudo zpool destroy tank || true
}

install_lxd(){
  # sudo apt-get update && sudo apt-get install -y zfsutils-linux nftables wipe
  sudo apt-get update && sudo apt-get install -y zfsutils-linux nftables
  sudo snap install lxd --channel=latest/stable
  # diskpath=/dev/sdc
  # disk_uuid=$(lsblk --ascii -no UUID /dev/sdc)
  # sleep 10
  # sudo zpool create -f tank /dev/disk/by-partuuid/$disk_uuid
  # sudo zpool create -f tank /dev/disk/by-uuid/$disk_uuid
  tank_disk=sdc
  sudo zpool create -f tank $tank_disk
  sudo zfs create -o mountpoint=none tank/lxd
  sudo zpool status
  sudo zfs list
  sleep 10
  # du -sh $diskpath
  cat <<EOF | sudo lxd init --preseed
# Daemon settings
config:
  core.https_address: 0.0.0.0:9999
  core.trust_password: sekret
  images.auto_update_interval: 6

# Storage pools
storage_pools:
- name: default
  driver: zfs
  config:
    source: tank/lxd

# Network devices
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none

# Profiles
profiles:
- name: default
  config: {}
  description: "Default profile"
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
- name: test-profile
  description: "Test profile"
  config:
    limits.memory: 2GB
  devices:
    test0:
      name: test0
      nictype: bridged
      parent: lxd-my-bridge
      type: nic
EOF
}

test_lxd(){
  sudo lxc launch ubuntu:20.04 u1
  sudo lxc list u1
  sleep 10
  sudo lxc exec u1 -- host google.com
}

destroy
install_lxd
test_lxd
```
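The `destroy` function appends `|| true` because the script runs under `set -e`: a failing command would otherwise abort the whole script, and `zpool destroy tank` fails legitimately when no pool exists yet. A minimal sketch of the behavior:

```shell
#!/usr/bin/env bash
# Under `set -e`, a non-zero exit aborts the script unless the failure
# is absorbed, e.g. with `|| true`. `false` stands in for a cleanup
# command (such as `zpool destroy`) that may fail on a fresh machine.
set -e
false || true          # failure ignored; execution continues
echo "cleanup tolerated the failure"
```

Note that writing `cmd | true` (a pipe) discards `cmd`'s output rather than its exit status; `|| true` is the idiomatic way to tolerate failure.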
```
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (cephfs, dir, lvm, zfs, ceph, btrfs): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): tank/lxd
Additional storage pool configuration property (KEY=VALUE, empty when done): zfs.pool_name=tank/lxd
```