Microceph

Bootstrapping Microceph on LXD

https://github.com/canonical/microceph

https://microk8s.io/docs/how-to-ceph

https://canonical-microceph.readthedocs-hosted.com/en/latest/tutorial/multi-node/

Bootstrap the first node

# Bootstrap the cluster on the first node, then confirm it is healthy
sudo microceph cluster bootstrap
sudo microceph.ceph status
# List disks available to become OSDs
sudo microceph disk list
# Wipe and add the attached disk as an OSD
sudo microceph disk add --wipe /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_osd1
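
When scripting the bootstrap, it can help to wait for the monitors to settle before adding disks. A minimal sketch (the loop and timeout are illustrative, not part of MicroCeph):

# Wait up to ~2 minutes for Ceph to report HEALTH_OK
for i in $(seq 1 24); do
  sudo microceph.ceph health | grep -q HEALTH_OK && break
  sleep 5
done
sudo microceph.ceph status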

# On an existing member, generate a join token for the new node
microceph cluster add microceph2
# Open a shell on the new node and join with the token
lxc shell microceph2
microceph cluster join <output from previous command>
sudo microceph disk add --wipe /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_osd1
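
The same add/join handshake can also be driven entirely from the LXD host, since microceph cluster add prints the token to stdout. A sketch assuming microceph1 is an existing member and microceph2 is the new node:

# Generate the token on an existing member and feed it to the new node
TOKEN=$(lxc exec microceph1 -- microceph cluster add microceph2)
lxc exec microceph2 -- microceph cluster join "$TOKEN"
lxc exec microceph2 -- microceph disk add --wipe /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_osd1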

Add a new node (microceph4) manually

Get a join token from an existing cluster member

microceph cluster add microceph4
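
The command prints a one-time token; capture it so it can be pasted into microceph cluster join on the new node (the variable name is illustrative):

JOIN_TOKEN=$(sudo microceph cluster add microceph4)
echo "$JOIN_TOKEN"  # paste into 'microceph cluster join' on microceph4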

ZFS

sudo zfs create -V 1000G tank/microceph5-osd1
lxc config device add microceph5 osd1 disk source=/dev/zvol/tank/microceph5-osd1
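
zfs create -V reserves the full 1000G up front; if thin provisioning is preferred (matching the sparse-image approach below), ZFS supports sparse volumes with -s:

# Thin-provisioned zvol: space is allocated only as the OSD writes
sudo zfs create -s -V 1000G tank/microceph5-osd1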

Add a new sparse image as a block disk for the microceph4 VM

sudo mkdir -p /tank/microceph
# Sparse 1000G backing file; blocks are allocated only as written
sudo truncate -s 1000G /tank/microceph/microceph4.osd1.img
lxc init ubuntu:22.04 microceph4 --vm -c limits.cpu=16 -c limits.memory=32GB --target lxd1
lxc config device override microceph4 root size=64GB
# Attach the image to the VM as a block device (appears as a QEMU disk)
lxc config device add microceph4 osd1 disk source=/tank/microceph/microceph4.osd1.img
lxc start microceph4
lxc shell microceph4
# Inside the VM:
sudo snap install microceph --channel=quincy/stable
sudo microceph cluster join <your token eyJuYW1lIjoib...==>
microceph disk list
sudo microceph disk add --wipe "/dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_osd1"
microceph disk list
sudo microceph.ceph status
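
To confirm the new OSD came up and landed in the CRUSH tree, the usual Ceph commands work through the wrapper:

sudo microceph.ceph osd tree
sudo microceph.ceph osd status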

Status Example

root@microceph4:~# sudo microceph.ceph status
  cluster:
    id:     ee26b25c-f1d9-45e5-a653-be0f14d9cb33
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum microceph1,microceph2,microceph3 (age 6h)
    mgr: microceph1(active, since 5d), standbys: microceph3, microceph2
    osd: 4 osds: 4 up (since 94s), 4 in (since 97s)

  data:
    pools:   2 pools, 33 pgs
    objects: 7 objects, 577 KiB
    usage:   1.7 GiB used, 3.9 TiB / 3.9 TiB avail
    pgs:     33 active+clean
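
The 3.9 TiB of raw capacity lines up with the four 1000G OSD disks (4 × 1000 GiB ≈ 3.9 TiB), while only 1.7 GiB is actually allocated, since the backing disks are sparse.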

If using ZFS, let's look at the resources backing the VM

$ sudo zfs list | grep microceph4
default/virtual-machines/microceph4                                                    6.95M  93.1M     6.96M  legacy
default/virtual-machines/microceph4.block
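
For the sparse-file variant, apparent size and actual allocation differ; du shows both (using the image path created above):

# Provisioned (apparent) size vs. blocks actually allocated
du -h --apparent-size /tank/microceph/microceph4.osd1.img
du -h /tank/microceph/microceph4.osd1.img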

Removing OSDs

https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/

Mark the OSD as out so Ceph can rebalance its data onto the remaining OSDs

sudo microceph.ceph osd status
sudo microceph.ceph osd out 2

Destroy the OSD if needed or if it is in a failed state

sudo microceph.ceph osd safe-to-destroy 2
sudo microceph.ceph osd destroy 2 --yes-i-really-mean-it
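
destroy removes the OSD's auth key but leaves its entry in the CRUSH map so the id can be reused; upstream Ceph's purge subcommand removes the OSD from the CRUSH map, the OSD map, and the auth database in one step (run here through the wrapper):

sudo microceph.ceph osd purge 2 --yes-i-really-mean-it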