Revision as of 17:10, 26 November 2023
# Bootstrapping Microceph on LXD
https://github.com/canonical/microceph
https://microk8s.io/docs/how-to-ceph
https://canonical-microceph.readthedocs-hosted.com/en/latest/tutorial/multi-node/
# Bootstrap
```
sudo microceph cluster bootstrap
sudo microceph.ceph status
sudo microceph disk list
sudo microceph disk add --wipe /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_osd1
microceph cluster add microceph2
lxc shell microceph2
microceph cluster join <output from previous command>
sudo microceph disk add --wipe /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_osd1
```
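The same add/join dance repeats for every extra node: generate a token on an existing member with `cluster add`, then run `cluster join` inside the new node's shell. A sketch of the pattern (illustrative only — it prints the commands rather than running them, since they need the actual LXD instances):

```shell
# Illustrative: one add/join round-trip per new node.
# Tokens are single-use, so each node needs its own `cluster add`.
for node in microceph2 microceph3; do
    echo "on microceph1: microceph cluster add $node"
    echo "on $node:      microceph cluster join <token from previous command>"
done
```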
# Add new node 4 manually
## Get token from existing cluster member
```
microceph cluster add microceph4
```
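The token this prints looks like `eyJuYW1lIjoib...` (see the join command further down). That prefix is base64 for a JSON object beginning with `{"name":"`, which suggests the token embeds the joining node's name and cluster details — an observation about the sample token here, not documented microceph behaviour. Quick check:

```shell
# Decode the visible prefix of the sample join token.
# 'eyJuYW1lIjoi' is a complete 4-character-aligned base64 group,
# so it decodes cleanly even though the full token is truncated.
echo -n 'eyJuYW1lIjoi' | base64 -d
# → {"name":"
```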
## Add a new sparse image as the block disk for the microceph4 VM
```
sudo mkdir -p /tank/microceph
sudo truncate -s 1000G /tank/microceph/microceph4.osd1.img
lxc init ubuntu:22.04 microceph4 --vm -c limits.cpu=16 -c limits.memory=32GB
lxc config device override microceph4 root size=64GB
lxc config device add microceph4 osd1 disk source=/tank/microceph/microceph4.osd1.img
lxc start microceph4
lxc shell microceph4
sudo snap install microceph --channel=quincy/stable
sudo microceph cluster join <your token eyJuYW1lIjoib...==>
microceph disk list
sudo microceph disk add --wipe "/dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_osd1"
microceph disk list
```
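The `truncate -s 1000G` image is sparse: no blocks are allocated up front, so the file only consumes real disk space as the OSD writes data. A small sketch (using a hypothetical 1 GiB file under /tmp rather than /tank/microceph) shows apparent size versus allocated blocks:

```shell
# Create a sparse test image (assumed demo path, not the guide's /tank/microceph).
img=/tmp/sparse-demo.img
truncate -s 1G "$img"

stat -c 'apparent size: %s bytes' "$img"    # 1073741824
stat -c 'allocated: %b blocks' "$img"       # near zero until data is written
du -h "$img"                                # near-zero actual disk usage

rm "$img"
```

This is why a 4 TB pool of images can sit on a much smaller backing filesystem, but it also means the filesystem can fill up later as the OSDs grow.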
# Status Example
```
root@microceph4:~# sudo microceph.ceph status
  cluster:
    id:     ee26b25c-f1d9-45e5-a653-be0f14d9cb33
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum microceph1,microceph2,microceph3 (age 6h)
    mgr: microceph1(active, since 5d), standbys: microceph3, microceph2
    osd: 4 osds: 4 up (since 94s), 4 in (since 97s)

  data:
    pools:   2 pools, 33 pgs
    objects: 7 objects, 577 KiB
    usage:   1.7 GiB used, 3.9 TiB / 3.9 TiB avail
    pgs:     33 active+clean

root@microceph4:~# sudo microceph.ceph status
  cluster:
    id:     ee26b25c-f1d9-45e5-a653-be0f14d9cb33
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum microceph1,microceph2,microceph3 (age 7h)
    mgr: microceph1(active, since 5d), standbys: microceph3, microceph2
    osd: 4 osds: 4 up (since 4m), 4 in (since 4m)

  data:
    pools:   2 pools, 33 pgs
    objects: 7 objects, 577 KiB
    usage:   1.7 GiB used, 3.9 TiB / 3.9 TiB avail
    pgs:     33 active+clean
```
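Once the fourth OSD shows `4 up` / `4 in` and all pgs are `active+clean`, the join succeeded. For scripting, a minimal health check could grep that status output — a hypothetical sketch, parsing a saved sample of the output above rather than calling a live cluster:

```shell
# In practice: status=$(sudo microceph.ceph status)
# Here we parse a saved sample instead of a live cluster.
status='  cluster:
    id:     ee26b25c-f1d9-45e5-a653-be0f14d9cb33
    health: HEALTH_OK'

# Pull the value after "health:" from the status text.
health=$(printf '%s\n' "$status" | awk '/health:/ {print $2}')

if [ "$health" = "HEALTH_OK" ]; then
    echo "cluster healthy"
else
    echo "cluster not healthy: $health" >&2
    exit 1
fi
# → cluster healthy
```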