LXD

Quick Commands

These commands assume your user belongs to the lxd group (otherwise use sudo), and that a service is listening on port 4000 inside the container.

lxc launch ubuntu:18.04 mycontainername
sudo ufw allow 80
# Forward host port 80 to port 4000 inside the container
lxc config device add mycontainername tcp-80-4000 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:4000
lxc config device show mycontainername
lxc config device remove mycontainername tcp-80-4000
sudo ss -lnt | grep 80          # confirm the proxy is listening on the host
sudo tcpdump -npi eth0 port 80  # watch traffic hitting the proxy
lxc snapshot mycontainername mysnapshotname
lxc restore mycontainername mysnapshotname
lxc info mycontainername
lxc delete mycontainername/mysnapshotname   # deletes just the snapshot
# Force-delete every container whose name matches con1-
for i in $(lxc list | grep con1- | awk -F"|" '{print $2}'); do lxc delete -f $i; done

Common commands

lxc config set n1 boot.autostart true
lxc launch --profile portal1 ubuntu:18.04 jtest4

lxd.migrate

To migrate your existing client configuration, move ~/.config/lxc to ~/snap/lxd/current/.config/lxc
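For example, a minimal sketch of that move (paths from the snap layout above):

mkdir -p ~/snap/lxd/current/.config
mv ~/.config/lxc ~/snap/lxd/current/.config/lxc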

https://linuxcontainers.org/lxd/getting-started-cli/

lxc export jtest3
lxc delete -f jtest3
lxc import backup.tar.gz jtest3 -s zfs
lxc start jtest3

lxd sql global "SELECT * FROM storage_pools_config;"

LXD - Linux Containers (LXC) extension

More Clustering Info

LXD Cluster

Commands for Wrapper

The API doesn't support a same-name move in 3.0.3 (it does in 3.3 and later), and CRIU for live migration is not stable either.

lxc stop move3a; lxc move move3a move3a-tmp; lxc move move3a-tmp h-lxd2:move3a; lxc start h-lxd2:move3a
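A sketch of such a wrapper (a hypothetical helper script, not part of LXD; names are placeholders):

#!/usr/bin/env sh
# Usage: ./lxc-move.sh <container> <remote>
name="$1"; remote="$2"
lxc stop "${name}"
lxc move "${name}" "${name}-tmp"        # rename locally to dodge the same-name limitation
lxc move "${name}-tmp" "${remote}:${name}"
lxc start "${remote}:${name}"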

Quick Notes

lxc launch ubuntu:18.04 h4 -p h4 -c boot.autostart=true

Installing on Ubuntu

apt-get update; apt-get install zfsutils-linux lxd
lxd init   # accepting the defaults (just hit Enter) is usually fine

Or, the snap way for the latest stable release:

sudo apt-get update
sudo apt-get -y dist-upgrade
# sudo apt-get remove -y lxd
sudo apt remove --purge -y lxd lxd-client
sudo apt install -y zfsutils-linux bridge-utils
# sudo apt install salt-minion
adduser justin --gecos "" --disabled-password
adduser justin lxd
adduser justin sudo
sudo snap install lxd
sudo lxd init
sudo snap start lxd
# /var/snap/lxd/common/lxd/storage-pools
#
# use 'zfs destroy -r default' to destroy all datasets in the pool
# use 'zpool destroy default' to destroy the pool itself


The lxd init config prints as below:
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config:
    size: 300GB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null
"

Networks and exposing ports via proxy devices

You should use one bridge per pod or group of containers; a group may be a single container.

This applies to an existing container; alternatively, use a profile when you launch (see the example after these commands).

lxc network create h1
lxc profile copy default h1
lxc profile edit h1   # change eth0's parent to the new bridge
lxc profile apply h1 h1
lxc restart h1
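Alternatively, attach the profile at launch time (container name c1 assumed):

lxc launch ubuntu:18.04 c1 --profile h1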

You would want to do this seamlessly beforehand with a script wrapper.

See also "lxc profile set" and "lxc profile device add".

Expose service ports to public ip

Now expose a management port (ssh) and a service port (https):

lxc config device add h1 h1-10001-22 proxy listen=tcp:0.0.0.0:10001 connect=tcp:127.0.0.1:22
lxc config device add h1 h1-10002-443 proxy listen=tcp:0.0.0.0:10002 connect=tcp:127.0.0.1:443

Show:

lxc config device show h1

Remove:

lxc config device remove h1 h1-10001-22
lxc config device remove h1 h1-10002-443

Notice (with ss -lnt) how the listener and its proxy binding rules transfer to the remote LXD host when you move or copy the container, since the proxy is an attached device.
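A quick way to see this, assuming a second host named h-lxd2 as in the wrapper example above:

ss -lnt | grep 10001                 # listener on the source host
lxc stop h1; lxc move h1 h-lxd2:h1   # move the container with its proxy device
ssh h-lxd2 ss -lnt | grep 10001      # listener now on the target host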

Remotes

lxc remote add h-lxd3

lxc copy h5/snap0 h-lxd3:h5 --verbose

Resources

Handy One-liners

for i in $(lxc ls | awk '{print $2}'); do lxc config set $i boot.autostart 0; done

for i in $(lxc ls | grep RUNNING | awk '{print $2}'); do lxc stop $i; done

for i in $(lxc ls | grep "^| ${string-match}" | awk '{print $2}'); do echo $i; sleep 2; lxc delete $i; done

cd /sys/fs/cgroup/memory/lxc && for i in $(echo */); do echo $i && cat $i/cgroup.procs; done




More on Getting Started with LXD. Note: this material is not yet organized.

Linux Containers (LXC) and Docker are the two main containerization systems, and there are many cases where Docker isn't the best option for containers.
If you run Linux on your home machine or server, you definitely want to check this out.

Virtual machine vs. virtual environment - https://www.upguard.com/articles/docker-vs-lxc - a pretty good article on containers; containers are lighter and faster than full virtualization.

Many times a developer wants a "clean" host for testing the build and install of their code without residual material from previous installs. CI gives you a "clean" build environment, but sometimes it's nice to just work through it in a container in the terminal. Linux containers via LXD are a great way to get started, and LXD is much easier for most people to work with than Docker. Docker forces you to do things a certain way that confuses many people, especially when working with persistent data.
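For that throwaway workflow, an ephemeral container is handy; a minimal sketch (names assumed):

lxc launch ubuntu:18.04 buildtest --ephemeral
lxc exec buildtest -- bash   # build and test in a clean rootfs, then exit
lxc stop buildtest           # an ephemeral container is deleted on stop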

Guides
Spend 15 minutes doing this walk-through on LXD and you'll see how easy it is to get started.

https://linuxcontainers.org/lxd/getting-started-cli/

Another good doc https://blog.ubuntu.com/2016/03/22/lxd-2-0-your-first-lxd-container

Here is a brief list of commands to get going on Ubuntu:

apt install lxd
lxd init   # just hit Enter to accept the defaults; they can be changed later
lxc launch ubuntu:xenial mynewcontainer
lxc exec mynewcontainer /bin/bash   # type exit to leave the container
lxc list
lxc stop mynewcontainer
sudo lxc delete --force <some-container-i-want-destroyed>
sudo lxc launch -p myprofile images:debian/9 debian9-container
lxc snapshot <container> <snapshot name>
lxc restore <container> <snapshot name>
On Ubuntu, if you are using the dir storage backend, you can browse container files under /var/lib/lxd/containers/mynewcontainer/rootfs.

If you want to use zfs for your LXD filesystem, run apt install zfsutils-linux.

You can view/edit files (if not using zfs) on the LXD host machine, e.g. sudo ls /var/lib/lxd/containers/<container-name>/rootfs/var/www/html. You could edit files there and then lxc restart <container-name>, or restart the service in the container via lxc exec <container-name> -- bash /tmp/myscript.sh.
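If you'd rather not touch the host filesystem (or you're on zfs), lxc file also copies files in and out; the config path here is just an example:

lxc file pull <container-name>/etc/nginx/nginx.conf .
lxc file push nginx.conf <container-name>/etc/nginx/nginx.conf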



#!/usr/bin/env sh
# Create LXD profile for host lan interface so you can get dhcp and expose a container on local interface without doing DNAT.
# This can create initial access issues if you are trying to access via network from your local network box. Use lxc exec for this.
lan_interface=$(ip route show default | awk '{print $5}')
lxc_profile_name="lanprofile"
lxc profile copy default ${lxc_profile_name}
lxc profile device set ${lxc_profile_name} eth0 nictype macvlan
lxc profile device set ${lxc_profile_name} eth0 parent ${lan_interface}
# lxc profile apply <container-name> ${lxc_profile_name}




Another great walkthrough

https://help.ubuntu.com/lts/serverguide/lxd.html



Reference Documentation (see root doc for more)

https://github.com/lxc/lxd/blob/master/doc/containers.md

https://github.com/lxc/lxd/tree/master/doc/ (all docs if you want more)



If you prefer pure LXC, see https://help.ubuntu.com/lts/serverguide/lxc.html. If you're a Debian purist, you will probably use LXC; see https://wiki.debian.org/LXC

There is a great, simple Google Play book on LXC and LXD if you need it, but the material above does a pretty good job:

https://play.google.com/store/books/details/Senthil_Kumaran_S_Practical_LXC_and_LXD?id=xaMzDwAAQBAJ

Ask us if you have any questions or need help. Using profiles, or working with VLANs, can be a little confusing at first.



More


lxc stop c1
lxc network attach lxdbr0 c1 eth0 eth0
lxc config device set c1 eth0 ipv4.address 10.99.10.42   # pin a static DHCP assignment
lxc start c1



Or, on older LXD releases with lxd-bridge:

vim /etc/default/lxd-static-ip.conf



dhcp-host=nginx,10.210.7.10
dhcp-host=host1-containername,10.210.7.11
dhcp-host=host2-containername,10.210.7.12

service lxd-bridge restart

Restart the container:

lxc restart containername

Perf

time lxd-benchmark launch --count 384 --parallel 24

Static IPs

An LXD container keeps the same DHCP lease until it is deleted. You can, however, set a static IP via LXD or inside the container.

lxc launch ubuntu:18.04 ntest
lxc list ntest
lxc stop ntest
lxc network attach lxdbr0 ntest eth0 eth0
lxc config device set ntest eth0 ipv4.address 10.70.195.227
lxc start ntest
lxc list ntest
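Alternatively, set the address inside the container. A sketch for an Ubuntu 18.04 guest with netplan; the subnet, gateway, and file name are assumptions to match your lxdbr0 network:

lxc exec ntest -- bash -c 'cat > /etc/netplan/99-static.yaml <<EOF
network:
  version: 2
  ethernets:
    eth0:
      addresses: [10.70.195.227/24]
      gateway4: 10.70.195.1
      nameservers:
        addresses: [10.70.195.1]
EOF
netplan apply'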

Network Isolation

iptables -A FORWARD -i bridge1 -o bridge2 -j REJECT
iptables -A FORWARD -i bridge2 -o bridge1 -j REJECT

GRE Tunnel to share layer 2 between cluster nodes

https://linuxacademy.com/blog/linux/multiple-lxd-hosts-can-share-a-discreet-layer-2-container-only-network/

VXLAN and other multi-node bridging without switches

* https://discuss.linuxcontainers.org/t/networking-between-two-lxd-servers/289/3
* https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/

Files of Interest

cat /sys/fs/cgroup/memory/lxc/cgroup.procs
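For example, to read one container's memory usage from that cgroup v1 tree (container name assumed):

cat /sys/fs/cgroup/memory/lxc/mycontainer/memory.usage_in_bytes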

Backup Method(s)

The “lxc image import” failure is because you’re attempting to import the generated image on the same server that produced it. You should just remove the image from the image store (with “lxc image delete”) after you’ve exported it as a tarball. That’ll solve that.

To clone a container, you’d just do “lxc copy SOURCE DESTINATION”, but that’s on a single local LXD. In your case, it looks like you’re trying to test your backup mechanism.

Say you have a container called “blah”. For backup as an image tarball, you’d do:

lxc snapshot blah backup
lxc publish blah/backup --alias blah-backup
lxc image export blah-backup .
lxc image delete blah-backup
Which will get you a tarball in your current directory.
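Those four steps can be looped over every container; a sketch that writes tarballs under ./backups:

mkdir -p ./backups
for c in $(lxc list -c n --format csv); do
  lxc snapshot "${c}" backup
  lxc publish "${c}/backup" --alias "${c}-backup"
  lxc image export "${c}-backup" "./backups/${c}"
  lxc image delete "${c}-backup"
done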

To restore and create a container from it, you can then do:

lxc image import TARBALL-NAME --alias blah-backup
lxc launch blah-backup some-container-name
lxc image delete blah-backup
This is still pretty indirect and abuses the image mechanism as a backup mechanism, though. One alternative you could use is to just generate a tarball of /var/lib/lxd/containers/NAME and dump that on your NAS.

Restoring that is a bit harder though. You’ll need to create a /var/lib/lxd/storage-pools/POOL-NAME/containers/NAME path matching the name of the backed up container. Then if the storage pool is zfs or btrfs or lvm, you’ll need to create the applicable dataset, subvolume or lv and mount it on /var/lib/lxd/storage-pools/POOL-NAME/containers/NAME and then unpack your backup tarball onto it. Lastly, you can call “lxd import NAME” to have LXD re-import the container in the database.
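As a rough sketch of those restore steps for a zfs pool named default and a container named blah (pool, container, and tarball names are all assumptions):

mkdir -p /var/lib/lxd/storage-pools/default/containers/blah
zfs create -o mountpoint=/var/lib/lxd/storage-pools/default/containers/blah default/containers/blah
tar -xzf blah-rootfs.tar.gz -C /var/lib/lxd/storage-pools/default/containers/blah
lxd import blah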

I think we can do something quite a bit simpler by directly allowing the export/import of containers as tarballs, but it may take a while until we get to that: https://github.com/lxc/lxd/issues/3730

ref: https://discuss.linuxcontainers.org/t/backup-the-container-and-install-it-on-another-server/463/4

Remote LXD

lxc config set core.trust_password <some password for api trust>
lxc config trust list
lxc config trust remove <fingerprint>
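Note: the server must also listen on the network for remote clients to connect; 8443 is the default API port:

lxc config set core.https_address "[::]:8443"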

Windows Client

https://ci.appveyor.com/project/lxc/lxd/branch/master/artifacts
Download and extract lxc.exe to c:/bin or place it within your exe path.
mkdir .config/lxc
lxc remote add lxd1 10.x.x.13
lxc remote switch lxd1
lxc list