Archlinux: LXC, LXD & Juju
Milind Nirgun
The default Arch kernel ships with user namespaces enabled only for the root user, so the following configuration is required to run unprivileged containers.
Create /etc/subuid and /etc/subgid with the following line for each user who should be able to run containers:
root:100000:65536
user1:100000:65536
Then add matching id mappings to the LXC default configuration (e.g. /etc/lxc/default.conf, or ~/.config/lxc/default.conf for per-user setups):
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
Configuring cgroups
The following cgroup controllers are used by LXC containers and require access to be configured for non-root users:
blkio
cpu, cpuset, cpuacct
devices
freezer
memory
net_cls
perf_event
pids
rdma
Create a new cgroup (groupname below), owned by the user, under each of the above controllers:
cgcreate -a user:group -t user:group -g blkio,cpu,...,rdma:groupname
Verify that the above command was successful by checking some of the cgroups, as shown below.
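A quick check, assuming groupname from the command above, is to confirm that the new cgroup directories exist and are owned by the user:
$ ls -ld /sys/fs/cgroup/cpu/groupname
$ ls -ld /sys/fs/cgroup/memory/groupname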
Creating Containers
If GPG validation of image signatures fails during download, point the download template at an alternate keyserver such as hkp://p80.pool.sks-keyservers.net:80 (see the example below).
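The download template honors the DOWNLOAD_KEYSERVER environment variable; a sketch (the container name and image are illustrative):
$ DOWNLOAD_KEYSERVER="hkp://p80.pool.sks-keyservers.net:80" \
  lxc-create -n test1 -t download -- -d centos -r 7 -a amd64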
Starting Containers
Run fixcgroups.sh
Add ACLs on the user's home and .local directories to give the unprivileged container access:
cd $HOME
setfacl -m u:100000:x . .local
getfacl . .local
To be figured out:
1. Change the /var/lib path to use a custom mount point
By default, snapd uses /var/lib/snapd and /var/snap for storing all its files. Symbolic links pointing to other mount points may not work, so bind mount these directories instead to save space on the root filesystem.
If these bind mounts were not created before setting up snapd, create alternate directories under /opt, stop the snapd service, and move the existing contents over.
Once the two snap directories are empty, bind mount the /opt copies over them:
$ mount -o bind /opt/snapd /var/lib/snapd
$ mount -o bind /opt/snap /var/snap
Start the snapd service again and verify that containers are working. Then add bind entries to /etc/fstab so the mounts persist across reboots.
#mount bind for snapd
/opt/snapd /var/lib/snapd none bind 0 0
/opt/snap /var/snap none bind 0 0
Set up a symbolic link at /snap for ease of use and add the path to your user's $PATH.
$ ln -s /var/lib/snapd/snap /snap
PATH=$PATH:/snap/bin; export PATH
The default options for lxd init will create a dir-type storage backend and a network bridge named lxdbr0. Verify everything was created properly with the commands below, and make sure the default profile is set to use lxdbr0 as its bridge.
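A reasonable spot check (standard LXD commands; output will vary by setup):
$ lxc storage list
$ lxc network list
$ lxc profile show default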
Using LXD to create containers
Download a CentOS 7 image without starting the container. This will create a container named 'master' (see the sketch below).
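The command itself was not captured in these notes; with the images: remote it would look like this:
$ lxc init images:centos/7 master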
Log in to the container as a non-root user (example with the default user ubuntu):
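One common way to do this for an Ubuntu image is via sudo inside the container:
$ lxc exec <container> -- sudo --login --user ubuntu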
LXC Commands
About images
lxc remote list - list all image servers
lxc image list images: - list all available images from an image server
lxc image list images: arm64 - get a filtered list of all arm64 images
lxc image list images: arm64 ubuntu - further filtering
Container status
lxc list [--fast] - list containers
lxc info <container> - show detailed information about a container
Managing files
lxc file pull <container>/<path> <dest>
lxc file push <source> <container>/<path>
lxc file edit <container>/<path>
Snapshot management
lxc snapshot <container> [<snapshot name>]
lxc info <container>
lxc restore <container> <snapshot name>
lxc move <container>/<snapshot name> <container>/<snapshot name>
lxc copy <source container>/<snapshot name> <destination container>
lxc delete <container>/<snapshot name>
Container configuration
lxc profile list # List available profiles
lxc profile show <profile> # See contents of a given profile
lxc profile edit <profile> # Edit contents of a profile
lxc profile apply <container> <profile1>,<profile2>... # Apply profiles to a container
Local configuration (set unique configuration for each container instead of using profiles)
lxc config edit <container>
lxc config set <container> <key> <value>
lxc config device add <container> <device> <parameters>
E.g. lxc config device add my-container kvm unix-char path=/dev/kvm
lxc config show [--expanded] <container>
Execution Environment
Commands executed through LXD will always run as the container’s root user (uid 0, gid 0)
with a minimal PATH environment variable set and a HOME environment variable set to
/root.
Additional environment variables can be passed through the command line or can be set
permanently against the container through the “environment.<key>” configuration
options.
lxc exec <container> bash # Gives shell access in the container
lxc exec <container> -- ls -lh /
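For example, a variable can be passed for a single command with --env, or stored persistently via the environment.<key> config (the HTTP_PROXY key and proxy address here are just illustrations):
$ lxc exec <container> --env HTTP_PROXY=http://proxy:3128 -- env
$ lxc config set <container> environment.HTTP_PROXY http://proxy:3128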
Important directories:
~/.local/share/lxc -- used by native lxc, not by snap-lxd
~/.config/lxc -- per-user configuration for native lxc (e.g. default.conf)
~/.cache/lxc -- downloaded images are stored here, again used by native lxc
/var/lib/lxc -- used by native lxc
/usr/share/lxc -- templates and common files shipped with native lxc
/var/snap and /var/lib/snapd - directories used to store all snaps and related files.
Networking:
Run the following on the host to forward all requests on port 80 to the web server running in the container.
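One way is an LXD proxy device; a sketch (the device name web80 is illustrative, and the container is assumed to serve on its local port 80):
$ lxc config device add <container> web80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80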
Storage:
Create a new storage pool on a block device:
$ lxc storage create <pool name> btrfs source=/dev/mapper/vg_4tb-lv_lxcshared
$ lxc storage list
+---------+-------------+--------+--------------------------------------------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| brtfs1 | | btrfs | e4edc4fa-b292-40ab-8fa5-3830bfa26bfc | 2 |
+---------+-------------+--------+--------------------------------------------+---------+
| default | | btrfs | /var/snap/lxd/common/lxd/disks/default.img | 4 |
+---------+-------------+--------+--------------------------------------------+---------+
| shared | | btrfs | 3f77ab74-ae96-4603-94c2-7566f90c4d29 | 1 |
+---------+-------------+--------+--------------------------------------------+---------+
$ lxc storage volume create <pool name> <volume name>
$ lxc storage volume attach <pool name> <volume name> <container> data <path to mount>
The same volume can be shared between multiple containers by attaching it to each of them. The example below shows how to share the same mount between two containers, alp1 and alp2.
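A sketch, assuming the pool shared from the listing above and a volume named vol1 (vol1, alp2, and the mount path are assumptions):
$ lxc storage volume create shared vol1
$ lxc storage volume attach shared vol1 alp1 data /mnt/data
$ lxc storage volume attach shared vol1 alp2 data /mnt/data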
Useful Links:
Stephane Graber’s articles
https://linuxcontainers.org/lxc/articles/
https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc
Storage configuration
Troubleshooting:
Issue found: unable to create the network while starting a container. Make sure the veth kernel module is loaded:
https://github.com/lxc/lxc/issues/1604
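To check for and, if needed, load the module (standard lsmod/modprobe usage; modprobe requires root):
$ lsmod | grep veth
$ modprobe veth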
1. Download the LXD profile configuration for GUI from this file:
https://blog.simos.info/wp-content/uploads/2018/06/lxdguiprofile.txt
2. Create an empty profile called 'gui' and update it with the configuration downloaded above
$ lxc profile create gui
$ cat lxdguiprofile.txt | lxc profile edit gui
$ lxc profile list
+----------------------------------+---------+
| NAME | USED BY |
+----------------------------------+---------+
| gui | 0 |
+----------------------------------+---------+
$ lxc profile show gui
3. The gui profile has configuration parameters only for graphics, not for networking or storage. Therefore, when launching, use both the default and the gui profiles, in that order, as shown below.
$ lxc launch --profile default --profile gui ubuntu:18.04 gui1804-1
Creating gui1804-1
Starting gui1804-1
Using juju on Ubuntu for provisioning containers
juju bootstrap
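With LXD working, juju can bootstrap a controller onto the built-in localhost cloud, e.g. (the controller name lxd-test is illustrative):
$ juju bootstrap localhost lxd-test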
To deploy a sample multi-tier application with nginx, a node app, and mongodb (a sketch is shown below).
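The exact commands depend on the charms used; a hedged sketch where mongodb is a real Charm Store charm and node-app / nginx-proxy are placeholder charm names:
$ juju deploy mongodb
$ juju deploy node-app # placeholder charm name
$ juju deploy nginx-proxy # placeholder charm name
$ juju add-relation node-app mongodb
$ juju add-relation nginx-proxy node-app
$ juju expose nginx-proxy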