
LXC, LXD & JUJU

Archlinux
Milind Nirgun

Installation of LXC using Arch Linux tools

Install lxc and arch-install-scripts
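
Both packages are available in the Arch repositories; a minimal sketch of the install:

sudo pacman -S lxc arch-install-scripts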

Enable support for unprivileged containers


A kernel supporting User Namespaces is required - linux v4.14.5 or later

The default Arch kernel ships with User Namespaces enabled only for the root user, so the
following two options are available -

● With linux v4.14.5 or later, start unprivileged containers only as root.


● With linux v4.14.5 or later, enable the sysctl setting kernel.unprivileged_userns_clone to allow normal users to run unprivileged containers.
○ To enable it for the current session, run: sysctl kernel.unprivileged_userns_clone=1
○ To make it permanent, set it in a drop-in file as described in sysctl.d(5); see the sketch below.
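
A minimal sketch of the permanent setting, assuming a drop-in file named /etc/sysctl.d/80-userns.conf (the file name is illustrative):

# /etc/sysctl.d/80-userns.conf - allow unprivileged users to create user namespaces
kernel.unprivileged_userns_clone = 1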

Modify /etc/lxc/default.conf to add the following lines:


lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536

Create /etc/subuid and /etc/subgid with a line like the following for each user who should be able to run
containers.
root:100000:65536
user1:100000:65536
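
Alternatively, the subordinate id ranges can be added with usermod; a sketch, assuming the user is named user1:

sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 user1  # user1 is a placeholder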

Configure user to run lxc


Copy /etc/lxc/default.conf to the user’s $HOME/.config/lxc directory and set the
following values in the copy -


lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx

lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
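
The copy step itself is just (a minimal sketch):

mkdir -p $HOME/.config/lxc
cp /etc/lxc/default.conf $HOME/.config/lxc/default.conf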

Configuring cgroups

The following cgroups are used by lxc containers and require access configured for non-root users:
blkio
cpu, cpuset, cpuacct
devices
freezer
memory
net_cls
perf_event
pids
rdma

Use the libcgroup package (available in the AUR) on Arch Linux to configure and manage cgroups.

Create a new cgroup (groupname below) for giving a user access to the containers and add the
above cgroup controllers to it.
cgcreate -a user:group -t user:group -g blkio,cpu,...,rdma:groupname
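
A concrete sketch of the command, assuming the user milind from the paths elsewhere in this document and an illustrative cgroup name lxcusers:

sudo cgcreate -a milind:milind -t milind:milind -g blkio,cpu,cpuacct,cpuset,devices,freezer,memory,net_cls,perf_event,pids,rdma:lxcusers  # names are placeholders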

Verify that the above command was successful by checking some of the cgroups. For example,
ls -l /sys/fs/cgroup/memory should show a new directory named groupname, owned by the
user and group specified in the cgcreate command above. This directory should have the
relevant cgroup files under it, owned by that user.

Enable the cgconfig service with: sudo systemctl enable cgconfig.service

Start the cgconfig service with: sudo systemctl start cgconfig.service

Creating Containers

lxc-create -n master -t download -- --rootfs=/home/milind/lxc_containers/ --keyserver hkp://p80.pool.sks-keyservers.net:80

lxc-create -n minion1 -t download -- --rootfs=/home/milind/lxc_containers/ --keyserver hkp://p80.pool.sks-keyservers.net:80
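
Verify the containers were created with lxc-ls (shipped with the lxc package):

lxc-ls --fancy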

Starting Containers

Run fixcgroups.sh

Add an ACL on the user’s home and .local directories to give access to the unprivileged container
user (typically UID 100000).

cd $HOME
setfacl -m u:100000:x . .local
getfacl . .local
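
With access in place, the containers can be started and inspected with the native tools (a minimal sketch, using the master container created earlier):

lxc-start -n master
lxc-info -n master
lxc-attach -n master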

Alternate installation with snap

To be figured out:
1. Change the /var/lib path to use a custom mount point

Install snapd from the AUR.

By default, snapd uses /var/lib/snapd and /var/snap for storing all its files. Symbolic links to
other mount points may not work, so bind mount these directories onto a different mount point
to save space on /var. Follow the steps below.

If the bind mounts were not created before setting up snapd, create alternate directories
under /opt and stop the snapd services:


$ sudo systemctl stop snapd
$ sudo systemctl stop snapd.socket
$ mkdir /opt/snap /opt/snapd

Move everything from the respective /var directories, for example:


$ mv /var/snap/* /opt/snap/
$ mv /var/lib/snapd/* /opt/snapd

Once the two /var directories are empty, bind mount the new /opt directories over them with
$ mount -o bind /opt/snapd /var/lib/snapd
$ mount -o bind /opt/snap /var/snap

Start the snapd services again and verify that containers are working.

Then modify /etc/fstab to add bind entries so the mounts persist across reboots.
#mount bind for snapd
/opt/snapd /var/lib/snapd none bind 0 0

#mount bind for lxd
/opt/snap /var/snap none bind 0 0

Proceed with the next steps to continue.

$ sudo ln -s /var/lib/snapd/snap /snap

See all the steps for setting up lxd at https://docs.conjure-up.io/2.4.0/en/

Message from Kubernetes Core in conjure-up


$ sudo snap install lxd
$ sudo usermod -a -G lxd <youruser>
$ newgrp lxd # switches the user’s current group to lxd, allowing them to run lxd commands
$ /snap/bin/lxd init

Set up a symbolic link as /snap for ease of use and add the path to your user’s $PATH.
$ ln -s /var/lib/snapd/snap /snap
PATH=$PATH:/snap/bin; export PATH

The default options for lxd init will create a dir-type storage backend and the network bridge lxdbr0.
Verify everything was created properly with the following commands. Make sure that the
default profile is set to use lxdbr0 as its bridge.

$ lxc storage list
$ lxc network show lxdbr0
$ lxc profile show default

Using LXD to create containers

Download a CentOS 7 image without starting the container. This will create a container named
‘master’:

$ lxc init images:centos/7/amd64 master

$ lxc init images:alpine/3.8/amd64 <name>

Creating a container with a specific storage pool other than default:

$ lxc init images:alpine/3.8/amd64 <name> -s <storage pool name>

Start the container with:

$ lxc start master

Get details about the container:

$ lxc info master

Log in to the container as a non-root user (example with the default user ubuntu):

$ lxc exec master -- sudo --login --user ubuntu

LXC Commands

About images
lxc remote list - list all image servers
lxc image list images: - list all available images from an image server
lxc image list images: arm64 - get a filtered list of all arm64 images
lxc image list images: arm64 ubuntu - further filtering

Creating and running containers


lxc launch ubuntu:16.04
lxc launch ubuntu:16.04 u1
lxc init images:centos/7/amd64

Getting information about containers

lxc list [--fast]
lxc info <container>

Container lifecycle management


lxc start <container>
lxc stop <container>
lxc restart <container> [--force]
lxc pause <container>
lxc delete <container>

Managing files
lxc file pull <container>/<path> <dest>
lxc file push <source> <container>/<path>
lxc file edit <container>/<path>

Snapshot management
lxc snapshot <container> [<snapshot name>]
lxc info <container>
lxc restore <container> <snapshot name>
lxc move <container>/<snapshot name> <container>/<new snapshot name> # rename a snapshot
lxc copy <source container>/<snapshot name> <destination container>
lxc delete <container>/<snapshot name>
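
A short worked example of the snapshot cycle (the master container from earlier is assumed; the snapshot name is illustrative):

lxc snapshot master clean   # 'clean' is an illustrative snapshot name
lxc restore master clean
lxc delete master/clean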

Cloning and renaming


lxc copy <source container> <destination container>
lxc move <old name> <new name> # container must be stopped

Container configuration
lxc profile list # List available profiles
lxc profile show <profile> # See contents of a given profile
lxc profile edit <profile> # Edit contents of a profile
lxc profile apply <container> <profile1>,<profile2>... # Apply profiles to a container

Local Configuration (set unique configurations for each container instead of using profiles)
lxc config edit <container>
lxc config set <container> <key> <value>
lxc config device add <container> <device> <parameters>
Eg. lxc config device add my-container kvm unix-char path=/dev/kvm
lxc config show [--expanded] <container>

Execution Environment

Commands executed through LXD will always run as the container’s root user (uid 0, gid 0)
with a minimal PATH environment variable set and a HOME environment variable set to
/root.

Additional environment variables can be passed through the command line or can be set
permanently against the container through the “environment.<key>” configuration
options.
lxc exec <container> bash # Gives access through a shell in the container
lxc exec <container> -- ls -lh /

lxc console <container> # Gives a login prompt for the container
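
A sketch of both ways to pass an extra environment variable, assuming the master container from earlier:

lxc exec master --env EDITOR=vim -- env        # set for one command only
lxc config set master environment.EDITOR vim   # set permanently on the container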

Important directories:
~/.local/share/lxc -- used by native lxc, not by snap-lxd
~/.config/lxc -- not sure, probably used by native lxc
~/.cache/lxc -- downloaded images are stored here, again used by native lxc
/var/lib/lxc -- used by native lxc
/usr/share/lxc

/var/snap/lxd/common/lxd/storage-pools -- all runtime container images are stored here for each storage pool. This should be mounted on a mount point with enough space.

/var/snap and /var/lib/snapd -- directories used to store all snaps and related files.

Networking:
Run on the host to forward all requests on port 80 to the web server running in the container:

sudo iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination <container ip>:80
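
To inspect the rule afterwards (a sketch):

sudo iptables -t nat -L PREROUTING -n --line-numbers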

Storage:
Create a new storage pool on a block device:
$ lxc storage create btrfs1 btrfs source=/dev/mapper/vg_4tb-lv_lxcshared
$ lxc storage list
+---------+-------------+--------+--------------------------------------------+---------+
| NAME    | DESCRIPTION | DRIVER | SOURCE                                     | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| btrfs1  |             | btrfs  | e4edc4fa-b292-40ab-8fa5-3830bfa26bfc       | 2       |
+---------+-------------+--------+--------------------------------------------+---------+
| default |             | btrfs  | /var/snap/lxd/common/lxd/disks/default.img | 4       |
+---------+-------------+--------+--------------------------------------------+---------+
| shared  |             | btrfs  | 3f77ab74-ae96-4603-94c2-7566f90c4d29       | 1       |
+---------+-------------+--------+--------------------------------------------+---------+

Create a custom volume in an existing storage pool:

$ lxc storage volume create <pool name> <volume name>

Attach the volume to a container:

$ lxc storage volume attach <pool name> <volume name> <container> data <path to mount>

The same volume can be shared between multiple containers by attaching it to each of them,
provided all the containers have the same id mappings.

The example below shows how to share the same mount between two containers, alp1 and alp2,
using the storage pool created above:


$ lxc storage volume create shared shared-data
$ lxc storage volume attach shared shared-data alp1 data /opt/data
$ lxc storage volume attach shared shared-data alp2 data /opt/data

Useful Links:
Stephane Graber’s articles
https://linuxcontainers.org/lxc/articles/

Mi Blog Lah! - Useful and well written articles on LXD
https://blog.simos.info/how-to-easily-run-graphics-accelerated-gui-apps-in-lxd-containers-on-your-ubuntu-desktop/

LXD getting started
https://linuxcontainers.org/lxd/getting-started-cli/
https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc

For configuring and troubleshooting graphical containers, look at this article:
https://blog.simos.info/how-to-run-graphics-accelerated-gui-apps-in-lxd-containers-on-your-ubuntu-desktop/


Troubleshooting:

An issue was found where a network could not be created while starting a container. Make sure the
veth module is loaded by the kernel. The issue description is at:
https://github.com/lxc/lxc/issues/1604
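
To check for and load the module (a minimal sketch):

lsmod | grep veth
sudo modprobe veth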

Launching containers with GUI apps

Link for detailed info: https://blog.simos.info/how-to-easily-run-graphics-accelerated-gui-apps-in-lxd-containers-on-your-ubuntu-desktop/

1. Download the LXD profile configuration for GUI from this file:

https://blog.simos.info/wp-content/uploads/2018/06/lxdguiprofile.txt

2. Create an empty profile called ‘gui’ and update it with the configuration downloaded above:
$ lxc profile create gui
$ cat lxdguiprofile.txt | lxc profile edit gui
$ lxc profile list
+----------------------------------+---------+
| NAME | USED BY |
+----------------------------------+---------+
| gui | 0 |
+----------------------------------+---------+
$ lxc profile show gui

3. The gui profile has configuration parameters only for graphics, not for networking and
storage. Therefore, while launching, use both the default and the gui profiles, in that
order, as shown below.
$ lxc launch --profile default --profile gui ubuntu:18.04 gui1804-1
Creating gui1804-1
Starting gui1804-1

***************** Validation required ***************

Using juju on Ubuntu for provisioning containers

Create a juju controller on the localhost (LXD) cloud:

juju bootstrap localhost

To list all the containers provisioned by juju

lxc list juju- or juju status

To start the Juju GUI - juju gui

To deploy a sample multi-tier application with nginx, a node app and mongodb

juju deploy cs:~charmers/bundle/web-infrastructure-in-a-box

Check the reverse proxy on port 80 with http://<ip address of nginx-proxy>

Clean up everything by destroying the controller, or remove applications one at a time -

juju destroy-controller test
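
To remove a single application instead of the whole controller (a sketch; the application name is assumed from the proxy check above):

juju remove-application nginx-proxy  # application name assumed from the bundle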

