Setup Incus

1. Install

Incus can normally be installed from a package, like this:

apt install --yes incus

# or, for Debian 12 (bookworm):
# apt install --yes incus/bookworm-backports
We can also install it from the Zabbly package repository, which provides the latest version:
  1. Get the key of the repository:

    mkdir -p /etc/apt/keyrings/
    curl -fsSL https://pkgs.zabbly.com/key.asc \
         -o /etc/apt/keyrings/zabbly.asc
  2. Add the package repository to the sources list:

    cat <<EOF > /etc/apt/sources.list.d/zabbly-incus.sources
    Enabled: yes
    Types: deb
    URIs: https://pkgs.zabbly.com/incus/stable
    Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
    Components: main
    Architectures: $(dpkg --print-architecture)
    Signed-By: /etc/apt/keyrings/zabbly.asc
    EOF
    
    cat /etc/apt/sources.list.d/zabbly-incus.sources
  3. Install the package:

    apt update
    apt install --yes incus
    
    incus --version
    incus ls

We also need to install btrfs-progs, since we are going to use a Btrfs storage backend with Incus:

apt install --yes btrfs-progs

2. Initialize

Before we can create containers, we need to initialize Incus:

incus admin init

The output of this command looks like this:
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir) [default=btrfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: yes
Path to the existing block device: /dev/nvme1n1
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:

We use the default answers for almost all the questions. The important questions are these:

  • Name of the storage backend to use: btrfs

    We are using btrfs for the storage backend because we need to install Docker inside the containers, and of the offered backends Btrfs is the one that supports this efficiently. Besides, the copy-on-write and deduplication features of Btrfs make it possible to use the disk space more efficiently.

  • Would you like to use an existing empty block device? yes

    We are using the second disk as storage for the Incus containers.

  • Path to the existing block device: /dev/nvme1n1

  • What IPv6 address should be used? none

    We are disabling IPv6 for the containers; we don’t need it.
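
For reference, the same answers can also be applied non-interactively with a preseed. The YAML below is a sketch reconstructed from our answers above (the exact preseed can be printed by answering yes to the last question of incus admin init):

incus admin init --preseed <<EOF
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
storage_pools:
- name: default
  driver: btrfs
  config:
    source: /dev/nvme1n1
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
EOF

# verify the result
incus storage list
incus network list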

3. Networking

The connection of the Incus containers to the Internet goes through the host. The bridge network incusbr0 can be thought of as a virtual switch that provides a DHCP service for the containers connected to it. It also acts as their gateway and provides NAT.
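
We can inspect this bridge on the host; it shows up as a normal network interface, and its configuration (subnet, DHCP, NAT) is kept by Incus:

ip addr show incusbr0
incus network show incusbr0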

3.1. Fix the firewall

In firewalld (which is installed on the server), any interface that is not explicitly assigned to a zone is placed in the default zone, which is the zone public. This zone is meant for the interfaces that are facing the public internet, so it has restrictions.

The bridge interface of Incus (incusbr0) is also added by default to the restricted public zone. As a result, DHCP requests are blocked, and the containers cannot get an IP.
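
We can confirm this by checking the zones:

firewall-cmd --get-default-zone
firewall-cmd --get-active-zones
firewall-cmd --zone=public --list-all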

Test networking in a container

If we create a test container, we will notice that the network in the container is not working:

incus launch images:ubuntu/22.04 u22
incus ls
incus exec u22 -- ip addr

The container did not get an IP address, as it normally would.

However, if you stop firewalld and restart the container, everything works fine:

systemctl status firewalld
systemctl stop firewalld

incus restart u22

incus ls
incus exec u22 -- ip addr
incus exec u22 -- ping 8.8.8.8

systemctl start firewalld
systemctl status firewalld

So the problem is that the firewall is not configured properly.

Make sure that IP forwarding is enabled

By the way, IP forwarding should already be enabled in the kernel of the host:

sysctl net.ipv4.ip_forward
cat /proc/sys/net/ipv4/ip_forward

If it is not, enable it like this:

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p

Let’s fix this problem by adding the bridge interface to the trusted zone, where everything is allowed:

firewall-cmd --zone=trusted --list-all
firewall-cmd --zone=trusted \
        --add-interface=incusbr0 --permanent
firewall-cmd --reload
firewall-cmd --zone=trusted --list-all
Let’s check that it is working:
incus restart u22
incus ls
incus exec u22 -- ip addr
incus exec u22 -- ping 8.8.8.8

If the ping is still not working, usually the problem is that forwarding is blocked. If you run iptables-save | head and see something like this: :FORWARD DROP [4:2508], it means that the default policy of the FORWARD chain is DROP. It may have been set by Docker, if Docker is installed.

We can make the default policy ACCEPT, like this: iptables -P FORWARD ACCEPT. However, the next time that the server is rebooted, or firewalld is restarted, we may lose this configuration.

Let’s make sure that forwarding is enabled permanently:

firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --reload

firewall-cmd --direct --get-all-rules
More specific forwarding rules

The rule above will enable (ACCEPT) forwarding for all the interfaces, both the current ones and the ones that will be created in the future.

If this is not what you want, you can use more specific rules, like these:

firewall-cmd --permanent --direct --remove-rule \
    ipv4 filter FORWARD 0 -j ACCEPT

firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -i incusbr0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -o incusbr0 -j ACCEPT

firewall-cmd --reload
firewall-cmd --direct --get-all-rules
Clean up

Let’s test again and then clean up the test container:

incus exec u22 -- ping 8.8.8.8

incus stop u22
incus rm u22
incus ls

3.2. Limit DHCP range

Incus containers usually get an automatic IP from the DHCP server that is provided by incusbr0. Sometimes we need to assign a fixed IP to some containers. To avoid any possible conflicts between the fixed IPs and the automatic IPs issued by DHCP, we should limit the range of the DHCP IPs and make sure that the fixed IPs are outside this range.

Let’s modify the DHCP range in the configuration of incusbr0:

incus network show incusbr0
incus network get incusbr0 ipv4.address

incus network set incusbr0 \
    ipv4.dhcp.ranges 10.148.0.2-10.148.0.200

incus network get incusbr0 ipv4.dhcp.ranges
incus network show incusbr0
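
As a quick check, a freshly launched container should now get an address within this range (the image and the container name below are just examples):

incus launch images:debian/12 t1
sleep 5    # give DHCP a moment
incus ls t1
incus delete --force t1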

3.3. Port forwarding

Sometimes we need to forward ports from the host to one of the containers. We can use the command incus network forward for this.

The container should have a fixed IP.
How to set a fixed IP in a Debian container
IP=10.148.0.201/24
GW=10.148.0.1

cat <<EOF > /etc/systemd/network/eth0.network
[Match]
Name=eth0

[Address]
Address=$IP

[Route]
Gateway=$GW

[Network]
DHCP=no
DNS=8.8.8.8
EOF

cat /etc/systemd/network/eth0.network
systemctl restart systemd-networkd

ip addr
ip ro
ping google.com
How to set a fixed IP in an Ubuntu container
apt purge cloud-init
rm /etc/netplan/50-cloud-init.yaml

IP=10.148.0.201/24
GW=10.148.0.1
cat <<EOF > /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - $IP
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      routes:
        - to: default
          via: $GW
EOF

chmod 600 /etc/netplan/01-netcfg.yaml
cat /etc/netplan/01-netcfg.yaml

netplan apply

ip address
ip route
ping 8.8.8.8
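
Alternatively, since incusbr0 is a managed network, we can pin the IP from the host side by overriding the NIC device of the container. This is a sketch, assuming a container named test1; the address must belong to the subnet of incusbr0 and lie outside the DHCP range:

# test1 is an example container name
incus config device override test1 eth0 \
    ipv4.address=10.148.0.201
incus restart test1
incus ls test1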
Some examples
HOST_IP=10.11.12.13           # the public IP of the host
CONTAINER_IP=10.148.0.201

# create forwarding table for 'incusbr0' and HOST_IP
incus network forward create incusbr0 $HOST_IP
incus network forward list incusbr0
# forward the TCP ports 25,465,587,110,143,993,995
incus network forward port add incusbr0 \
    $HOST_IP tcp 25,465,587,110,143,993,995 \
    $CONTAINER_IP
incus network forward show incusbr0 $HOST_IP
# forward 2201 on HOST_IP to 22 on CONTAINER_IP
incus network forward port add incusbr0 \
    $HOST_IP tcp 2201 \
    $CONTAINER_IP 22
incus network forward show incusbr0 $HOST_IP
# forward the UDP port 3478 and the range 50001-65535
incus network forward port add incusbr0 \
    $HOST_IP udp 3478,50001-65535 \
    $CONTAINER_IP
incus network forward show incusbr0 $HOST_IP
How to test port forwarding

We can use netcat to test that ports are forwarded correctly. On the server run:

NAME=test1
incus exec $NAME -- apt install --yes netcat-openbsd
incus exec $NAME -- nc -l 110

Outside the server run:

nc $HOST_IP 110

Every line that is typed outside the server should be displayed inside the server, and vice versa.

Use the option -u for testing UDP ports:

incus exec $NAME -- nc -u -l 65535
nc -u $HOST_IP 65535

4. Access

4.1. From non-root user

To access Incus from a user other than root, add that user to the group incus:

adduser user1
adduser user1 incus

In this case user1 can use Incus, but is limited to their own workspace (they cannot access the containers of other users).

If you add user1 to the group incus-admin instead, they will have full access to the Incus server (the same access as root):

adduser user1 incus-admin
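
Note that the new group membership takes effect on the next login of user1. We can verify the access like this:

id user1
su - user1 -c 'incus ls'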

4.2. Remotely

We can connect to the Incus server from a local machine, for example a laptop, and manage it remotely.

  1. On the local machine, install Incus (but don’t initialize it with incus admin init).

  2. On the server, allow access to Incus on port 8443:

    incus config set core.https_address :8443
  3. Make sure that port 8443 on the server is open:

    firewall-cmd --zone=public --add-port=8443/tcp --permanent
    firewall-cmd --reload
    firewall-cmd --zone=public --list-ports
  4. On the server, generate a trust token for client1:

    incus config trust add client1
  5. On the local machine, add a remote, like this:

    incus remote add server1 11.12.13.14

    This will prompt you to confirm the fingerprint of the remote server, and then ask for the token that was generated above.

    11.12.13.14 is the public IP of the server.
  6. Make the remote named server1 the default one, and test it:

    incus remote ls
    incus remote switch server1
    incus remote ls
    incus ls

    Now all the incus commands on the local laptop will be executed by default on the remote Incus server.

  7. On the local machine (laptop), also install virt-viewer (or spice-client-gtk), which is needed to access the VGA console of the virtual machines:

    apt install virt-viewer
    Be aware that, without Xpra, the GUI displayed by virt-viewer is not smooth but slow and laggy.

5. Manage Btrfs

Incus uses the second disk (/dev/nvme1n1) as storage, which is formatted with Btrfs. This filesystem is the most suitable and efficient for our needs (because it supports copy-on-write, deduplication, etc.). However, it is a bit different from the traditional ext2/ext3/ext4 filesystems, and we may have problems if we don’t know how to manage it properly.

5.1. Disable quotas

Quotas are usually used to restrict the size of the users’ home directories. On our server we don’t have multiple users, so we don’t need them. Besides, the current implementation of quotas in Btrfs may cause high CPU utilization and performance issues, especially when creating or deleting snapshots.

So, let’s make sure to disable them:

# mount the disk used as a storage by incus
mkdir mnt
mount /dev/nvme1n1 mnt
ls mnt/

# disable quotas
btrfs qgroup show mnt/
btrfs quota disable mnt/
btrfs qgroup show mnt/

# unmount
umount mnt
ls mnt/
rmdir mnt/

5.2. Balance

5.3. Deduplicate