Update templates

Jip-Hop 2024-02-15 17:23:57 +01:00
parent 0d742e8a90
commit dd60c6a6f6
6 changed files with 81 additions and 71 deletions


@@ -1,3 +1,5 @@
# Debian Docker Jail Template
## Setup
Check out the [config](./config) template file. You may provide it when asked during `jlmkr create` or, if you have the template file stored on your NAS, you may provide it directly by running `jlmkr create mydockerjail /mnt/tank/path/to/docker/config`.
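For example, both options as a quick sketch (the jail name and template path are taken from the sentence above):
```bash
# Option 1: create interactively; jlmkr asks for a config template
jlmkr create mydockerjail

# Option 2: pass the template file directly
jlmkr create mydockerjail /mnt/tank/path/to/docker/config
```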


@@ -1,10 +1,33 @@
# Debian Incus Jail Template (LXD / LXC / KVM)
## Disclaimer
**Experimental. Using Incus in this setup hasn't been extensively tested and has [known issues](#known-issues).**
## Setup
Check out the [config](./config) template file. You may provide it when asked during `jlmkr create` or, if you have the template file stored on your NAS, you may provide it directly by running `jlmkr create myincusjail /mnt/tank/path/to/incus/config`.
Unfortunately Incus doesn't install from the `initial_setup` script inside the config file, so we finish the setup manually by running the following after creating and starting the jail:
```bash
jlmkr exec myincusjail bash -c 'apt-get -y install incus incus-ui-canonical &&
incus admin init'
```
Follow [First steps with Incus](https://linuxcontainers.org/incus/docs/main/tutorial/first_steps/).
Then visit the Incus GUI in the browser at https://0.0.0.0:8443. To find out which IP address to use instead of 0.0.0.0, check the IP address of your jail with `jlmkr list`.
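For example (192.0.2.10 is a placeholder for whatever address your jail reports):
```bash
jlmkr list    # shows the jail's IP address
# then open https://192.0.2.10:8443 in the browser
```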
## Known Issues
Using Incus in the jail will cause the following error when starting a VM from the TrueNAS SCALE web GUI:
```
[EFAULT] internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied 2024-02-16T14:40:14.886658Z qemu-system-x86_64: -accel kvm: failed to initialize kvm: Permission denied
```
A reboot will resolve the issue (until you start the Incus jail again).
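If you want to investigate before rebooting, a hypothetical starting point (this assumes the jail alters the permissions of the KVM device node, which these notes don't confirm):
```bash
# On the TrueNAS host: QEMU needs read/write access to /dev/kvm
ls -l /dev/kvm
```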
## Create Ubuntu Desktop VM


@@ -8,6 +8,7 @@ gpu_passthrough_nvidia=0
# Ensure to change eno1/br1 to the interface name you want to use
# You may want to add additional options here, e.g. bind mounts
# TODO: don't use --capability=all but specify only the required capabilities
# TODO: or add and use privileged flag?
systemd_nspawn_user_args=--network-macvlan=eno1
--resolv-conf=bind-host
--capability=all
@@ -31,6 +32,8 @@ pre_start_hook=#!/usr/bin/bash
# NOTE: this script will run in the host networking namespace and ignores
# all systemd_nspawn_user_args such as bind mounts
initial_setup=#!/usr/bin/bash
set -euo pipefail
apt-get update && apt-get -y install curl
mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
@@ -44,7 +47,6 @@ initial_setup=#!/usr/bin/bash
EOF'
apt-get update

# You generally will not need to change the options below
systemd_run_default_args=--property=KillMode=mixed
@@ -55,10 +57,8 @@ systemd_run_default_args=--property=KillMode=mixed
--property=TasksMax=infinity
--collect
--setenv=SYSTEMD_NSPAWN_LOCK=0
# TODO: add below if required:
# --property=DevicePolicy=auto

systemd_nspawn_default_args=--keep-unit
--quiet
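Once the jail is running you can sanity-check that the Zabbly repository added by `initial_setup` is active; a sketch, with `myincusjail` as a placeholder name:
```bash
# The policy output should list pkgs.zabbly.com as the source of the incus package
jlmkr exec myincusjail bash -c 'apt-cache policy incus'
```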


@@ -1,17 +1,29 @@
# Ubuntu LXD Jail Template
## Disclaimer
**Experimental. Using LXD in this setup hasn't been extensively tested and has [known issues](#known-issues).**
## Setup
Check out the [config](./config) template file. You may provide it when asked during `jlmkr create` or, if you have the template file stored on your NAS, you may provide it directly by running `jlmkr create mylxdjail /mnt/tank/path/to/lxd/config`.
Unfortunately snapd doesn't install from the `initial_setup` script inside the config file, so we finish the setup manually by running the following after creating and starting the jail:
```bash
# Repeat listing the jail until you see it has an IPv4 address
jlmkr list
# Install packages
jlmkr exec mylxdjail bash -c 'apt-get update &&
apt-get install -y --no-install-recommends snapd &&
snap install lxd'
```
Choose the `dir` storage backend during `lxd init` and answer `yes` to "Would you like the LXD server to be available over the network?"
```bash
jlmkr exec mylxdjail bash -c 'lxd init &&
snap set lxd ui.enable=true &&
systemctl reload snap.lxd.daemon'
@@ -19,82 +31,52 @@ jlmkr exec mylxdjail bash -c 'lxd init &&
Then visit the LXD GUI in the browser at https://0.0.0.0:8443. To find out which IP address to use instead of 0.0.0.0, check the IP address of your jail with `jlmkr list`.
## Known Issues
### Instance creation failed
[LXD no longer has access to the LinuxContainers image server](https://discuss.linuxcontainers.org/t/important-notice-for-lxd-users-image-server/18479).
```
Failed getting remote image info: Failed getting image: The requested image couldn't be found for fingerprint "ubuntu/focal/desktop"
```
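A possible workaround (my suggestion, not verified in these notes): pull images from Canonical's `ubuntu:` remote, which LXD can still reach:
```bash
# List Ubuntu 22.04 images available from the ubuntu: remote
lxc image list ubuntu: 22.04

# Launch a container from that remote (instance name is a placeholder)
lxc launch ubuntu:22.04 mycontainer
```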
### SCALE Virtual Machines
Using LXD in the jail will cause the following error when starting a VM from the TrueNAS SCALE web GUI:
```
[EFAULT] internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied 2024-02-16T14:40:14.886658Z qemu-system-x86_64: -accel kvm: failed to initialize kvm: Permission denied
``` ```
A reboot will resolve the issue (until you start the LXD jail again).
### ZFS Issues
If you create a new dataset on your pool (e.g. `tank`) called `lxd` from the TrueNAS SCALE web GUI and tell LXD to use it during `lxd init`, then you will run into issues. Firstly, you'd have to run `apt-get install -y --no-install-recommends zfsutils-linux` inside the jail to install the ZFS userspace utils, and you'd have to add `--bind=/dev/zfs` to the `systemd_nspawn_user_args` in the jail config. By mounting `/dev/zfs` into this jail, **it will have total control of the storage on the host!**
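Concretely, the extra bind mount would look something like this in the jail config (a sketch based on the description above; the other `systemd_nspawn_user_args` are omitted):
```
systemd_nspawn_user_args=--network-bridge=br1
--bind=/dev/zfs
```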
But then SCALE doesn't seem to like the ZFS datasets created by LXD. I get the following errors when browsing the sub-datasets:
```
[EINVAL] legacy: path must be absolute
```
```
[EFAULT] Failed retreiving USER quotas for tank/lxd/virtual-machines
```
As long as you don't operate on these datasets in the SCALE GUI this may not be a real problem...
However, creating an LXD VM doesn't work with the ZFS storage backend (creating a container works though):
```
Failed creating instance from image: Could not locate a zvol for tank/lxd/images/1555b13f0e89bfcf516bd0090eee6f73a0db5f4d0d36c38cae94316de82bf817.block
```
Could this be the same issue as [Instance creation failed](#instance-creation-failed)?
## More info
Refer to the [Incus README](../incus/README.md) as a lot of it applies to LXD too.
## References


@@ -8,7 +8,8 @@ gpu_passthrough_nvidia=0
# Ensure to change eno1/br1 to the interface name you want to use
# You may want to add additional options here, e.g. bind mounts
# TODO: don't use --capability=all but specify only the required capabilities
# TODO: or add and use privileged flag?
systemd_nspawn_user_args=--network-bridge=br1
--resolv-conf=bind-host
--capability=all
--bind=/dev/fuse
@@ -29,8 +30,9 @@ pre_start_hook=#!/usr/bin/bash
# NOTE: this script will run in the host networking namespace and ignores
# all systemd_nspawn_user_args such as bind mounts
initial_setup=#!/usr/bin/bash
set -euo pipefail
# https://discuss.linuxcontainers.org/t/snap-inside-privileged-lxd-container/13691/8
ln -sf /bin/true /usr/local/bin/udevadm
# You generally will not need to change the options below
systemd_run_default_args=--property=KillMode=mixed
@@ -41,8 +43,7 @@ systemd_run_default_args=--property=KillMode=mixed
--property=TasksMax=infinity
--collect
--setenv=SYSTEMD_NSPAWN_LOCK=0
# TODO: add below if required:
# --property=DevicePolicy=auto

systemd_nspawn_default_args=--keep-unit


@@ -1,12 +1,14 @@
# Fedora Podman Jail Template
## Setup
Check out the [config](./config) template file. You may provide it when asked during `jlmkr create` or, if you have the template file stored on your NAS, you may provide it directly by running `jlmkr create mypodmanjail /mnt/tank/path/to/podman/config`.
## Rootless
### Disclaimer
**Experimental. Using podman in this setup hasn't been extensively tested.**
### Installation