Rootless Podman storage driver

I have been racking my brain over a very weird Jenkins error lately. This post goes in depth on the default storage driver (VFS) for rootless Podman.

#podman #jenkins #cicd

My Jenkins setup in the homelab

Recently I wanted to set up a LaTeX pipeline in my Jenkins. During this task I noticed that certain images fail with a particular error message that was not meaningful at all (in hindsight).

The Nomad cloud plugin for Jenkins starts Jenkins workers as Nomad jobs. These jobs are started from an image that includes the Jenkins inbound agent. This agent connects to the Jenkins server.

Alongside the Jenkins agent, I also install Buildah and Docker on that image, because I use these Jenkins workers to build container images.
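As a sketch of how such an image can be assembled with Buildah itself (the base image tag and package names are assumptions and differ between distributions):

$ ctr=$(buildah from docker.io/jenkins/inbound-agent:latest)
$ buildah run --user root $ctr -- sh -c 'apt-get update && apt-get install -y buildah docker.io'
$ buildah commit $ctr localhost/jenkins-agent-containers

The docker.io package only matters for its CLI here; the daemon side is provided by the mounted Podman socket described below.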

Besides building images (with Buildah), I also use the Docker workflow to run arbitrary container images in my Jenkins pipelines.

The agent image mounts the Podman socket of the Nomad node. It does not mount the socket of the β€œroot” Podman process, only the rootless socket of a particular jenkins user on the nodes.
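For reference, a rootless Podman API socket for such a dedicated user is typically enabled like this (a sketch; run the first command as root, the second as the jenkins user):

# loginctl enable-linger jenkins

$ systemctl --user enable --now podman.socket

Lingering lets the systemd user services of jenkins run without an active login session; the socket then lives at /run/user/<uid>/podman/podman.sock.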

I reserved this user on the nodes (my Raspberry Pis) for this particular purpose.

In that sense, whenever I run an image with the Docker workflow plugin in my Jenkins pipeline, it is started as a rootless Podman container under the jenkins user on the Nomad node where the pipeline worker is scheduled.
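Inside the worker, the docker CLI is simply pointed at that mounted socket via DOCKER_HOST. A quick way to confirm it really talks to rootless Podman (the UID 1001 in the socket path is a placeholder for whatever id the jenkins user has):

$ export DOCKER_HOST=unix:///run/user/1001/podman/podman.sock
$ docker version

With Podman behind the socket, the server half of the output reports a Podman engine rather than a Docker one.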

VFS default storage driver for rootless Podman

I only noticed today that even though I run a kernel more recent than 5.12.9 (6.12.41), the rootless Podman configuration does not check the kernel version but simply defaults to the VFS storage driver.

The default storage driver for UID 0 is configured in containers-storage.conf(5) (/usr/share/containers/storage.conf in rootless mode), and is vfs for non-root users when fuse-overlayfs is not available.
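The effective driver is easy to check per user; for the jenkins user on my nodes this printed vfs before the fix:

$ podman info --format '{{.Store.GraphDriverName}}'
vfs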

I recently uninstalled fuse-overlayfs from my Pis, because I run a recent kernel (newer than 5.12.9) that supports native rootless overlayfs.
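Two quick sanity checks that the kernel side is actually ready for native overlayfs (nothing Podman-specific; the filesystem only shows up once the overlay module is loaded or built in):

$ uname -r
$ grep overlay /proc/filesystems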

Improve performance for the Docker workflow in Jenkins

In order to achieve better performance and fix the weird and unclear error in my Jenkins pipelines that use the Docker workflow, I had to migrate the rootless Podman storage (of the jenkins user) to the overlay storage driver.

As the jenkins user on the nodes, clean up the old vfs storage:

$ podman system reset

I also had to help a bit (as root):

# rm -rf /home/jenkins/.local/share/containers/

Then change the storage driver setting. I changed it globally in /etc/containers/storage.conf, because my root Podman process already runs with the overlay storage driver anyway, so it is basically a global default now.

# Configure overlay storage for Podman
cat <<EOF > /etc/containers/storage.conf
[storage]
driver = "overlay"
runroot = "/var/run/containers/storage"
graphroot = "/var/lib/containers/storage"
EOF
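As an aside: if you would rather not touch the global default, the same setting works per user. A hypothetical variant for just the jenkins user, since rootless Podman reads this file with precedence over /etc/containers/storage.conf:

$ mkdir -p ~/.config/containers
$ cat <<EOF > ~/.config/containers/storage.conf
[storage]
driver = "overlay"
EOF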

This explicit global setting now also holds for the rootless Podman socket (unless a user overrides it, as sketched above). I was not aware of that. It can be checked as the jenkins user (rootless Podman):

$ podman info | grep graph
  graphDriverName: overlay
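One caveat: if the rootless socket is already being served by a running systemd user service, it may still hold the old configuration, so I would restart it once after the migration (as the jenkins user):

$ systemctl --user restart podman.socket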

Really glad I got this fixed.

Let me know your Podman/Buildah/Jenkins stories on the Fediverse or via chat.

πŸ›œ RSS | 🐘 Fediverse | πŸ’¬ XMPP