<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>cicd &#8212; Jerry of the Week</title>
    <link>https://write.in0rdr.ch/tag:cicd</link>
    <description>ˈdʒɛri - Individual who sends life against the grain no matter the consequences</description>
    <pubDate>Tue, 28 Apr 2026 13:00:06 +0000</pubDate>
    <item>
      <title>Rootless Podman storage driver</title>
      <link>https://write.in0rdr.ch/rootless-podman-storage-driver</link>
      <description>&lt;![CDATA[I was racking my brain over a very weird Jenkins error lately. This post takes an in-depth look at the default storage driver (VFS) for rootless Podman.&#xA;&#xA;#podman #jenkins #cicd]]&gt;</description>
      <content:encoded><![CDATA[<p>I was racking my brain over a <a href="https://community.jenkins.io/t/interruptedexception-with-docker-worfklow-plugin-and-large-images/35550">very weird Jenkins error</a> lately. This post takes an in-depth look at the default storage driver (VFS) for rootless Podman.</p>

<p><a href="https://write.in0rdr.ch/tag:podman" class="hashtag"><span>#</span><span class="p-category">podman</span></a> <a href="https://write.in0rdr.ch/tag:jenksin" class="hashtag"><span>#</span><span class="p-category">jenksin</span></a> <a href="https://write.in0rdr.ch/tag:cicd" class="hashtag"><span>#</span><span class="p-category">cicd</span></a>
</p>

<h2 id="my-jenkins-setup-in-the-homelab">My Jenkins setup in the homelab</h2>

<p>Recently I wanted to set up a LaTeX pipeline in my Jenkins. During this task I noticed that certain images fail with a <a href="https://community.jenkins.io/t/interruptedexception-with-docker-worfklow-plugin-and-large-images/35550">particular error message</a> that, in hindsight, was not meaningful at all.</p>

<p>The <a href="https://github.com/jenkinsci/nomad-plugin">Nomad cloud plugin</a> for Jenkins starts Jenkins workers as Nomad jobs. These jobs are started from an image that includes the <a href="https://github.com/jenkinsci/remoting/tree/master">Jenkins inbound agent</a>. This agent connects to the Jenkins server.</p>

<p>Alongside the Jenkins agent, I <a href="https://code.in0rdr.ch/nomad/file/docker/docker-jenkins-inbound-agent/Dockerfile.html">also install Buildah and Docker</a> on that image, because I use these Jenkins workers to build container images.</p>

<p>Besides building images (with Buildah), I also use the <a href="https://github.com/jenkinsci/docker-workflow-plugin/tree/master">Docker workflow</a> to run arbitrary container images in my Jenkins pipelines.</p>

<p>The agent image <a href="https://code.in0rdr.ch/nomad/file/hcl/default/jenkins/templates/jenkins.yaml.tmpl.html">mounts the Podman socket of the Nomad node</a>. It does not mount the socket of the “root” Podman process, only the rootless socket of a particular <code>jenkins</code> user on the nodes.</p>

<p>I reserved this user on the nodes (my Raspberry Pis) for this particular purpose.</p>

<p>In that sense, whenever I run an image with the Docker workflow plugin in my Jenkins pipeline, it is started as a rootless Podman container under that <code>jenkins</code> user on the Nomad node where the pipeline worker is scheduled.</p>
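
<p>For reference, here is a minimal sketch of how such a rootless Podman API socket is typically provided for a dedicated user. The unit name and socket path are the upstream systemd defaults, not something specific to this setup.</p>

<pre><code class="language-bash"># As root: keep the jenkins user&#39;s systemd instance alive without an interactive login
loginctl enable-linger jenkins

# As the jenkins user: enable the rootless Podman API socket (upstream unit name)
systemctl --user enable --now podman.socket

# The socket that gets mounted into the agent job then lives at:
ls /run/user/$(id -u jenkins)/podman/podman.sock
</code></pre>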

<h2 id="vfs-default-storage-driver-for-rootless-podman">VFS default storage driver for rootless Podman</h2>

<p>I only noticed today that even though I have a <a href="https://docs.podman.io/en/latest/markdown/podman.1.html#note-unsupported-file-systems-in-rootless-mode">more recent kernel than 5.12.9</a> (I run 6.12.41), the rootless Podman configuration does not check the kernel version but simply <a href="https://docs.podman.io/en/latest/markdown/podman.1.html#storage-driver-value">defaults to the VFS storage driver</a>.</p>

<blockquote><p>The default storage driver for UID 0 is configured in containers-storage.conf(5) (in rootless mode), and is vfs for non-root users when fuse-overlayfs is not available.</p></blockquote>

<p>I recently uninstalled fuse-overlayfs from my Pis, because I run a kernel newer than 5.12.9 that supports <a href="https://www.redhat.com/en/blog/podman-rootless-overlay">native overlayfs</a>.</p>
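
<p>To double-check what a rootless user actually ended up with, plain Podman queries are enough. A small sketch (nothing here is specific to my setup):</p>

<pre><code class="language-bash"># Kernel version (rootless native overlay needs 5.12.9 or newer):
uname -r
# Storage driver the rootless user is actually using (vfs or overlay):
podman info --format &#39;{{.Store.GraphDriverName}}&#39;
# If fuse-overlayfs were still in play, it would show up as a mount_program here:
podman info --format &#39;{{.Store.GraphOptions}}&#39;
</code></pre>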

<h2 id="improve-performance-for-docker-workflow-in-jenkins">Improve performance for Docker workflow in Jenkins</h2>

<p>In order to <a href="https://github.com/containers/podman/blob/main/docs/tutorials/performance.md#choosing-a-storage-driver">achieve better performance</a> and fix <a href="https://community.jenkins.io/t/interruptedexception-with-docker-worfklow-plugin-and-large-images/35550">the weird and unclear error</a> in my Jenkins pipelines that use the Docker workflow, I had to <a href="https://docs.podman.io/en/latest/markdown/podman-system-reset.1.html#switching-rootless-user-from-vfs-driver-to-overlay-with-fuse-overlayfs">migrate</a> to the overlay storage driver for the rootless Podman socket (of the <code>jenkins</code> user).</p>

<p>As <code>jenkins</code> user on the nodes, clean up the old vfs storage:</p>

<pre><code class="language-bash">$ podman system reset
</code></pre>

<p>I also had to help a bit (as root) and remove the leftover storage directory:</p>

<pre><code class="language-bash"># rm -rf /home/jenkins/.local/share/containers/
</code></pre>

<p>Then change the storage driver setting (I changed it globally in <code>/etc/containers/storage.conf</code> because my root Podman process already runs with the overlay storage driver anyway, so it&#39;s basically a global default now).</p>

<pre><code class="language-bash"># Configure overlay storage for Podman
cat &lt;&lt;EOF &gt; /etc/containers/storage.conf
[storage]
driver=&#34;overlay&#34;
runroot = &#34;/var/run/containers/storage&#34;
graphroot = &#34;/var/lib/containers/storage&#34;
EOF
</code></pre>

<p>This <strong>explicit setting</strong> now also holds for the rootless Podman socket. I was not aware of that. It can be checked with the <code>jenkins</code> user (rootless Podman):</p>

<pre><code class="language-bash">$ podman info | grep graph
  graphDriverName: overlay
</code></pre>
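
<p>For completeness: if you would rather not touch the global file, containers-storage.conf(5) also reads a per-user configuration for rootless Podman. A minimal sketch, assuming the standard lookup path under the user&#39;s home:</p>

<pre><code class="language-bash"># Per-user alternative to the global /etc/containers/storage.conf (run as root)
mkdir -p ~jenkins/.config/containers
cat &lt;&lt;EOF &gt; ~jenkins/.config/containers/storage.conf
[storage]
driver = &#34;overlay&#34;
EOF
chown -R jenkins:jenkins ~jenkins/.config/containers
</code></pre>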

<p>Really glad I got this fixed.</p>

<p>Let me know your Podman/Buildah/Jenkins stories on the Fediverse or via chat.</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/rootless-podman-storage-driver</guid>
      <pubDate>Sat, 06 Sep 2025 19:33:21 +0000</pubDate>
    </item>
    <item>
      <title>Jenkins works</title>
      <link>https://write.in0rdr.ch/jenkins-works</link>
      <description>&lt;![CDATA[I was dabbling around with Jenkins in my Nomad cluster lately. In this post I quickly share my experiences and what I learned along the way.&#xA;&#xA;#cicd #coding #jenkins #nomad]]&gt;</description>
      <content:encoded><![CDATA[<p>I was dabbling around with Jenkins in my Nomad cluster lately. In this post I quickly share my experiences and what I learned along the way.</p>

<p><a href="https://write.in0rdr.ch/tag:cicd" class="hashtag"><span>#</span><span class="p-category">cicd</span></a> <a href="https://write.in0rdr.ch/tag:coding" class="hashtag"><span>#</span><span class="p-category">coding</span></a> <a href="https://write.in0rdr.ch/tag:jenkins" class="hashtag"><span>#</span><span class="p-category">jenkins</span></a> <a href="https://write.in0rdr.ch/tag:nomad" class="hashtag"><span>#</span><span class="p-category">nomad</span></a>
</p>

<p>You may ask why I spend time setting up good old Jenkins while everyone else seems to jump to newer CI systems (GitLab, Forgejo, etc.). Well, as said, it&#39;s still good old Jenkins, and I assume it will be around for some time to come.</p>

<p>To run Jenkins agents on a Nomad cluster I followed Ola Ogunsegha&#39;s instructions and example code here:</p>
<ul><li><a href="https://faun.pub/jenkins-build-agents-on-nomad-workers-626b0df4fc57">https://faun.pub/jenkins-build-agents-on-nomad-workers-626b0df4fc57</a></li>
<li><a href="https://github.com/GastroGee/jenkins-nomad/blob/main/jenkins-controller/nomad.yaml">https://github.com/GastroGee/jenkins-nomad/blob/main/jenkins-controller/nomad.yaml</a></li></ul>

<p>It involves installing the <a href="https://plugins.jenkins.io/nomad">Nomad plugin for Jenkins</a> and configuring this “Nomad cloud” (that&#39;s what the integration is called in Jenkins) with a template for the Jenkins agents.</p>

<p>Obviously, I wanted to integrate Jenkins with my <a href="https://code.in0rdr.ch">Git repos</a>. The most straightforward way seemed to be to use <a href="https://plugins.jenkins.io/git/#plugin-content-push-notification-from-repository">post-receive hooks</a> to nudge Jenkins on every push. This has worked fabulously so far.</p>
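
<p>Such a hook boils down to a one-liner. A minimal sketch (the Jenkins and repository URLs are placeholders; the notifyCommit endpoint is provided by the Jenkins Git plugin):</p>

<pre><code class="language-bash">#!/bin/sh
# hooks/post-receive on the bare repository (sketch, URLs are placeholders):
# ping the Git plugin&#39;s notifyCommit endpoint so Jenkins checks this repo right away.
curl -fsS &#34;https://jenkins.example.org/git/notifyCommit?url=https://code.example.org/myrepo.git&#34; &gt;/dev/null || true
</code></pre>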

<p>Even though the runners are spawned as Nomad jobs, I still wanted to run other Docker containers in the pipeline. This is where it got very confusing for me, because of the different Docker plugins for Jenkins. Most notably, there exist at least these two plugins:</p>
<ul><li><a href="https://plugins.jenkins.io/docker-workflow">docker-workflow</a>, runs the Docker container from the Jenkins agent</li>
<li><a href="https://plugins.jenkins.io/docker-plugin">docker-plugin</a>, runs Jenkins agents as Docker containers</li></ul>

<p>I decided to go with the former “docker-workflow” plugin because I already deployed the Jenkins agents as Nomad jobs. <code>docker-workflow</code> can run arbitrary containers from any Docker image, whereas the <code>docker-plugin</code> needs to be based on the <a href="https://hub.docker.com/r/jenkins/inbound-agent">Jenkins <code>inbound-agent</code> image</a> to be able to connect to the Jenkins server.</p>

<p>I wanted the containers that are launched from Jenkins to be contained in another user&#39;s namespace. There is the option to rebuild the <code>inbound-agent</code> with user-supplied attributes for uid/gid, but since I wanted to modify the image anyway, I simply forked and <a href="https://github.com/jenkinsci/docker-agent/compare/master...in0rdr:docker-agent:debug/podman_x86_64?diff=unified">built my own agent</a> images, mostly inspired by the <a href="https://github.com/jenkinsci/docker-inbound-agents/blob/master/docker/Dockerfile">example for running Docker</a> inside the agent.</p>
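
<p>Purely as an illustration, such a rebuild could look roughly like the following; the build-arg names, registry and tag are assumptions, not the exact ones from my fork:</p>

<pre><code class="language-bash"># Sketch: rebuild the inbound agent with a non-default uid/gid
# (assumes a Dockerfile that exposes uid/gid build args; names and tags are placeholders)
buildah build \
  --build-arg uid=2001 --build-arg gid=2001 \
  -t registry.example.org/jenkins-inbound-agent:jenkins-user .
</code></pre>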

<p>At this stage, my tooling was sophisticated enough that I could run a simple <a href="https://gitleaks.io/">gitleaks</a> container on each push to scan for secrets. I&#39;m always afraid of publishing secrets accidentally (it has happened to me before).</p>
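
<p>The scan step itself is a single container run. A sketch (the image reference and flags are the upstream gitleaks defaults, not copied from my pipeline):</p>

<pre><code class="language-bash"># Sketch: scan the checked-out workspace for secrets with the upstream gitleaks image
podman run --rm -v &#34;$PWD:/repo:ro&#34; ghcr.io/gitleaks/gitleaks:latest \
  detect --source /repo --no-banner
</code></pre>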

<p>Furthermore, I needed to establish some kind of build process, so I could also build images with Jenkins and push them directly to my local image registry. After some reading, I discarded the idea of using Kaniko because it still requires nodes with the native architecture of the respective build for <a href="https://github.com/GoogleContainerTools/kaniko?tab=readme-ov-file#creating-multi-arch-container-manifests-using-kaniko-and-manifest-tool">multi-architecture builds</a>.</p>

<p>Therefore, I followed Red Hat <a href="https://developers.redhat.com/blog/2019/08/14/best-practices-for-running-buildah-in-a-container">best practices</a> to integrate Buildah, which is also the tool I use to build multi-arch container images locally on my laptop.</p>
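
<p>In essence such a build is a manifest list plus a multi-platform build, roughly like the sketch below (registry and image names are placeholders, and cross-building assumes qemu-user-static/binfmt is available for the non-native architecture):</p>

<pre><code class="language-bash"># Sketch: multi-arch build and push with Buildah (registry/image names are placeholders)
buildah manifest create registry.example.org/myimage:latest
buildah build --platform linux/amd64,linux/arm64 \
  --manifest registry.example.org/myimage:latest .
buildah manifest push --all registry.example.org/myimage:latest \
  docker://registry.example.org/myimage:latest
</code></pre>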

<p>If you are interested in more example code, here are links to some of the key components:</p>
<ul><li><a href="https://code.in0rdr.ch/nomad/file/hcl/default/jenkins/templates/jenkins.yaml.tmpl.html">jenkins.yaml.tmpl.html</a> Jenkins infrastructure code</li>
<li><a href="https://code.in0rdr.ch/nomad/file/hcl/default/jenkins/jenkins.nomad.html">jenkins.nomad.html</a> Jenkins nomad job</li>
<li><a href="https://code.in0rdr.ch/nomad/file/docker/docker-jenkins-inbound-agent/Dockerfile.html">docker-agent</a> Modified Jenkins inbound-agent with Docker and Buildah</li></ul>

<p>I&#39;m not finished playing around, because I still haven&#39;t fully figured out some things (e.g., how to properly use <a href="https://community.jenkins.io/t/usage-of-jlink-in-jenkinsci-docker-agent/15456">jlink in multi-arch builds</a>) and because I&#39;m also curious <a href="https://discuss.hashicorp.com/t/jenkins-nomad-plugin/67020">how other members of the community use Jenkins on Nomad</a>.</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/jenkins-works</guid>
      <pubDate>Sun, 09 Jun 2024 19:32:05 +0000</pubDate>
    </item>
  </channel>
</rss>