<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Jerry of the Week</title>
    <link>https://write.in0rdr.ch/</link>
    <description>ˈdʒɛri - Individual who sends life against the grain no matter the consequences</description>
    <pubDate>Wed, 22 Apr 2026 12:31:30 +0000</pubDate>
    <item>
      <title>Parallel container builds with Jenkins scripted pipeline</title>
      <link>https://write.in0rdr.ch/parallel-container-builds-with-jenkins-scripted-pipeline</link>
      <description>&lt;![CDATA[Created a new pipeline for building my container images in the homelab. A dynamic choice populates a &#34;matrix&#34; that runs container build stages in parallel on runners with different architectures.&#xA;&#xA;#jenkins #containers #buildah&#xA;!--more--&#xA;&#xA;I was writing about how and why I use Jenkins last year.&#xA;&#xA;The basic idea behind the pipeline here is to build container images for multiple platforms/architectures (arm64 &amp; amd64 mainly).&#xA;&#xA;An advantage of using the runners native architecture (arm64/amd64) for the builds is that I don&#39;t need to use the QEMU emulation with buildah.&#xA;&#xA;Jenkinsfile with static nodes/stages &#xA;My previous Jenkinsfile (starting point) was very &#34;static&#34;. Static in a way, that the nodes were selected sequentially (first build on an arm64 node, then on a amd64 node) and I created two different tags for the different architectures:&#xA;@Library(&#39;in0rdr-jenkins-lib@master&#39;) &#xA;&#xA;def buildahbud = new BuildahBud(this)&#xA;def buildahpush = new BuildahPush(this)&#xA;def buildahmanifest = new BuildahManifest(this)&#xA;&#xA;node(&#39;podman&amp;&amp;arm64&#39;){&#xA;  checkout scm&#xA;&#xA;  // build with image context and name&#xA;  buildahbud.execute([:], &#34;docker/docker-snac&#34;, &#34;snac&#34;, &#34;2.90-arm64&#34;&#xA;    &#39;Dockerfile&#39;, &#39;arm64/v8&#39;)&#xA;  buildahpush.execute(&#34;snac&#34;, &#34;2.90-arm64&#34;)&#xA;}&#xA;&#xA;node(&#39;podman&amp;&amp;amd64&#39;){&#xA;  checkout scm&#xA;  buildahbud.execute([:], &#34;docker/docker-snac&#34;, &#34;snac&#34;, &#34;2.90-amd64&#34;,&#xA;    &#39;Dockerfile&#39;, &#39;amd64&#39;)&#xA;  buildahmanifest.create(&#39;haproxy.lan:5000/snac:2.90&#39;, [&#xA;    &#39;haproxy.lan:5000/snac:2.90-arm64&#39;,&#xA;    &#39;haproxy.lan:5000/snac:2.90-amd64&#39;&#xA;  ])&#xA;}&#xA;&#xA;You can find my Jenkins libraries here:&#xA;https://code.in0rdr.ch/jenkins-lib/files.html&#xA;&#xA;Pulling a 
different tag depending on the architecture is cumbersome. By creating a Manifest ([buildah manifest create](https://github.com/containers/buildah/blob/main/docs/buildah-manifest-create.1.md&#xA;)) I can reuse the same tag across multiple machine architectures (e.g., I can use haproxy.lan:5000/snac:2.90 on amd and arm machines):&#xA;buildah manifest inspect haproxy.lan:5000/snac:2.90&#xA;{&#xA;    &#34;schemaVersion&#34;: 2,&#xA;    &#34;mediaType&#34;: &#34;application/vnd.oci.image.index.v1+json&#34;,&#xA;    &#34;manifests&#34;: [&#xA;        {&#xA;            &#34;mediaType&#34;: &#34;application/vnd.oci.image.manifest.v1+json&#34;,&#xA;            &#34;digest&#34;: &#34;sha256:7f67daa2193f2ef9a84c3bcaa14c73d429165631cbbc6c30faf74ef69ac07d4a&#34;,&#xA;            &#34;size&#34;: 1207,&#xA;            &#34;platform&#34;: {&#xA;                &#34;architecture&#34;: &#34;arm64&#34;,&#xA;                &#34;os&#34;: &#34;linux&#34;,&#xA;                &#34;variant&#34;: &#34;v8&#34;&#xA;            }&#xA;        },&#xA;        {&#xA;            &#34;mediaType&#34;: &#34;application/vnd.oci.image.manifest.v1+json&#34;,&#xA;            &#34;digest&#34;: &#34;sha256:85b79a9014eed10f8b12cddd9f6fcf9a0e56cdf792b96cf4cde52c9c8f61c4f7&#34;,&#xA;            &#34;size&#34;: 1204,&#xA;            &#34;platform&#34;: {&#xA;                &#34;architecture&#34;: &#34;amd64&#34;,&#xA;                &#34;os&#34;: &#34;linux&#34;&#xA;            }&#xA;        }&#xA;    ]&#xA;}&#xA;&#xA;Dynamic choice, run stages on nodes in parallel&#xA;&#xA;I decided to change my static Jenkinsfile and spice it up with the dynamic choices based on the &#34;input matrix&#34;. The &#34;platform&#34; axis only holds the &#34;podman&#34; platform right now.&#xA;https://code.in0rdr.ch/jenkins-lib/file/src/BuildahParallelBuild.groovy.html&#xA;&#xA;buildahparallelbuild-choice.jpg&#xA;&#xA;For this, I took some inspiration from an older Jenkins blog post from 2019. 
The code example for a scripted pipeline with &#34;dynamic choices&#34; (choices based on predefined matrix axes) still works pretty fine.&#xA;&#xA;My inputs can now be kept rather simple, they simply describe which Dockerfile I want to build, from which path in the repo, with certain name &amp; tag (optionally some build arguments):&#xA;&#xA;@Library(&#39;in0rdr-jenkins-lib@master&#39;) &#xA;&#xA;def buildahParallelBuild = new BuildahParallelBuild(this)&#xA;&#xA;buildahParallelBuild.build(&#34;snac&#34;, &#34;2.90&#34;, &#34;docker/docker-snac&#34;)&#xA;&#xA;// Other example inputs:&#xA;//buildahParallelBuild.build(&#34;texlive&#34;, &#34;latest&#34;, &#34;docker/docker-texlive&#34;)&#xA;//buildahParallelBuild.build(&#34;updatecli&#34;, &#34;v0.114.0&#34;, &#34;docker/docker-updatecli&#34;)&#xA;//buildahParallelBuild.build(&#34;jenkins-inbound-agent&#34;, &#34;3355.v388858a47b33&#34;, &#34;docker/docker-jenkins-inbound-agent&#34;, [arg1: &#34;test&#34;, arg2: &#34;test2&#34;])&#xA;&#xA;buildahparallelbuild-pipeline.jpg&#xA;&#xA;It doesn&#39;t matter on which architecture I create the manifest.&#xA;&#xA;I&#39;m still a huge fan of the extensibility of these Jenkins libraries. By simply reusing my existing libraries for Buildah bud (class BuildahBud) / push (class BuildahPush) / manifest creation (class BuildahManifest) from my already working &#34;static&#34; use case, I could extend quickly with the &#34;parallel build&#34; functionality (in a new/separate class which simply imports the existing functionality) 🤗.&#xA;&#xA;Later I might figure out more about @NonCPS and NonCPS best practices.. 
hit me up if you would like to educate me 🤔&#xA;&#xA;Thanks for reading&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>Created a new pipeline for building my container images in the homelab. A <a href="https://www.jenkins.io/blog/2019/12/02/matrix-building-with-scripted-pipeline/#full-pipeline-example-with-dynamic-choices">dynamic choice</a> populates a “matrix” that runs container build stages in parallel on runners with different architectures.</p>

<p><a href="https://write.in0rdr.ch/tag:jenkins" class="hashtag"><span>#</span><span class="p-category">jenkins</span></a> <a href="https://write.in0rdr.ch/tag:containers" class="hashtag"><span>#</span><span class="p-category">containers</span></a> <a href="https://write.in0rdr.ch/tag:buildah" class="hashtag"><span>#</span><span class="p-category">buildah</span></a>
</p>

<p>I was writing about <a href="https://write.in0rdr.ch/jenkins-works">how and why I use Jenkins last year</a>.</p>

<p>The basic idea behind the pipeline here is to build container images for multiple platforms/architectures (arm64 &amp; amd64 mainly).</p>

<p>An advantage of using the runner&#39;s native architecture (arm64/amd64) for the builds is that I don&#39;t need to use the <a href="https://github.com/containers/buildah/discussions/4736">QEMU emulation with buildah</a>.</p>

<h2 id="jenkinsfile-with-static-nodes-stages">Jenkinsfile with static nodes/stages</h2>

<p>My previous Jenkinsfile (my starting point) was very “static”: the nodes were used sequentially (first building on an arm64 node, then on an amd64 node), and I created two different tags for the two architectures:</p>

<pre><code class="language-groovy">@Library(&#39;in0rdr-jenkins-lib@master&#39;) _

def buildahbud = new BuildahBud(this)
def buildahpush = new BuildahPush(this)
def buildahmanifest = new BuildahManifest(this)

node(&#39;podman&amp;&amp;arm64&#39;){
  checkout scm

  // build with image context and name
  buildahbud.execute([:], &#34;docker/docker-snac&#34;, &#34;snac&#34;, &#34;2.90-arm64&#34;,
    &#39;Dockerfile&#39;, &#39;arm64/v8&#39;)
  buildahpush.execute(&#34;snac&#34;, &#34;2.90-arm64&#34;)
}

node(&#39;podman&amp;&amp;amd64&#39;){
  checkout scm
  buildahbud.execute([:], &#34;docker/docker-snac&#34;, &#34;snac&#34;, &#34;2.90-amd64&#34;,
    &#39;Dockerfile&#39;, &#39;amd64&#39;)
  buildahmanifest.create(&#39;haproxy.lan:5000/snac:2.90&#39;, [
    &#39;haproxy.lan:5000/snac:2.90-arm64&#39;,
    &#39;haproxy.lan:5000/snac:2.90-amd64&#39;
  ])
}
</code></pre>

<p>You can find my Jenkins libraries here: <a href="https://code.in0rdr.ch/jenkins-lib/files.html">https://code.in0rdr.ch/jenkins-lib/files.html</a></p>

<p>Pulling a different tag depending on the architecture is cumbersome. By creating a manifest list (<a href="https://github.com/containers/buildah/blob/main/docs/buildah-manifest-create.1.md"><code>buildah manifest create</code></a>), I can reuse the same tag across multiple machine architectures (e.g., I can use <code>haproxy.lan:5000/snac:2.90</code> on both amd64 and arm64 machines):</p>

<pre><code class="language-bash">buildah manifest inspect haproxy.lan:5000/snac:2.90
</code></pre>

<pre><code class="language-json">{
    &#34;schemaVersion&#34;: 2,
    &#34;mediaType&#34;: &#34;application/vnd.oci.image.index.v1+json&#34;,
    &#34;manifests&#34;: [
        {
            &#34;mediaType&#34;: &#34;application/vnd.oci.image.manifest.v1+json&#34;,
            &#34;digest&#34;: &#34;sha256:7f67daa2193f2ef9a84c3bcaa14c73d429165631cbbc6c30faf74ef69ac07d4a&#34;,
            &#34;size&#34;: 1207,
            &#34;platform&#34;: {
                &#34;architecture&#34;: &#34;arm64&#34;,
                &#34;os&#34;: &#34;linux&#34;,
                &#34;variant&#34;: &#34;v8&#34;
            }
        },
        {
            &#34;mediaType&#34;: &#34;application/vnd.oci.image.manifest.v1+json&#34;,
            &#34;digest&#34;: &#34;sha256:85b79a9014eed10f8b12cddd9f6fcf9a0e56cdf792b96cf4cde52c9c8f61c4f7&#34;,
            &#34;size&#34;: 1204,
            &#34;platform&#34;: {
                &#34;architecture&#34;: &#34;amd64&#34;,
                &#34;os&#34;: &#34;linux&#34;
            }
        }
    ]
}
</code></pre>
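<p>Under the hood, a manifest list like the one inspected above can be created and pushed with plain buildah commands. A minimal sketch of the idea (the exact commands and flags my <code>BuildahManifest</code> class uses may differ):</p>

<pre><code class="language-bash"># create the manifest list and add the per-architecture images
buildah manifest create haproxy.lan:5000/snac:2.90 \
  haproxy.lan:5000/snac:2.90-arm64 \
  haproxy.lan:5000/snac:2.90-amd64

# push the manifest list together with all referenced images
buildah manifest push --all haproxy.lan:5000/snac:2.90 \
  docker://haproxy.lan:5000/snac:2.90
</code></pre>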

<h2 id="dynamic-choice-run-stages-on-nodes-in-parallel">Dynamic choice, run stages on nodes in parallel</h2>

<p>I decided to change my static Jenkinsfile and spice it up with dynamic choices based on an “input matrix”. The “platform” axis only holds the “podman” platform right now: <a href="https://code.in0rdr.ch/jenkins-lib/file/src/BuildahParallelBuild.groovy.html">https://code.in0rdr.ch/jenkins-lib/file/src/BuildahParallelBuild.groovy.html</a></p>

<p><img src="https://code.in0rdr.ch/pub/blog/buildahparallelbuild-choice.jpg" alt="buildahparallelbuild-choice.jpg"></p>

<p>For this, I took some inspiration from an older <a href="https://www.jenkins.io/blog/2019/12/02/matrix-building-with-scripted-pipeline/#full-pipeline-example-with-dynamic-choices">Jenkins blog post from 2019</a>. The code example for a scripted pipeline with “dynamic choices” (choices based on predefined matrix axes) still works fine.</p>
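<p>The pattern from that post boils down to building a map of stage closures from the axis values and handing it to the <code>parallel</code> step. A condensed sketch of the idea (illustrative only, not my actual <code>BuildahParallelBuild</code> class):</p>

<pre><code class="language-groovy">// axes of the "matrix"; the platform axis only holds "podman" for now
def platforms = ['podman']
def architectures = ['arm64', 'amd64']

// build one parallel branch per platform/architecture combination
def branches = [:]
for (p in platforms) {
  for (a in architectures) {
    def platform = p // capture loop variables for the closure
    def arch = a
    branches["${platform}-${arch}"] = {
      node("${platform}&amp;&amp;${arch}") {
        checkout scm
        // run the architecture-specific build steps here,
        // e.g. the BuildahBud/BuildahPush calls from above
      }
    }
  }
}
parallel branches
</code></pre>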

<p>My inputs can now be kept rather simple: they describe which Dockerfile I want to build, from which path in the repo, and with which name &amp; tag (optionally with some build arguments):</p>

<pre><code class="language-groovy">@Library(&#39;in0rdr-jenkins-lib@master&#39;) _

def buildahParallelBuild = new BuildahParallelBuild(this)

buildahParallelBuild.build(&#34;snac&#34;, &#34;2.90&#34;, &#34;docker/docker-snac&#34;)

// Other example inputs:
//buildahParallelBuild.build(&#34;texlive&#34;, &#34;latest&#34;, &#34;docker/docker-texlive&#34;)
//buildahParallelBuild.build(&#34;updatecli&#34;, &#34;v0.114.0&#34;, &#34;docker/docker-updatecli&#34;)
//buildahParallelBuild.build(&#34;jenkins-inbound-agent&#34;, &#34;3355.v388858a_47b_33&#34;, &#34;docker/docker-jenkins-inbound-agent&#34;, [arg1: &#34;test&#34;, arg2: &#34;test2&#34;])
</code></pre>

<p><img src="https://code.in0rdr.ch/pub/blog/buildahparallelbuild-pipeline.jpg" alt="buildahparallelbuild-pipeline.jpg"></p>

<p>It doesn&#39;t matter on which architecture I create the manifest.</p>

<p>I&#39;m still a huge fan of the extensibility of these Jenkins libraries. By reusing my existing libraries for Buildah bud (class <code>BuildahBud</code>), push (class <code>BuildahPush</code>), and manifest creation (class <code>BuildahManifest</code>) from my already working “static” use case, I could quickly add the “parallel build” functionality (in a new, separate class that simply imports the existing functionality) 🤗.</p>

<p>Later I might figure out more about <a href="https://www.jenkins.io/doc/book/pipeline/cps-method-mismatches/"><code>@NonCPS</code></a> and <a href="https://www.jenkins.io/doc/book/pipeline/pipeline-best-practices/#using-noncps"><code>NonCPS</code> best practices</a>.. hit me up if you would like to educate me 🤔</p>

<p>Thanks for reading</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/parallel-container-builds-with-jenkins-scripted-pipeline</guid>
      <pubDate>Wed, 18 Mar 2026 19:48:12 +0000</pubDate>
    </item>
    <item>
      <title>AppImages with Open Build Service (OBS)</title>
      <link>https://write.in0rdr.ch/appimages-with-open-build-service-obs</link>
      <description>&lt;![CDATA[How I built an AppImage for my hobby project using the Open Build Service (OBS).&#xA;&#xA;#linux #obs #appimage&#xA;!--more--&#xA;&#xA;Project setup from Template&#xA;&#xA;Start from the button &#34;New Image&#34; in the UI.&#xA;&#xA;obs-new-image.png&#xA;&#xA;This creates an entry in the &#34;Meta&#34; Settings of the Project. I first tried to follow the instructions by only referencing project=&#34;OBS:AppImage&#34;, but this will not work. The specific OpenSuse build version (project=&#34;OBS:AppImage:Templates:Leap:15.2&#34;) is required. The button &#34;New Image&#34; in the UI will produce a new &#34;Project&#34; with the proper &#34;Meta&#34; specification which you can copy/paste to the desired project (in my case &#34;home&#34;).&#xA;&#xA;Adjust the Meta of the project to use the Template for x8664 and aarch64 AppImages:&#xA;&#xA;  repository name=&#34;AppImage&#34;&#xA;    path project=&#34;OBS:AppImage:Templates:Leap:15.2&#34; repository=&#34;AppImage&#34;/&#xA;    archx8664/arch&#xA;    archaarch64/arch&#xA;  /repository&#xA;&#xA;appimage.yml&#xA;&#xA;Start crafting the appimage.yml file. The &#34;most simple example&#34; was useful as template.&#xA;&#xA;Builds from source can fetch from Git or from archives (e.g., .tar.gz). Because I publish the application in 2 flavors (&#34;nightly&#34; build and stable release), I built the &#34;nightly&#34; build from the latest commit from Git (master branch) and the &#34;stable&#34; release from the tar.gz sources.&#xA;&#xA;I needed to make install the application to DESTDIR $BUILDAPPDIR for it to be available in the resulting AppImage (see difference between source and build dir).&#xA;&#xA;.desktop file&#xA;&#xA;.desktop file and app icon are required in the build directory of the AppDir specification. 
I did not know that the .desktop file Exec directive does not work with absolute paths 😲.&#xA;&#xA;Also something I learned about .desktop files: &#34;The Icon= entry SHOULD NOT contain the file extension, the actual filename of the file, however, SHOULD carry the extension.&#34; (see AppDir specs).&#xA;&#xA;During few rebuilds, I noticed that newer versions of my .desktop file are not copied to the resulting AppImage. There was some sort of caching going on in the runner, which always included an earlier version of the file. Triggering a rebuild did not replace the file, only pruning the $BUILDSOURCEDIR with an extra build reset the runner and the next build had the updated version of the .desktop file included.&#xA;&#xA;  script:&#xA;  rm -rf $BUILDSOURCE_DIR/*&#xA;&#xA;Distributing as AppImage&#xA;&#xA;Also, I was wondering why the download page for the package does not show icons/buttons to download the AppImage. Because this options was missing, I followed the advice to include an extra link on the website of the project.&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>How I built an AppImage for my hobby project using the Open Build Service (OBS).</p>

<p><a href="https://write.in0rdr.ch/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://write.in0rdr.ch/tag:obs" class="hashtag"><span>#</span><span class="p-category">obs</span></a> <a href="https://write.in0rdr.ch/tag:appimage" class="hashtag"><span>#</span><span class="p-category">appimage</span></a>
</p>

<h2 id="project-setup-from-template">Project setup from Template</h2>

<p>Start from the button “New Image” in the UI.</p>

<p><img src="https://code.in0rdr.ch/pub/blog/obs-new-image.png" alt="obs-new-image.png"></p>

<p>This creates an entry in the “Meta” settings of the Project. I first tried to follow the <a href="https://docs.appimage.org/packaging-guide/hosted-services/opensuse-build-service.html">instructions</a> by only referencing <code>project=&#34;OBS:AppImage&#34;</code>, but this does not work. The specific openSUSE build version (<code>project=&#34;OBS:AppImage:Templates:Leap:15.2&#34;</code>) is required. The “New Image” button in the UI produces a new “Project” with the proper “Meta” specification, which you can copy and paste into the desired project (in my case, “home”).</p>

<p>Adjust the Meta of the project to use the Template for <code>x86_64</code> and <code>aarch64</code> AppImages:</p>

<pre><code class="language-xml">  &lt;repository name=&#34;AppImage&#34;&gt;
    &lt;path project=&#34;OBS:AppImage:Templates:Leap:15.2&#34; repository=&#34;AppImage&#34;/&gt;
    &lt;arch&gt;x86_64&lt;/arch&gt;
    &lt;arch&gt;aarch64&lt;/arch&gt;
  &lt;/repository&gt;
</code></pre>

<h2 id="appimage-yml"><code>appimage.yml</code></h2>

<p>Start crafting the <a href="https://build.opensuse.org/projects/home:in0rdr/packages/diary/files/appimage.yml"><code>appimage.yml</code></a> file. The <a href="https://docs.appimage.org/packaging-guide/hosted-services/opensuse-build-service.html#most-simple-example">“most simple example”</a> was useful as a template.</p>

<p>Builds from source can fetch from Git or from archives (e.g., <code>.tar.gz</code>). Because I publish the application in two flavors (a “nightly” build and a stable release), I build the “nightly” flavor from the <a href="https://docs.appimage.org/packaging-guide/hosted-services/opensuse-build-service.html#simple-example-building-from-source">latest Git commit</a> (master branch) and the “stable” release from the <code>tar.gz</code> sources.</p>

<p>I needed to <code>make install</code> the application to <a href="https://www.chiark.greenend.org.uk/doc/make-doc/make.html/Makefile-Conventions.html#DESTDIR"><code>DESTDIR</code></a> <code>$BUILD_APPDIR</code> for it to be available in the resulting AppImage (see <a href="https://docs.appimage.org/packaging-guide/hosted-services/opensuse-build-service.html#appimage-yml-file">difference between source and build dir</a>).</p>
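<p>The relevant part of the recipe is the <code>script</code> section, which must install into <code>$BUILD_APPDIR</code>. A sketch under the assumption of a plain Makefile project (paths and targets are illustrative):</p>

<pre><code class="language-yaml">  script:
  - make
  # install into the AppDir, not into the build root
  - make install DESTDIR=$BUILD_APPDIR
</code></pre>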

<h2 id="desktop-file"><code>.desktop</code> file</h2>

<p>A <a href="https://build.opensuse.org/projects/home:in0rdr/packages/diary/files/diary.desktop"><code>.desktop</code> file</a> and an app icon are <a href="https://docs.appimage.org/reference/appdir.html#general-description">required</a> in the build directory by the AppDir specification. I did not know that the <code>.desktop</code> file&#39;s <code>Exec</code> directive does not work with absolute paths 😲.</p>

<p>Also something I learned about <code>.desktop</code> files: “The <code>Icon=</code> entry SHOULD NOT contain the file extension, the actual filename of the file, however, SHOULD carry the extension.” (see <a href="https://docs.appimage.org/reference/appdir.html#general-description">AppDir specs</a>).</p>
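<p>Putting both rules together, a minimal <code>.desktop</code> file looks roughly like this (the entries are illustrative; the real file ships with the package):</p>

<pre><code class="language-ini">[Desktop Entry]
Type=Application
Name=diary
# relative command name, no absolute paths in Exec=
Exec=diary
# no file extension in Icon= (the icon file itself carries one, e.g. diary.png)
Icon=diary
Categories=Utility;
</code></pre>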

<p>During a few rebuilds, I noticed that newer versions of my <code>.desktop</code> file were not copied into the resulting AppImage. There was some caching going on in the runner, which always included an earlier version of the file. Triggering a rebuild did not replace the file; only pruning <code>$BUILD_SOURCE_DIR</code> in an extra build step reset the runner, so the next build included the updated version of the <code>.desktop</code> file:</p>

<pre><code class="language-yaml">  script:
  - rm -rf $BUILD_SOURCE_DIR/*
</code></pre>

<h2 id="distributing-as-appimage">Distributing as AppImage</h2>

<p>Also, I was wondering why the <a href="https://software.opensuse.org/download.html?project=home%3Ain0rdr&amp;package=diary-nightly">download page</a> for the package does not show icons/buttons to download the AppImage. Because this option was missing, I followed the <a href="https://docs.appimage.org/packaging-guide/distribution.html#making-your-appimages-discoverable">advice</a> to include an extra link on the <a href="https://diary.p0c.ch">website of the project</a>.</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/appimages-with-open-build-service-obs</guid>
      <pubDate>Tue, 10 Feb 2026 17:39:41 +0000</pubDate>
    </item>
    <item>
      <title>Nomad authentication with OpenBao</title>
      <link>https://write.in0rdr.ch/nomad-authentication-with-openbao</link>
      <description>&lt;![CDATA[I started to use OpenBao as OpenID connect provider to authenticate my Nomad home lab.&#xA;&#xA;#nomad #openbao #jenkins #homelab&#xA;!--more--&#xA;&#xA;I have two automatic/system jobs that require a Token.&#xA;&#xA;The backup cron job is taking Nomad snapshots in regular intervals. The snapshot API requires a management token and the Nomad policy capability for snapshots with the operator are not implemented yet.&#xA;The Jenkins server runs Nomad jobs using the Nomad cloud plugin. This system needs a Token to access the Nomad API (AppRole not compatible, see below).&#xA;&#xA;I still keep my bootstrapping token around just in case I ever need it. That&#39;s ok unlike to procedures with OpenBao root tokens..&#xA;&#xA;  The bootstrap token can be deleted and is like any other token, care should be taken to not revoke all management tokens.&#xA;&#xA;Name             Type        Global  Accessor ID  Expired&#xA;Bootstrap Token  management  true              false&#xA;Snapshot         management  false             false&#xA;OIDC-vault       client      true              false&#xA;Jenkins          client      false             false&#xA;&#xA;The auth method in Nomad is still called Vault, never mind..&#xA;&#xA;The access for human users is authenticated by an OIDC provider in my OpenBao server.&#xA;&#xA;Because I already had an identity and alias setup (userpass authentication) including a group, I only needed to configure the provider and the assignment to the group to allow authentication with the new provider.&#xA;&#xA;In the OpenBao server, the default provider and the allowall assignment cannot be deleted. 
I assume it is similar to the “master” realm in a Keycloak instance 🤔.&#xA;&#xA;bao-openidconnect-vault.svg&#xA;&#xA;I had to define a NOMADTOKEN as “Jenkins credential”, because the Nomad cloud plugin for Jenkins cannot read secrets from OpenBao using an AppRole (the Nomad jobs spawned by the plugin can do this, just not the plugin itself).&#xA;&#xA;When I type nomad login in the shell, the browser opens and I can authenticate with OpenBao. What could be improved is outputting the OIDC redirect URI in the terminal. This is helpful when you need to login from disconnected machines (i.e., not the shell on your local machine).&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>I started to use <a href="https://openbao.org/docs/secrets/identity/oidc-provider">OpenBao as OpenID connect provider</a> to authenticate my Nomad home lab.</p>

<p><a href="https://write.in0rdr.ch/tag:nomad" class="hashtag"><span>#</span><span class="p-category">nomad</span></a> <a href="https://write.in0rdr.ch/tag:openbao" class="hashtag"><span>#</span><span class="p-category">openbao</span></a> <a href="https://write.in0rdr.ch/tag:jenkins" class="hashtag"><span>#</span><span class="p-category">jenkins</span></a> <a href="https://write.in0rdr.ch/tag:homelab" class="hashtag"><span>#</span><span class="p-category">homelab</span></a>
</p>

<p>I have two automatic/system jobs that require a token:</p>
<ul><li>The backup cron job takes Nomad snapshots at regular intervals. The <a href="https://developer.hashicorp.com/nomad/api-docs/operator/snapshot">snapshot API</a> requires a management token, and the <a href="https://github.com/hashicorp/nomad/issues/23614">Nomad policy capabilities for operator snapshots</a> are not implemented yet.</li>
<li>The Jenkins server runs Nomad jobs using the <a href="https://github.com/jenkinsci/nomad-plugin">Nomad cloud plugin</a>. This system needs a token to access the Nomad API (AppRole is not compatible, see below).</li></ul>
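<p>The backup job essentially boils down to a single CLI call (a sketch; my actual cron job wraps this with rotation and upload):</p>

<pre><code class="language-bash"># requires a management token in NOMAD_TOKEN
nomad operator snapshot save "nomad-$(date +%F).snap"
</code></pre>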

<p>I still keep my bootstrapping token around, just in case I ever need it. That&#39;s <a href="https://developer.hashicorp.com/nomad/tutorials/archive/access-control-bootstrap">ok</a>, in contrast to the procedures for OpenBao root tokens..</p>

<blockquote><p>The bootstrap token can be deleted and is like any other token, care should be taken to not revoke all management tokens.</p></blockquote>

<pre><code>Name             Type        Global  Accessor ID  Expired
Bootstrap Token  management  true    ***          false
Snapshot         management  false   ***          false
OIDC-vault       client      true    ***          false
Jenkins          client      false   ***          false
</code></pre>

<p>The auth method in Nomad is still called Vault, never mind..</p>

<p>The access for human users is authenticated by an <a href="https://developer.hashicorp.com/nomad/tutorials/archive/sso-oidc-vault">OIDC provider in my OpenBao server</a>.</p>

<p>Because I already had an identity and alias setup (userpass authentication) including a group, I only needed to configure the provider and the assignment to the group to allow authentication with the new provider.</p>
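<p>In terms of CLI calls, that setup boils down to an assignment, a client, and a provider on the OpenBao side. A sketch of the idea (all names, IDs, and redirect URIs here are illustrative, not my actual configuration):</p>

<pre><code class="language-bash"># allow the existing identity group to use the new client
bao write identity/oidc/assignment/nomad group_ids="&lt;group-id&gt;"

# register Nomad as an OIDC client with its callback URLs
bao write identity/oidc/client/nomad \
  redirect_uris="http://localhost:4649/oidc/callback" \
  assignments="nomad"

# expose the client through a provider
bao write identity/oidc/provider/nomad allowed_client_ids="&lt;client-id&gt;"
</code></pre>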

<p>In the OpenBao server, the <a href="https://openbao.org/docs/concepts/oidc-provider/#oidc-providers"><code>default</code> provider</a> and the <a href="https://openbao.org/docs/concepts/oidc-provider/#assignments"><code>allow_all</code> assignment</a> cannot be deleted. I assume it is similar to the “master” realm in a Keycloak instance 🤔.</p>

<p><img src="https://code.in0rdr.ch/pub/blog/bao-openidconnect-vault.svg" alt="bao-openidconnect-vault.svg"></p>

<p>I had to define a <code>NOMAD_TOKEN</code> as “Jenkins credential”, because the Nomad cloud plugin for Jenkins cannot read secrets from OpenBao using an AppRole (the Nomad jobs spawned by the plugin can do this, just not the plugin itself).</p>

<p>When I type <code>nomad login</code> in the shell, the browser opens and I can authenticate with OpenBao. One thing that could be improved is printing the OIDC redirect URI in the terminal. This is helpful when you need to log in from disconnected machines (i.e., not from the shell on your local machine).</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/nomad-authentication-with-openbao</guid>
      <pubDate>Sat, 15 Nov 2025 20:22:29 +0000</pubDate>
    </item>
    <item>
      <title>Emulate Raspberry Pi4 on QEMU</title>
      <link>https://write.in0rdr.ch/emulate-raspberry-pi4-on-qemu</link>
      <description>&lt;![CDATA[I was looking into emulating the Raspberry Pi OS on QEMU. This short post summarizes my findings.&#xA;&#xA;#raspberry #homelab #debian #qemu&#xA;!--more--&#xA;&#xA;I needed a virtual host to test Ansible scripts for my Raspberry Pi in the home lab. I found my way around this task by reading through the many great posts and examples online.&#xA;&#xA;Extract kernel and device tree&#xA;&#xA;First, you have to get the image (in my case, I used a modified build of the bookworm image) and extract the &#34;kernel&#34; and the &#34;device tree&#34; (to be honest, this was new for me when I read about this today).&#xA;&#xA;We do this by mounting the boot partition and extracting the relevant files.&#xA;&#xA;Check image partitions&#xA;$ fdisk -l ./HashiPi-pi0.img&#xA;&#xA;Setup loop device with image, scan partitions. This is easier than fiddling with fdisk and partition offsets for mounting.&#xA;&#xA;$ sudo losetup -P /dev/loop0 HashiPi-pi0.img&#xA;$ sudo mount /dev/loop0p1 /mnt&#xA;&#xA;copy kernel and device tree binary (dtb)&#xA;$ cp /mnt/kernel8.img .&#xA;$ cp /mnt/bcm2711-rpi-4-b.dtb .&#xA;&#xA;unmount&#xA;$ sudo umount /mnt&#xA;$ sudo losetup --detach /dev/loop0&#xA;&#xA;Patch device tree to enable USB controller&#xA;&#xA;The &#34;device tree binary&#34; (.dtb) needs to be translated into a readable &#34;device tree source&#34; (.dts) file.&#xA;&#xA;$ dtc -I dtb -O dts -o bcm2711-rpi-4-b.dts bcm2711-rpi-4-b.dtb&#xA;&#xA;The .dts file can be patched to enable the usb controller. 
This is required if we want to boot later using the usb-net device and port-forward (hostfwd) the ssh port.&#xA;&#xA;--- bcm2711-rpi-4-b.dts.orig    2025-09-21 15:05:59.304575294 +0200&#xA;+++ bcm2711-rpi-4-b.dts 2025-09-21 15:04:56.709581742 +0200&#xA;@@ -1450,7 +1450,7 @@&#xA;                        phy-names = &#34;usb2-phy&#34;;&#xA;                        interrupt-names = &#34;usb&#34;, &#34;soft&#34;;&#xA;                        power-domains = &lt;0x10 0x06&gt;;&#xA;-                        status = &#34;disabled&#34;;&#xA;+                        status = &#34;okay&#34;;&#xA;                        phandle = &lt;0xbf&gt;;&#xA;                };&#xA;&#xA;Unfortunately, we need to use this usb device, because all other emulated network devices are PCI-based, which is not supported by QEMU for the Raspberry Pi:&#xA;&#xA;$ qemu-system-aarch64 -device help&#xA;...&#xA;Network devices:&#xA;name &#34;e1000&#34;, bus PCI, alias &#34;e1000-82540em&#34;, desc &#34;Intel Gigabit Ethernet&#34;&#xA;name &#34;e1000-82544gc&#34;, bus PCI, desc &#34;Intel Gigabit Ethernet&#34;&#xA;name &#34;e1000-82545em&#34;, bus PCI, desc &#34;Intel Gigabit Ethernet&#34;&#xA;name &#34;e1000e&#34;, bus PCI, desc &#34;Intel 82574L GbE Controller&#34;&#xA;name &#34;i82550&#34;, bus PCI, desc &#34;Intel i82550 Ethernet&#34;&#xA;name &#34;i82551&#34;, bus PCI, desc &#34;Intel i82551 Ethernet&#34;&#xA;name &#34;i82557a&#34;, bus PCI, desc &#34;Intel i82557A Ethernet&#34;&#xA;name &#34;i82557b&#34;, bus PCI, desc &#34;Intel i82557B Ethernet&#34;&#xA;name &#34;i82557c&#34;, bus PCI, desc &#34;Intel i82557C Ethernet&#34;&#xA;name &#34;i82558a&#34;, bus PCI, desc &#34;Intel i82558A Ethernet&#34;&#xA;name &#34;i82558b&#34;, bus PCI, desc &#34;Intel i82558B Ethernet&#34;&#xA;name &#34;i82559a&#34;, bus PCI, desc &#34;Intel i82559A Ethernet&#34;&#xA;name &#34;i82559b&#34;, bus PCI, desc &#34;Intel i82559B Ethernet&#34;&#xA;name &#34;i82559c&#34;, bus PCI, desc &#34;Intel i82559C Ethernet&#34;&#xA;name &#34;i82559er&#34;, bus PCI, desc &#34;Intel i82559ER 
Ethernet&#34;&#xA;name &#34;i82562&#34;, bus PCI, desc &#34;Intel i82562 Ethernet&#34;&#xA;name &#34;i82801&#34;, bus PCI, desc &#34;Intel i82801 Ethernet&#34;&#xA;name &#34;igb&#34;, bus PCI, desc &#34;Intel 82576 Gigabit Ethernet Controller&#34;&#xA;name &#34;ne2kpci&#34;, bus PCI&#xA;name &#34;pcnet&#34;, bus PCI&#xA;name &#34;rocker&#34;, bus PCI, desc &#34;Rocker Switch&#34;&#xA;name &#34;rtl8139&#34;, bus PCI&#xA;name &#34;tulip&#34;, bus PCI&#xA;-  name &#34;usb-net&#34;, bus usb-bus&#xA;name &#34;virtio-net-device&#34;, bus virtio-bus # No &#39;virtio-bus&#39; bus found for device &#39;virtio-net-device&#39;&#xA;name &#34;virtio-net-pci&#34;, bus PCI, alias &#34;virtio-net&#34;&#xA;name &#34;virtio-net-pci-non-transitional&#34;, bus PCI&#xA;name &#34;virtio-net-pci-transitional&#34;, bus PCI&#xA;name &#34;vmxnet3&#34;, bus PCI, desc &#34;VMWare Paravirtualized Ethernet v3&#34;&#xA;&#xA;We really need the usb device (virtio-net-device does not work either I tried), that&#39;s the reason for the patch.&#xA;&#xA;A comment of the &#34;interrupt.memfault blog&#34; describes it nicely (very helpful community):&#xA;&#xA;  Note that for raspi4, the bcm2711-rpi-4-b.dtb devicetree file has disabled the USB controller.&#xA;So, to enable USB keyboard &amp; mouse, the .dtb file must be decompiled to .dts,&#xA;patched, and recompiled back to .dtb.&#xA;&#xA;Unfortunately, it&#39;s not only needed for keyboard and mouse, but also to make our network adapter work (for ssh port-forwarding).&#xA;&#xA;Thaa patching.. 
Then recompiling into binary form with dtc:&#xA;$ dtc -I dtb -O dts -o bcm2711-rpi-4-b.dts bcm2711-rpi-4-b.dtb&#xA;&#xA;Would be really nice to have this usb controller enabled by default in the next Trixie Pi OS 🤞&#xA;&#xA;Boot the image&#xA;&#xA;For booting the image, I had to ensure two things for the kernel arguments for the Raspberry 4:&#xA;&#xA;Use the ttyAMA1 console&#xA;Use the root partition mmcblk1p2&#xA;&#xA;Note that I&#39;m not enabling the keyboard &amp; mouse devices (as suggested in the references online), because I&#39;m mainly interested to connect to the machine remotely via the ssh port fowarding:&#xA;&#xA;$ sudo qemu-system-aarch64 \&#xA;  -machine raspi4b -cpu cortex-a72 \&#xA;  -dtb bcm2711-rpi-4-b-mod.dtb \ # use mod device tree&#xA;  -m 2G -smp 4 \&#xA;  -kernel kernel8.img -sd HashiPi-pi0.img \&#xA;  -append &#34;rw earlyprintk loglevel=8 console=ttyAMA1,115200 dwcotg.lpm_enable=0 root=/dev/mmcblk1p2 rootdelay=1&#34; \&#xA;  -device usb-net,netdev=net0 \&#xA;  -netdev user,id=net0,hostfwd=tcp::2222-:22&#xA;&#xA;That&#39;s it. Is still a bit slow, but good enough to throw some Ansible against the wall and see if it sticks.&#xA;&#xA;My idea here is to do less with the HashiCorp packer scripts and go back to more Ansible, because that might be more sustainable in the long run.. Let&#39;s see. 
Next I&#39;m probably also going to give the Trixie (nightly image) a try.&#xA;&#xA;References&#xA;&#xA;https://interrupt.memfault.com/blog/emulating-raspberry-pi-in-qemu&#xA;https://www.qemu.org/docs/master/system/qemu-manpage.html&#xA;https://www.qemu.org/docs/master/system/arm/raspi.html&#xA;https://github.com/trinitronx/qemu-raspbian/blob/main/run-raspi4.sh&#xA;https://github.com/trinitronx/qemu-raspbian/blob/main/bcm2711-rpi-4-b.dts.patch&#xA;https://downloads.raspberrypi.org&#xA;https://www.man7.org/linux/man-pages/man8/losetup.8.html&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>I was looking into emulating the Raspberry Pi OS on QEMU. This short post summarizes my findings.</p>

<p><a href="https://write.in0rdr.ch/tag:raspberry" class="hashtag"><span>#</span><span class="p-category">raspberry</span></a> <a href="https://write.in0rdr.ch/tag:homelab" class="hashtag"><span>#</span><span class="p-category">homelab</span></a> <a href="https://write.in0rdr.ch/tag:debian" class="hashtag"><span>#</span><span class="p-category">debian</span></a> <a href="https://write.in0rdr.ch/tag:qemu" class="hashtag"><span>#</span><span class="p-category">qemu</span></a>
</p>

<p>I needed a virtual host to test Ansible scripts for my Raspberry Pi in the home lab. I found my way around this task by reading through the many great posts and examples online.</p>

<h2 id="extract-kernel-and-device-tree">Extract kernel and device tree</h2>

<p>First, you have to get the <a href="https://downloads.raspberrypi.org">image</a> (in my case, I used a modified build of the bookworm image) and extract the “kernel” and the <a href="https://www.kernel.org/doc/html/latest/devicetree/usage-model.html">“device tree”</a> (to be honest, this was new for me when I read about this today).</p>

<p>We do this by mounting the boot partition and extracting the relevant files.</p>

<pre><code class="language-bash"># Check image partitions
$ fdisk -l ./HashiPi-pi0.img
</code></pre>

<p>Set up a loop device for the image and scan its partitions. This is easier than fiddling with fdisk and partition offsets for mounting.</p>

<pre><code class="language-bash">$ sudo losetup -P /dev/loop0 HashiPi-pi0.img
$ sudo mount /dev/loop0p1 /mnt

# copy kernel and device tree binary (dtb)
$ cp /mnt/kernel8.img .
$ cp /mnt/bcm2711-rpi-4-b.dtb .

# unmount
$ sudo umount /mnt
$ sudo losetup --detach /dev/loop0
</code></pre>

<h2 id="patch-device-tree-to-enable-usb-controller">Patch device tree to enable USB controller</h2>

<p>The “device tree binary” (.dtb) needs to be translated into a readable <a href="https://www.kernel.org/doc/html/latest/devicetree/bindings/dts-coding-style.html">“device tree source” (.dts)</a> file.</p>

<pre><code class="language-bash">$ dtc -I dtb -O dts -o bcm2711-rpi-4-b.dts bcm2711-rpi-4-b.dtb
</code></pre>

<p>The .dts file can be <a href="https://github.com/trinitronx/qemu-raspbian">patched</a> to enable the usb controller. This is required if we later want to boot with the <a href="https://en.wikipedia.org/wiki/Ethernet_over_USB">usbnet</a> device and port-forward (<a href="https://www.qemu.org/docs/master/system/qemu-manpage.html"><code>hostfwd</code></a>) the ssh port.</p>

<pre><code class="language-diff">--- bcm2711-rpi-4-b.dts.orig    2025-09-21 15:05:59.304575294 +0200
+++ bcm2711-rpi-4-b.dts 2025-09-21 15:04:56.709581742 +0200
@@ -1450,7 +1450,7 @@
                        phy-names = &#34;usb2-phy&#34;;
                        interrupt-names = &#34;usb&#34;, &#34;soft&#34;;
                        power-domains = &lt;0x10 0x06&gt;;
-                       status = &#34;disabled&#34;;
+                       status = &#34;okay&#34;;
                        phandle = &lt;0xbf&gt;;
                };
</code></pre>

<p>Unfortunately, we need to use this usb device, because all other emulated network devices are PCI-based, which is <a href="https://www.qemu.org/docs/master/system/arm/raspi.html">not supported by QEMU for the Raspberry Pi</a>:</p>

<pre><code class="language-bash">$ qemu-system-aarch64 -device help
...
Network devices:
name &#34;e1000&#34;, bus PCI, alias &#34;e1000-82540em&#34;, desc &#34;Intel Gigabit Ethernet&#34;
name &#34;e1000-82544gc&#34;, bus PCI, desc &#34;Intel Gigabit Ethernet&#34;
name &#34;e1000-82545em&#34;, bus PCI, desc &#34;Intel Gigabit Ethernet&#34;
name &#34;e1000e&#34;, bus PCI, desc &#34;Intel 82574L GbE Controller&#34;
name &#34;i82550&#34;, bus PCI, desc &#34;Intel i82550 Ethernet&#34;
name &#34;i82551&#34;, bus PCI, desc &#34;Intel i82551 Ethernet&#34;
name &#34;i82557a&#34;, bus PCI, desc &#34;Intel i82557A Ethernet&#34;
name &#34;i82557b&#34;, bus PCI, desc &#34;Intel i82557B Ethernet&#34;
name &#34;i82557c&#34;, bus PCI, desc &#34;Intel i82557C Ethernet&#34;
name &#34;i82558a&#34;, bus PCI, desc &#34;Intel i82558A Ethernet&#34;
name &#34;i82558b&#34;, bus PCI, desc &#34;Intel i82558B Ethernet&#34;
name &#34;i82559a&#34;, bus PCI, desc &#34;Intel i82559A Ethernet&#34;
name &#34;i82559b&#34;, bus PCI, desc &#34;Intel i82559B Ethernet&#34;
name &#34;i82559c&#34;, bus PCI, desc &#34;Intel i82559C Ethernet&#34;
name &#34;i82559er&#34;, bus PCI, desc &#34;Intel i82559ER Ethernet&#34;
name &#34;i82562&#34;, bus PCI, desc &#34;Intel i82562 Ethernet&#34;
name &#34;i82801&#34;, bus PCI, desc &#34;Intel i82801 Ethernet&#34;
name &#34;igb&#34;, bus PCI, desc &#34;Intel 82576 Gigabit Ethernet Controller&#34;
name &#34;ne2k_pci&#34;, bus PCI
name &#34;pcnet&#34;, bus PCI
name &#34;rocker&#34;, bus PCI, desc &#34;Rocker Switch&#34;
name &#34;rtl8139&#34;, bus PCI
name &#34;tulip&#34;, bus PCI
-&gt; name &#34;usb-net&#34;, bus usb-bus
name &#34;virtio-net-device&#34;, bus virtio-bus # No &#39;virtio-bus&#39; bus found for device &#39;virtio-net-device&#39;
name &#34;virtio-net-pci&#34;, bus PCI, alias &#34;virtio-net&#34;
name &#34;virtio-net-pci-non-transitional&#34;, bus PCI
name &#34;virtio-net-pci-transitional&#34;, bus PCI
name &#34;vmxnet3&#34;, bus PCI, desc &#34;VMWare Paravirtualized Ethernet v3&#34;
</code></pre>

<p>We really need the usb device (I tried <code>virtio-net-device</code>, and it does not work either); that&#39;s the reason for the patch.</p>

<p>A <a href="https://community.memfault.com/t/emulating-a-raspberry-pi-in-qemu-interrupt/684/10">comment</a> on the “interrupt.memfault blog” describes it nicely (very helpful community):</p>

<blockquote><p>Note that for raspi4, the bcm2711-rpi-4-b.dtb devicetree file has disabled the USB controller.
So, to enable USB keyboard &amp; mouse, the .dtb file must be decompiled to .dts,
patched, and recompiled back to .dtb.</p></blockquote>

<p>Unfortunately, it&#39;s not only needed for keyboard and mouse, but also to make our network adapter work (for ssh port-forwarding).</p>

<p>After patching, we recompile into binary form with <code>dtc</code>:</p>

<pre><code class="language-bash">$ dtc -I dts -O dtb -o bcm2711-rpi-4-b-mod.dtb bcm2711-rpi-4-b.dts
</code></pre>

<p>It would be really nice to have this usb controller enabled by default in the next Trixie Pi OS 🤞</p>

<h2 id="boot-the-image">Boot the image</h2>

<p>For booting the image, I had to ensure two things in the kernel arguments for the Raspberry Pi 4:</p>
<ul><li>Use the <code>ttyAMA1</code> console</li>
<li>Use the root partition <code>mmcblk1p2</code></li></ul>

<p>Note that I&#39;m not enabling the keyboard &amp; mouse devices (as suggested in the references online), because I&#39;m mainly interested in connecting to the machine remotely via ssh port forwarding:</p>

<pre><code class="language-bash"># boot with the modified device tree (bcm2711-rpi-4-b-mod.dtb)
$ sudo qemu-system-aarch64 \
  -machine raspi4b -cpu cortex-a72 \
  -dtb bcm2711-rpi-4-b-mod.dtb \
  -m 2G -smp 4 \
  -kernel kernel8.img -sd HashiPi-pi0.img \
  -append &#34;rw earlyprintk loglevel=8 console=ttyAMA1,115200 dwc_otg.lpm_enable=0 root=/dev/mmcblk1p2 rootdelay=1&#34; \
  -device usb-net,netdev=net0 \
  -netdev user,id=net0,hostfwd=tcp::2222-:22
</code></pre>
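
<p>With the <code>hostfwd</code> rule above, the guest&#39;s ssh port is reachable on the host. A minimal <code>~/.ssh/config</code> entry for this setup (the <code>qemu-pi</code> alias and the <code>pi</code> user are placeholders, the actual user depends on the image):</p>

<pre><code>Host qemu-pi
    HostName localhost
    Port 2222
    User pi
</code></pre>

<p>With that in place, <code>ssh qemu-pi</code> (or an Ansible inventory pointing at it) goes through the forwarded port 2222.</p>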

<p>That&#39;s it. It&#39;s still a bit slow, but good enough to throw some Ansible against the wall and see if it sticks.</p>

<p>My idea here is to do less with the HashiCorp Packer scripts and go back to more Ansible, because that might be more sustainable in the long run. Let&#39;s see. Next I&#39;m probably also going to give Trixie (the <a href="https://downloads.raspberrypi.org/nightlies/">nightly</a> image) a try.</p>

<h2 id="references">References</h2>
<ul><li><a href="https://interrupt.memfault.com/blog/emulating-raspberry-pi-in-qemu">https://interrupt.memfault.com/blog/emulating-raspberry-pi-in-qemu</a></li>
<li><a href="https://www.qemu.org/docs/master/system/qemu-manpage.html">https://www.qemu.org/docs/master/system/qemu-manpage.html</a></li>
<li><a href="https://www.qemu.org/docs/master/system/arm/raspi.html">https://www.qemu.org/docs/master/system/arm/raspi.html</a></li>
<li><a href="https://github.com/trinitronx/qemu-raspbian/blob/main/run-raspi4.sh">https://github.com/trinitronx/qemu-raspbian/blob/main/run-raspi4.sh</a></li>
<li><a href="https://github.com/trinitronx/qemu-raspbian/blob/main/bcm2711-rpi-4-b.dts.patch">https://github.com/trinitronx/qemu-raspbian/blob/main/bcm2711-rpi-4-b.dts.patch</a></li>
<li><a href="https://downloads.raspberrypi.org">https://downloads.raspberrypi.org</a></li>
<li><a href="https://www.man7.org/linux/man-pages/man8/losetup.8.html">https://www.man7.org/linux/man-pages/man8/losetup.8.html</a></li></ul>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/emulate-raspberry-pi4-on-qemu</guid>
      <pubDate>Sun, 21 Sep 2025 13:20:54 +0000</pubDate>
    </item>
    <item>
      <title>Rootless Podman storage driver</title>
      <link>https://write.in0rdr.ch/rootless-podman-storage-driver</link>
      <description>&lt;![CDATA[I was breaking my head over a very weird Jenkins error lately. This post goes in depth on the default storage driver (VFS) for rootless Podman.&#xA;&#xA;#podman #jenksin #cicd&#xA;!--more--&#xA;&#xA;My Jenkins setup in the homelab&#xA;Recently I wanted to setup a LaTex pipeline in my Jenkins. During this task I noticed, that certain images fail with a particular error message, that was not meaningful at all (in hindsight).&#xA;&#xA;The Nomad cloud plugin for Jenkins starts Jenkins workers as Nomad jobs. These jobs are started from an image that includes the Jenkins inbound agent. This agent connects to the Jenkins server.&#xA;&#xA;Alongside the Jenkins agent, I also install Buildah and Docker on that image, because I use these Jenkins workers to build container images.&#xA;&#xA;Besides building images (with Buildah), I also use the Docker workflow to run arbitrary container images in my Jenkins pipelines.&#xA;&#xA;The agent image mounts the Podman socket of the Nomad node. 
It does not mount the socket of the &#34;root&#34; Podman process, only the rootless socket of a particular jenkins user on the nodes.&#xA;&#xA;I reserved this users on the nodes (my Raspberry Pies) for this particular purpose.&#xA;&#xA;In that sense, whenever I run an image with the Docker workflow plugin in my Jenkins pipeline, it is started as rootless Podman container under that jenkins user on the Nomad node where the pipeline worker is scheduled.&#xA;&#xA;VFS default storage driver for rootless Podman&#xA;I only noticed today, that even though I have a more recent kernel than 5.12.9 (I run 6.12.41), the rootless Podman configuration does not check the kernel version but simply defaults to the VFS storage.&#xA;&#xA;  The default storage driver for UID 0 is configured in containers-storage.conf(5) in rootless mode), and is vfs for non-root users when fuse-overlayfs is not available.&#xA;&#xA;I recently uninstall fuse-overlayfs from my Pies, because I run a recent kernel 5.12.9 that supports native overlayfs.&#xA;&#xA;Improve performance for Docker workflow in Jenkins&#xA;In order to achieve better performance and fix the weird and unclear error in my Jenkins pipeline that use Docker workflow, I had to migrate to overlayfs storage driver for the rootless Podman socket (of the jenkins user).&#xA;&#xA;As jenkins user on the nodes, cleanup the old vfs storage:&#xA;$ podman system reset&#xA;&#xA;Also had to help a bit (as root):&#xA;rm -rf /home/jenkins/.local/share/containers/&#xA;&#xA;Then change the the storage driver setting (I changed it globally in /etc/containers/storage.conf because my root Podman process already runs with overlayfs storage driver anyways, so it&#39;s basically a global default now).&#xA;&#xA;Configure overlay storage for Podman&#xA;cat &lt;EOF  /etc/containers/storage.conf&#xA;[storage]&#xA;driver=&#34;overlay&#34;&#xA;runroot = &#34;/var/run/containers/storage&#34;&#xA;graphroot = &#34;/var/lib/containers/storage&#34;&#xA;EOF&#xA;&#xA;This 
explicit setting now also holds for the rootless Podman socket. I was not aware of that. It can be checked with the jenkins user (rootless Podman):&#xA;&#xA;$ podman info | grep graph&#xA;  graphDriverName: overlay&#xA;&#xA;Really glad I got this fixed.&#xA;&#xA;Let me know your Podman/Buildah/Jenkins stories on the Fediverse or via chat.&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>I was breaking my head over a <a href="https://community.jenkins.io/t/interruptedexception-with-docker-worfklow-plugin-and-large-images/35550">very weird Jenkins error</a> lately. This post goes in depth on the default storage driver (VFS) for rootless Podman.</p>

<p><a href="https://write.in0rdr.ch/tag:podman" class="hashtag"><span>#</span><span class="p-category">podman</span></a> <a href="https://write.in0rdr.ch/tag:jenksin" class="hashtag"><span>#</span><span class="p-category">jenksin</span></a> <a href="https://write.in0rdr.ch/tag:cicd" class="hashtag"><span>#</span><span class="p-category">cicd</span></a>
</p>

<h2 id="my-jenkins-setup-in-the-homelab">My Jenkins setup in the homelab</h2>

<p>Recently I wanted to set up a LaTeX pipeline in my Jenkins. During this task I noticed that certain images fail with a <a href="https://community.jenkins.io/t/interruptedexception-with-docker-worfklow-plugin-and-large-images/35550">particular error message</a> that was not meaningful at all (in hindsight).</p>

<p>The <a href="https://github.com/jenkinsci/nomad-plugin">Nomad cloud plugin</a> for Jenkins starts Jenkins workers as Nomad jobs. These jobs are started from an image that includes the <a href="https://github.com/jenkinsci/remoting/tree/master">Jenkins inbound agent</a>. This agent connects to the Jenkins server.</p>

<p>Alongside the Jenkins agent, I <a href="https://code.in0rdr.ch/nomad/file/docker/docker-jenkins-inbound-agent/Dockerfile.html">also install Buildah and Docker</a> on that image, because I use these Jenkins workers to build container images.</p>

<p>Besides building images (with Buildah), I also use the <a href="https://github.com/jenkinsci/docker-workflow-plugin/tree/master">Docker workflow</a> to run arbitrary container images in my Jenkins pipelines.</p>

<p>The agent image <a href="https://code.in0rdr.ch/nomad/file/hcl/default/jenkins/templates/jenkins.yaml.tmpl.html">mounts the Podman socket of the Nomad node</a>. It does not mount the socket of the “root” Podman process, only the rootless socket of a particular <code>jenkins</code> user on the nodes.</p>

<p>I reserved this user on the nodes (my Raspberry Pis) for this particular purpose.</p>

<p>In that sense, whenever I run an image with the Docker workflow plugin in my Jenkins pipeline, it is started as rootless Podman container under that <code>jenkins</code> user on the Nomad node where the pipeline worker is scheduled.</p>

<h2 id="vfs-default-storage-driver-for-rootless-podman">VFS default storage driver for rootless Podman</h2>

<p>I only noticed today that even though I have a <a href="https://docs.podman.io/en/latest/markdown/podman.1.html#note-unsupported-file-systems-in-rootless-mode">more recent kernel than 5.12.9</a> (I run 6.12.41), the rootless Podman configuration does not check the kernel version but simply <a href="https://docs.podman.io/en/latest/markdown/podman.1.html#storage-driver-value">defaults to the VFS storage driver</a>.</p>

<blockquote><p>The default storage driver for UID 0 is configured in containers-storage.conf(5) in rootless mode), and is vfs for non-root users when fuse-overlayfs is not available.</p></blockquote>

<p>I recently uninstalled fuse-overlayfs from my Pis, because I run a kernel more recent than 5.12.9, which supports <a href="https://www.redhat.com/en/blog/podman-rootless-overlay">native overlayfs</a>.</p>
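
<p>A quick way to check whether the running kernel is new enough for rootless native overlayfs (5.12.9 is the threshold from the Podman docs; the <code>version_ge</code> helper is my own sketch):</p>

<pre><code class="language-bash"># true if version $1 is at least version $2 (compare with sort -V)
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge "$(uname -r | cut -d- -f1)" 5.12.9; then
  echo "kernel supports rootless native overlayfs"
fi
</code></pre>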

<h2 id="improve-performance-for-docker-workflow-in-jenkins">Improve performance for Docker workflow in Jenkins</h2>

<p>In order to <a href="https://github.com/containers/podman/blob/main/docs/tutorials/performance.md#choosing-a-storage-driver">achieve better performance</a> and fix <a href="https://community.jenkins.io/t/interruptedexception-with-docker-worfklow-plugin-and-large-images/35550">the weird and unclear error</a> in my Jenkins pipelines that use the Docker workflow, I had to <a href="https://docs.podman.io/en/latest/markdown/podman-system-reset.1.html#switching-rootless-user-from-vfs-driver-to-overlay-with-fuse-overlayfs">migrate</a> to the overlay storage driver for the rootless Podman socket (of the <code>jenkins</code> user).</p>

<p>As <code>jenkins</code> user on the nodes, clean up the old VFS storage:</p>

<pre><code class="language-bash">$ podman system reset
</code></pre>

<p>Also had to help a bit (as root):</p>

<pre><code class="language-bash"># rm -rf /home/jenkins/.local/share/containers/
</code></pre>

<p>Then change the storage driver setting (I changed it globally in <code>/etc/containers/storage.conf</code>, because my root Podman process already runs with the overlay storage driver anyway, so it&#39;s basically a global default now).</p>

<pre><code class="language-bash"># Configure overlay storage for Podman
cat &lt;&lt;EOF &gt; /etc/containers/storage.conf
[storage]
driver=&#34;overlay&#34;
runroot = &#34;/var/run/containers/storage&#34;
graphroot = &#34;/var/lib/containers/storage&#34;
EOF
</code></pre>
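
<p>As an alternative to the global file, rootless Podman also reads a per-user configuration, so the same override could live in the <code>jenkins</code> home directory instead (a sketch, see containers-storage.conf(5)):</p>

<pre><code># ~/.config/containers/storage.conf (per-user override for rootless Podman)
[storage]
driver = "overlay"
</code></pre>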

<p>This <strong>explicit setting</strong> now also holds for the rootless Podman socket. I was not aware of that. It can be checked with the <code>jenkins</code> user (rootless Podman):</p>

<pre><code class="language-bash">$ podman info | grep graph
  graphDriverName: overlay
</code></pre>

<p>Really glad I got this fixed.</p>

<p>Let me know your Podman/Buildah/Jenkins stories on the Fediverse or via chat.</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/rootless-podman-storage-driver</guid>
      <pubDate>Sat, 06 Sep 2025 19:33:21 +0000</pubDate>
    </item>
    <item>
      <title>Docker pull through HAProxy</title>
      <link>https://write.in0rdr.ch/docker-pull-through-haproxy</link>
      <description>&lt;![CDATA[This is a story about pulling Docker images through HAProxy in my home lab.&#xA;&#xA;#selfhosting #homelab #docker #haproxy&#xA;!--more--&#xA;&#xA;I observed an interesting issue in my Jenkins pipeline. The image pull aborted with the following error message:&#xA;&#xA;Error: writing blob: storing blob to file &#34;/var/tmp/storage1360560957/1&#34;: happened during read: unexpected EOF&#xA;&#xA;First I thought it has something to do with the storage. But I was wrong. The culprit was the network.&#xA;&#xA;More specifically, I noticed that pulling through my HAProxy instance was the issue, but pulling through the nodes registry port (directly) was fine.&#xA;&#xA;When looking into the HAProxy logs, I noticed that the requests fail with the particular error flags cD:&#xA;&#xA;Sep 02 02:23:29 haproxy haproxy[2836]: 10.0.0.102:34982 [02/Sep/2025:02:22:47.563] registryfront registry/pi3 0/0/0/87/42156 200 466612119 - - cD-- 6/1/0/0/0 0/0 {haproxy.lan:5000} &#34;GET /v2/texlive/blobs/sha256:2fde6c0b50af2b1fda7ed0092ad1f1cc6897d7cb723dfcb0d2bc15201bbd7191 HTTP/1.1&#34;&#xA;&#xA;The HAProxy docs on stream states:&#xA;&#xA;     cD   The client did not send nor acknowledge any data for as long as the&#xA;          &#34;timeout client&#34; delay. This is often caused by network failures on&#xA;          the client side, or the client simply leaving the net uncleanly.&#xA;&#xA;First flag c:&#xA;  On the first character, a code reporting the first event which caused the&#xA;    stream to terminate :&#xA;&#xA;        c : the client-side timeout expired while waiting for the client to&#xA;            send or receive data.&#xA;&#xA;Second flag D:&#xA;  on the second character, the TCP or HTTP stream state when it was closed :&#xA;&#xA;        D : the stream was in the DATA phase.&#xA;&#xA;That was useful - &#34;the client-side timeout expired&#34;. 
It simply means that I need to bump the client timeouts (to 30m from 5s in this example) in the HAProxy frontend for my Docker registry.&#xA;&#xA;frontend registryfront&#xA;    bind                 :5000&#xA;    timeout              client 30m # was 5s&#xA;    timeout              client-fin 30m # was 30s&#xA;    mode                 http&#xA;    option               httplog&#xA;                         # display host header in logs&#xA;    capture              request header Host len 30&#xA;&#xA;    default_backend      registry&#xA;&#xA;The pull request through the proxy afterwards show no error flags (----):&#xA;Sep 02 02:30:27 haproxy haproxy[2850]: 10.0.0.102:53506 [02/Sep/2025:02:27:03.674] registryfront registry/pi3 0/0/0/15/203648 200 2334919898 - - ---- 7/1/0/0/0 0/0 {haproxy.lan:5000} &#34;GET /v2/texlive/blobs/sha256:2fde6c0b50af2b1fda7ed0092ad1f1cc6897d7cb723dfcb0d2bc15201bbd7191 HTTP/1.1&#34;&#xA;&#xA;Podman pull succeeds 🎉&#xA;&#xA;Ping me in chat or Fediverse if you have more suggestions regarding HAProxy configuration for private Docker registries. Happy self-hosting!&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>This is a story about pulling Docker images through HAProxy in my home lab.</p>

<p><a href="https://write.in0rdr.ch/tag:selfhosting" class="hashtag"><span>#</span><span class="p-category">selfhosting</span></a> <a href="https://write.in0rdr.ch/tag:homelab" class="hashtag"><span>#</span><span class="p-category">homelab</span></a> <a href="https://write.in0rdr.ch/tag:docker" class="hashtag"><span>#</span><span class="p-category">docker</span></a> <a href="https://write.in0rdr.ch/tag:haproxy" class="hashtag"><span>#</span><span class="p-category">haproxy</span></a>
</p>

<p>I observed an interesting issue in my Jenkins pipeline. The image pull aborted with the following error message:</p>

<pre><code>Error: writing blob: storing blob to file &#34;/var/tmp/storage1360560957/1&#34;: happened during read: unexpected EOF
</code></pre>

<p>First I thought it had something to do with the storage. But I was wrong. The culprit was the network.</p>

<p>More specifically, I noticed that pulling through my HAProxy instance was the issue, while pulling directly through the node&#39;s registry port was fine.</p>

<p>When looking into the HAProxy logs, I noticed that the requests fail with the particular error flags <code>cD</code>:</p>

<pre><code>Sep 02 02:23:29 haproxy haproxy[2836]: 10.0.0.102:34982 [02/Sep/2025:02:22:47.563] registryfront registry/pi3 0/0/0/87/42156 200 466612119 - - cD-- 6/1/0/0/0 0/0 {haproxy.lan:5000} &#34;GET /v2/texlive/blobs/sha256:2fde6c0b50af2b1fda7ed0092ad1f1cc6897d7cb723dfcb0d2bc15201bbd7191 HTTP/1.1&#34;
</code></pre>

<p>The <a href="https://www.haproxy.com/documentation/haproxy-configuration-manual/latest/#8.5">HAProxy docs</a> on stream state at disconnection explain:</p>

<pre><code>     cD   The client did not send nor acknowledge any data for as long as the
          &#34;timeout client&#34; delay. This is often caused by network failures on
          the client side, or the client simply leaving the net uncleanly.
</code></pre>

<p>First flag <code>c</code>:</p>

<pre><code>  - On the first character, a code reporting the first event which caused the
    stream to terminate :

        c : the client-side timeout expired while waiting for the client to
            send or receive data.
</code></pre>

<p>Second flag <code>D</code>:</p>

<pre><code>  - on the second character, the TCP or HTTP stream state when it was closed :

        D : the stream was in the DATA phase.
</code></pre>
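
<p>The termination flags are easy to miss in these long lines. A small sketch to pull them out, assuming this journald-prefixed HTTP log layout, where the flags are the 15th whitespace-separated field:</p>

<pre><code class="language-bash">log='Sep 02 02:23:29 haproxy haproxy[2836]: 10.0.0.102:34982 [02/Sep/2025:02:22:47.563] registryfront registry/pi3 0/0/0/87/42156 200 466612119 - - cD-- 6/1/0/0/0 0/0 {haproxy.lan:5000} "GET /v2/texlive/blobs/sha256:2fde6c0b50af2b1fda7ed0092ad1f1cc6897d7cb723dfcb0d2bc15201bbd7191 HTTP/1.1"'

# extract the termination state flags (field 15 in this layout)
flags=$(echo "$log" | awk '{print $15}')
echo "$flags"   # prints cD--
</code></pre>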

<p>That was useful – “the client-side timeout expired”. It simply means that I need to bump the client timeouts (to 30m from 5s in this example) in the HAProxy frontend for my Docker registry.</p>

<pre><code>frontend registryfront
    bind                 :5000
    timeout              client 30m # was 5s
    timeout              client-fin 30m # was 30s
    mode                 http
    option               httplog
                         # display host header in logs
    capture              request header Host len 30

    default_backend      registry
</code></pre>
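
<p>To sanity-check the new value: dividing the largest blob from the logs by a pessimistic client throughput gives a lower bound for the timeout (the 2 MiB/s figure is just an assumed worst case):</p>

<pre><code class="language-bash">blob_bytes=2334919898             # texlive blob size from the log
throughput=$((2 * 1024 * 1024))   # assume a 2 MiB/s worst-case client
secs=$((blob_bytes / throughput))
echo "$secs"                      # ~19 minutes, so 5s was far too low and 30m leaves headroom
</code></pre>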

<p>The pull request through the proxy afterwards shows no error flags (<code>----</code>):</p>

<pre><code>Sep 02 02:30:27 haproxy haproxy[2850]: 10.0.0.102:53506 [02/Sep/2025:02:27:03.674] registryfront registry/pi3 0/0/0/15/203648 200 2334919898 - - ---- 7/1/0/0/0 0/0 {haproxy.lan:5000} &#34;GET /v2/texlive/blobs/sha256:2fde6c0b50af2b1fda7ed0092ad1f1cc6897d7cb723dfcb0d2bc15201bbd7191 HTTP/1.1&#34;
</code></pre>

<p>Podman pull succeeds 🎉</p>

<p>Ping me in chat or Fediverse if you have more suggestions regarding HAProxy configuration for private Docker registries. Happy self-hosting!</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/docker-pull-through-haproxy</guid>
      <pubDate>Wed, 03 Sep 2025 14:17:02 +0000</pubDate>
    </item>
    <item>
      <title>Building a Skateboard Ledge</title>
      <link>https://write.in0rdr.ch/building-a-skateboard-ledge</link>
      <description>&lt;![CDATA[Took the chance to pursue a little hobby project - building my own skateboard ledge.&#xA;&#xA;#skateboarding #sport&#xA;!--more--&#xA;&#xA;The inspiration to build my own ledge came from a colleague who refurbished an old mini ramp. So I thought I could actually do a similar thing in my neighborhood and see how it works out.&#xA;&#xA;I followed the Red Bull guide to the dot. Only made the ledge a bit longer (250cm) because these are the standard wood ply lengths that you can buy around here.&#xA;&#xA;The final result is quite ok👌🏼&#xA;ledge waxed&#xA;&#xA;Of course, if you look closely, you can see that I&#39;m not the most skilled handyman :) Nevertheless, I&#39;m pretty happy with the result.&#xA;&#xA;I only have a few questions regarding the durability of the top sheet. Maybe a small 6mm weather resistant ply would be better, we will see..&#xA;&#xA;The ledge is waxed and ready to be tested in my neighborhood.&#xA;&#xA;ledge location&#xA;&#xA;I wonder how long it will take until someone protests or destroys/removes the ledge. Until then, it&#39;s basically my personal skatepark :) and open for all. Happy skateboarding folks!&#xA;&#xA;I&#39;ll update the thread here on the Fediverse once I have some footage, it&#39;s still raining cats around here ⛈️🐱&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>Took the chance to pursue a little hobby project – building my own skateboard ledge.</p>

<p><a href="https://write.in0rdr.ch/tag:skateboarding" class="hashtag"><span>#</span><span class="p-category">skateboarding</span></a> <a href="https://write.in0rdr.ch/tag:sport" class="hashtag"><span>#</span><span class="p-category">sport</span></a>
</p>

<p>The inspiration to build my own ledge came from a colleague who refurbished an old mini ramp. So I thought I could actually do a similar thing in my neighborhood and see how it works out.</p>

<p>I followed the <a href="https://www.redbull.com/de-de/skateboard-curbs-selber-bauen-die-anleitung">Red Bull guide</a> to the letter, only making the ledge a bit longer (250 cm), because that is the standard plywood length you can buy around here.</p>

<p>The final result is quite OK 👌🏼
<img src="https://code.in0rdr.ch/pub/blog/ledge_waxed.jpeg" alt="ledge waxed"></p>

<p>Of course, if you look closely, you can see that I&#39;m not the most skilled handyman :) Nevertheless, I&#39;m pretty happy with the result.</p>

<p>My only open question concerns the durability of the top sheet. Maybe a thin 6 mm weather-resistant ply would hold up better, we will see..</p>

<p>The ledge is waxed and ready to be tested in my neighborhood.</p>

<p><img src="https://code.in0rdr.ch/pub/blog/ledge_location.jpeg" alt="ledge location"></p>

<p>I wonder how long it will take until someone protests or destroys/removes the ledge. Until then, it&#39;s basically my personal skatepark :) and open for all. Happy skateboarding folks!</p>

<p>I&#39;ll update the thread here on the Fediverse once I have some footage, it&#39;s still raining cats around here ⛈️🐱</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/building-a-skateboard-ledge</guid>
      <pubDate>Sun, 27 Jul 2025 12:08:29 +0000</pubDate>
    </item>
    <item>
      <title>Borg2 backup with rclone</title>
      <link>https://write.in0rdr.ch/borg2-backup-with-rclone</link>
      <description>&lt;![CDATA[I recently started to figure out how to backup with Borg2 to cloud storage. Can be achieved conveniently since Borg2 2.0.0b11. At the time of this writing, I used 2.0.0b14. This post is simply a rambling on my backup journey to implement this in a semi-automated way on my Turris Omnia router.&#xA;&#xA;#borg #rclone #backup&#xA;!--more--&#xA;&#xA;A while ago I purchased quite some TB of storage at pCloud, a small Swiss cloud storage provider. Only recently, I found out that you can use rclone to push files and data to the cloud in an rsync-like fashion.&#xA;&#xA;Since I wanted to use that approach to backup files from my router, I first figured ways to install it with opkg. I followed the Turris docs to switch to a more recent feed/branch &#34;Here be Lions&#34; but noticed quickly, that I actually needed to install Borg2. What most distributions package nowadays is Borg (verison 1), which is the stable release. Borg2 is a breaking change (incompatible, but runs the new feature that supports rclone).&#xA;&#xA;Therefore, I decided to build a small Debian 12 lxc container and run Borg inside the container. That approach worked well.&#xA;&#xA;To mount a host path inside the container, I used the following modification of the lxc config file:&#xA;&#xA;Mount host ssd&#xA;lxc.mount.entry = /srv host-srv none bind,create=dir 0 0&#xA;&#xA;This will mount the /srv directory of the Turris host to the container, so the Borg process can backup files from that path.&#xA;&#xA;The rclone setup with pCloud was straight forward, I simply needed to confirm the login from my laptop with a browser, because the Turris cannot connect via browser to confirm the login on pCloud. 
Easy!&#xA;&#xA;The next step was configuring the Borg backup repository on pCloud.&#xA;&#xA;My borg configuration points to the backup folder on pCloud:&#xA;~/.config/borg/config &#xA;BORGREPO=&#39;rclone:pcloud:backup-folder&#39;&#xA;BORGPASSPHRASE=&#39;***&#39;&#xA;PATTERNSFILE=&#39;/root/.config/borg/patterns.lst&#39;&#xA;&#xA;I also maintain a patterns file, but won&#39;t go into details here.&#xA;&#xA;borg repo-create --encryption=repokey-chacha20-poly1305&#xA;&#xA;⚠️ Note that the repo-create and also a lot of other commands in Borg2 are similar, but different from the Borg v1 commands. A little bit confusing when reading the docs, but you&#39;ll get the hang of it..&#xA;&#xA;I chose to use chacha20-poly1305 encryption mode, because that combination was suggested to me as the fastest algorithm on my Turris. You can find out the speed of the different supported algorithms by running:&#xA;&#xA;borg benchmark cpu&#xA;&#xA;This is yet another convenient feature of Borg2.&#xA;&#xA;Lastly, I started the backup using a Systemd timer and some variation of borg create combined with another prune command to prune old backups.&#xA;&#xA;So far, so good. I&#39;m happy that I finally found a suitable solution for my backup. The process seems quite slow though. In my create borg command I included the --compression lz4 (which should be Speedy Gonzales), but more than 8h for the first backup? Common..&#xA;&#xA;speedy-gonzales&#xA;&#xA;I found a small signal trick that allowed me to check the current state of the upload process. 
This will show me the current file of the upload process:&#xA;&#xA;kill -s USR1 $(pidof python) &amp;&amp; journalctl -eu borg-backup | tail -1&#xA;&#xA;I could also contribute a small improvement in the Borg docs regarding the compilation of the dependencies required for Borg2 🎉 so the hassle was worth it.&#xA;&#xA;For automation of the entire procedure, I created a small packer build script that can rebuild the lxc container from scratch whenever needed. Essentially, it contains the commands to install the prerequisites for Borg on Debian followed by the installation with Pip in an virtual environment and the configuration with rclone.&#xA;&#xA;Edit 2024-12-17&#xA;&#xA;Turris Omnia Borg benchmark:&#xA;Chunkers =======================================================&#xA;buzhash,19,23,21,4095    1GB        12.052s&#xA;fixed,1048576            1GB        2.655s&#xA;Non-cryptographic checksums / hashes ===========================&#xA;xxh64                    1GB        2.394s&#xA;crc32 (zlib)             1GB        2.970s&#xA;Cryptographic hashes / MACs ====================================&#xA;hmac-sha256              1GB        10.785s&#xA;blake2b-256              1GB        24.264s&#xA;Encryption =====================================================&#xA;aes-256-ctr-hmac-sha256  1GB        39.176s&#xA;aes-256-ctr-blake2b      1GB        51.622s&#xA;aes-256-ocb              1GB        34.483s&#xA;chacha20-poly1305        1GB        12.335s&#xA;KDFs (slow is GOOD, use argon2!) 
===============================&#xA;pbkdf2                   5          1.969s&#xA;argon2                   5          7.606s&#xA;Compression ====================================================&#xA;lz4          0.1GB      0.239s&#xA;zstd,1       0.1GB      0.739s&#xA;zstd,3       0.1GB      0.923s&#xA;zstd,5       0.1GB      16.528s&#xA;zstd,10      0.1GB      26.171s&#xA;zstd,16      0.1GB      51.617s&#xA;zstd,22      0.1GB      64.239s&#xA;zlib,0       0.1GB      0.703s&#xA;zlib,6       0.1GB      15.758s&#xA;zlib,9       0.1GB      16.217s&#xA;lzma,0       0.1GB      88.872s&#xA;lzma,6       0.1GB      114.931s&#xA;lzma,9       0.1GB      96.051s&#xA;msgpack ========================================================&#xA;msgpack      100k Items 0.818s&#xA;&#xA;For comparison, on my x230, where I would choose blake2 for hashing (blake2-chacha20-poly1305):&#xA;Chunkers =======================================================&#xA;buzhash,19,23,21,4095    1GB        1.360s&#xA;fixed,1048576            1GB        0.134s&#xA;Non-cryptographic checksums / hashes ===========================&#xA;xxh64                    1GB        0.097s&#xA;crc32 (zlib)             1GB        0.406s&#xA;Cryptographic hashes / MACs ====================================&#xA;hmac-sha256              1GB        3.474s&#xA;blake2b-256              1GB        1.980s&#xA;Encryption =====================================================&#xA;aes-256-ctr-hmac-sha256  1GB        3.901s&#xA;aes-256-ctr-blake2b      1GB        3.938s&#xA;aes-256-ocb              1GB        0.572s&#xA;chacha20-poly1305        1GB        1.251s&#xA;KDFs (slow is GOOD, use argon2!) 
===============================&#xA;pbkdf2                   5          0.362s&#xA;argon2                   5          0.434s&#xA;Compression ====================================================&#xA;lz4          0.1GB      0.022s&#xA;zstd,1       0.1GB      0.041s&#xA;zstd,3       0.1GB      0.062s&#xA;zstd,5       0.1GB      0.104s&#xA;zstd,10      0.1GB      0.188s&#xA;zstd,16      0.1GB      13.794s&#xA;zstd,22      0.1GB      14.795s&#xA;zlib,0       0.1GB      0.067s&#xA;zlib,6       0.1GB      2.735s&#xA;zlib,9       0.1GB      2.740s&#xA;lzma,0       0.1GB      18.819s&#xA;lzma,6       0.1GB      36.049s&#xA;lzma,9       0.1GB      30.293s&#xA;msgpack ========================================================&#xA;msgpack      100k Items 0.258s&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
<content:encoded><![CDATA[<p>I recently started to figure out how to back up to cloud storage with Borg2. This has been conveniently possible since <a href="https://github.com/borgbackup/borg/blob/master/docs/changes.rst#version-200b11-2024-09-26">Borg2 2.0.0b11</a>; at the time of writing, I used 2.0.0b14. This post is simply a ramble about my journey to implement this in a semi-automated way on my Turris Omnia router.</p>

<p><a href="https://write.in0rdr.ch/tag:borg" class="hashtag"><span>#</span><span class="p-category">borg</span></a> <a href="https://write.in0rdr.ch/tag:rclone" class="hashtag"><span>#</span><span class="p-category">rclone</span></a> <a href="https://write.in0rdr.ch/tag:backup" class="hashtag"><span>#</span><span class="p-category">backup</span></a>
</p>

<p>A while ago I purchased several TB of storage at <a href="https://de.wikipedia.org/wiki/PCloud">pCloud</a>, a small Swiss cloud storage provider. Only recently, I found out that you can use <a href="https://en.wikipedia.org/wiki/Rclone">rclone</a> to push files and data to the cloud in an rsync-like fashion.</p>

<p>Since I wanted to use that approach to back up files from my router, I first looked into ways to install it with opkg. I followed the Turris docs to switch to a <a href="https://gitlab.nic.cz/turris/os/build/blob/hbk/WORKFLOW.adoc#user-content-here-be-lions-hbl">more recent feed/branch “Here be Lions”</a>, but quickly noticed that I actually needed Borg2. What most distributions package nowadays is Borg version 1, the stable release. Borg2 is a breaking change: incompatible with v1 repositories, but it brings the new rclone support.</p>

<p>Therefore, I decided to build a small Debian 12 LXC container and run Borg inside it. That approach worked well.</p>

<p>To mount a host path inside the container, I used the following modification of the lxc config file:</p>

<pre><code># Mount host ssd
lxc.mount.entry = /srv host-srv none bind,create=dir 0 0
</code></pre>

<p>This mounts the <code>/srv</code> directory of the Turris host into the container, so the Borg process can back up files from that path.</p>

<p>The <a href="https://rclone.org/pcloud/">rclone setup with pCloud</a> was straightforward: I only needed to confirm the login from my laptop with a browser, because the Turris itself has no browser to complete the pCloud login. Easy!</p>
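<p>The result of that login dance is a remote entry in the rclone config, roughly of this shape (the remote name and token below are placeholders, not my actual values):</p>

<pre><code># ~/.config/rclone/rclone.conf (sketch)
[pcloud]
type = pcloud
# EU accounts may additionally need: hostname = eapi.pcloud.com
token = {"access_token":"***","token_type":"bearer","expiry":"..."}
</code></pre>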

<p>The next step was configuring the Borg backup repository on pCloud.</p>

<p>My borg configuration points to the backup folder on pCloud:</p>

<pre><code># ~/.config/borg/config 
BORG_REPO=&#39;rclone:pcloud:backup-folder&#39;
BORG_PASSPHRASE=&#39;***&#39;
PATTERNSFILE=&#39;/root/.config/borg/patterns.lst&#39;
</code></pre>

<p>I also maintain a <a href="https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-patterns">patterns file</a>, but won&#39;t go into details here.</p>
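<p>For readers who have never seen one, the general shape of such a patterns file is roughly this (the paths are made-up examples, not my actual list):</p>

<pre><code># /root/.config/borg/patterns.lst (hypothetical example)
# R = root to recurse into, + = include, - = exclude (first match wins)
R /srv
+ /srv/photos
- /srv/**/.cache
- /srv/tmp
</code></pre>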

<pre><code class="language-bash">borg repo-create --encryption=repokey-chacha20-poly1305
</code></pre>

<p>⚠️ Note that <code>repo-create</code>, like many other Borg2 commands, is similar to but different from its Borg v1 counterpart. A little confusing when reading the docs, but you&#39;ll get the hang of it.</p>
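<p>A few of the renames I stumbled over, as a rough cheat sheet (double-check against <code>borg --help</code> on your version, the beta commands still change):</p>

<pre><code>borg init --encryption=...   -&gt;  borg repo-create --encryption=...
borg list REPO               -&gt;  borg repo-list
borg list REPO::ARCHIVE      -&gt;  borg list ARCHIVE
borg delete REPO             -&gt;  borg repo-delete
</code></pre>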

<p>I chose the <a href="https://borgbackup.readthedocs.io/en/master/usage/repo-create.html#choosing-an-encryption-mode"><code>chacha20-poly1305</code> encryption mode</a>, because it was reported as the fastest algorithm on my Turris. You can measure the speed of the supported algorithms by running:</p>

<pre><code class="language-bash">borg benchmark cpu
</code></pre>

<p>This is yet another convenient feature of Borg2.</p>

<p>Lastly, I scheduled the backup with a systemd timer, running a variation of <code>borg create</code> combined with a <code>prune</code> command to remove old backups.</p>
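<p>As a sketch of that setup (the unit names, venv path and retention values here are hypothetical, not my exact configuration):</p>

<pre><code># /etc/systemd/system/borg-backup.service (sketch)
[Unit]
Description=Borg2 backup to pCloud via rclone

[Service]
Type=oneshot
# provides BORG_REPO, BORG_PASSPHRASE, PATTERNSFILE
EnvironmentFile=/root/.config/borg/config
ExecStart=/opt/borg-venv/bin/borg create --compression lz4 \
    --patterns-from ${PATTERNSFILE} srv-{now:%%Y-%%m-%%d}
ExecStartPost=/opt/borg-venv/bin/borg prune --keep-daily 7 --keep-weekly 4

# /etc/systemd/system/borg-backup.timer (sketch)
[Unit]
Description=Nightly Borg2 backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
</code></pre>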

<p>So far, so good. I&#39;m happy that I finally found a suitable backup solution. The process seems quite slow, though. My <code>borg create</code> command includes <code>--compression lz4</code> (which should be <a href="https://en.wikipedia.org/wiki/Speedy_Gonzales">Speedy Gonzales</a>), but more than 8 hours for the first backup? Come on..</p>

<p><img src="https://upload.wikimedia.org/wikipedia/en/thumb/f/fe/Speedy_Gonzales.svg/220px-Speedy_Gonzales.svg.png" alt="speedy-gonzales"></p>

<p>I found a small signal trick to check the current state of the upload. This <a href="https://github.com/borgbackup/borg/issues/2419">shows the file currently being uploaded</a>:</p>

<pre><code class="language-bash">kill -s USR1 $(pidof python) &amp;&amp; journalctl -eu borg-backup | tail -1
</code></pre>

<p>I could also contribute a <a href="https://github.com/borgbackup/borg/pull/8586">small improvement</a> to the Borg docs regarding the compilation of the dependencies required for Borg2 🎉 so the hassle was worth it.</p>

<p>To automate the entire procedure, I created a small Packer build script that can rebuild the LXC container from scratch whenever needed. Essentially, it runs the commands to install the <a href="https://borgbackup.readthedocs.io/en/master/installation.html#debian-ubuntu">prerequisites for Borg on Debian</a>, followed by the installation with pip in a virtual environment and the configuration of rclone.</p>

<h3 id="edit-2024-12-17">Edit 2024-12-17</h3>

<p>Turris Omnia Borg benchmark:</p>

<pre><code>Chunkers =======================================================
buzhash,19,23,21,4095    1GB        12.052s
fixed,1048576            1GB        2.655s
Non-cryptographic checksums / hashes ===========================
xxh64                    1GB        2.394s
crc32 (zlib)             1GB        2.970s
Cryptographic hashes / MACs ====================================
hmac-sha256              1GB        10.785s
blake2b-256              1GB        24.264s
Encryption =====================================================
aes-256-ctr-hmac-sha256  1GB        39.176s
aes-256-ctr-blake2b      1GB        51.622s
aes-256-ocb              1GB        34.483s
chacha20-poly1305        1GB        12.335s
KDFs (slow is GOOD, use argon2!) ===============================
pbkdf2                   5          1.969s
argon2                   5          7.606s
Compression ====================================================
lz4          0.1GB      0.239s
zstd,1       0.1GB      0.739s
zstd,3       0.1GB      0.923s
zstd,5       0.1GB      16.528s
zstd,10      0.1GB      26.171s
zstd,16      0.1GB      51.617s
zstd,22      0.1GB      64.239s
zlib,0       0.1GB      0.703s
zlib,6       0.1GB      15.758s
zlib,9       0.1GB      16.217s
lzma,0       0.1GB      88.872s
lzma,6       0.1GB      114.931s
lzma,9       0.1GB      96.051s
msgpack ========================================================
msgpack      100k Items 0.818s
</code></pre>

<p>For comparison, on my x230, where <a href="https://borgbackup.readthedocs.io/en/master/usage/repo-create.html#choosing-an-encryption-mode">I would choose blake2 for hashing (<code>blake2-chacha20-poly1305</code>)</a>:</p>

<pre><code>Chunkers =======================================================
buzhash,19,23,21,4095    1GB        1.360s
fixed,1048576            1GB        0.134s
Non-cryptographic checksums / hashes ===========================
xxh64                    1GB        0.097s
crc32 (zlib)             1GB        0.406s
Cryptographic hashes / MACs ====================================
hmac-sha256              1GB        3.474s
blake2b-256              1GB        1.980s
Encryption =====================================================
aes-256-ctr-hmac-sha256  1GB        3.901s
aes-256-ctr-blake2b      1GB        3.938s
aes-256-ocb              1GB        0.572s
chacha20-poly1305        1GB        1.251s
KDFs (slow is GOOD, use argon2!) ===============================
pbkdf2                   5          0.362s
argon2                   5          0.434s
Compression ====================================================
lz4          0.1GB      0.022s
zstd,1       0.1GB      0.041s
zstd,3       0.1GB      0.062s
zstd,5       0.1GB      0.104s
zstd,10      0.1GB      0.188s
zstd,16      0.1GB      13.794s
zstd,22      0.1GB      14.795s
zlib,0       0.1GB      0.067s
zlib,6       0.1GB      2.735s
zlib,9       0.1GB      2.740s
lzma,0       0.1GB      18.819s
lzma,6       0.1GB      36.049s
lzma,9       0.1GB      30.293s
msgpack ========================================================
msgpack      100k Items 0.258s
</code></pre>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/borg2-backup-with-rclone</guid>
      <pubDate>Mon, 16 Dec 2024 21:27:15 +0000</pubDate>
    </item>
    <item>
      <title>Post-quantum TLS: Ready or not, here I come.. 🎵</title>
      <link>https://write.in0rdr.ch/post-quantum-tls-ready-or-not-here-i-come</link>
      <description>&lt;![CDATA[I was looking into the state of post-quantum (PQ) TLS lately. This short article summarizes how you can create PQ-ready TLS certificates today.&#xA;&#xA;#tls #pqc #PKI #certificates&#xA;!--more--&#xA;&#xA;liboqs is a library for prototyping and experimenting with quantum-resistant cryptography. It&#39;s used by the unsupported fork of OpenSSL of the Open Quantum Safe (OQS) project. With that fork, we can easily create TSL certificates that can be used in a PQ-secure client-server TLS communication.&#xA;&#xA;First you have to build the library, liboqs. On NixOS, I use these dependencies for the build:&#xA;&#xA;shell.liboqs.nix&#xA;{ pkgs ? import nixpkgs {} }:&#xA;  pkgs.mkShell {&#xA;    nativeBuildInputs = with pkgs.buildPackages; [&#xA;      cmake&#xA;      pkg-config&#xA;      openssl&#xA;      ninja&#xA;      libtool&#xA;      gcc&#xA;    ];&#xA;    buildInputs = with pkgs.buildPackages; [ openssl ];&#xA;    LDLIBRARYPATH = pkgs.lib.makeLibraryPath [ pkgs.openssl ];&#xA;}&#xA;&#xA;The build process itself is straight forward. 
It involves building liboqs and the OpenSSL fork:&#xA;prepare environment&#xA;$ nix-shell shell.liboqs.nix&#xA;&#xA;clone openssl fork&#xA;$ git clone --branch OQS-OpenSSL111-stable \&#xA;   https://github.com/open-quantum-safe/openssl.git openssl.git&#xA;&#xA;clone liboqs&#xA;$ git clone -b main https://github.com/open-quantum-safe/liboqs.git liboqs.git&#xA;&#xA;$ cd liboqs.git&#xA;&#xA;build liboqs into the folder of the openssl fork (CMAKEINSTALLPREFIX)&#xA;$ cmake -GNinja -DCMAKEINSTALLPREFIX=$HOME/Downloads/pqc/openssl.git/&#xA;&#xA;openssl   = 3.0.0 (3.0.14)&#xA;$ oqs -DOQSUSEOPENSSL=OFF ..&#xA;&#xA;$ ninja&#xA;$ ninja install&#xA;&#xA;build openssl fork&#xA;$ cd ../openssl.git&#xA;$ ./Configure no-shared linux-x8664 -lm&#xA;$ make -j&#xA;&#xA;$ apps/openssl version&#xA;OpenSSL 1.1.1u  30 May 2023, Open Quantum Safe 2023-07&#xA;&#xA;Luckily, the project includes short instructions on how to use the PQ-ready OpenSSL version to create web server certificates:&#xA;&#xA;create hybrid rsa/dilithium CA with the provided ssl config&#xA;$ apps/openssl req -x509 -new -newkey rsa3072dilithium2 \&#xA;   -keyout rsa3072dilithium2CA.key -out rsa3072dilithium2CA.crt \&#xA;   -nodes -subj &#34;/CN=oqstest CA&#34; -days 365 -config apps/openssl.cnf&#xA;&#xA;check CA certificate&#xA;$ apps/openssl x509 -in rsa3072dilithium2CA.crt -noout -text&#xA;&#xA;create hybrid rsa/dilithium server cert&#xA;$ apps/openssl req -new -newkey rsa3072dilithium2 \&#xA;   -keyout rsa3072dilithium2srv.key -out rsa3072dilithium2srv.csr \&#xA;   -nodes -subj &#34;/CN=oqstest server&#34; -config apps/openssl.cnf&#xA;&#xA;sign server cert with CA cert&#xA;$ apps/openssl x509 -req -in rsa3072dilithium2srv.csr \&#xA;   -out rsa3072dilithium2srv.crt -CA rsa3072dilithium2CA.crt \&#xA;   -CAkey rsa3072dilithium2CA.key -CAcreateserial -days 365&#xA;&#xA;check server cert&#xA;$ apps/openssl x509 -in rsa3072dilithium2srv.crt -noout -text&#xA;&#xA;run the server&#xA;$ apps/openssl sserver -cert 
rsa3072dilithium2srv.crt \&#xA;   -key rsa3072dilithium2srv.key -www -tls13&#xA;run the client with kyber KEX&#xA;apps/openssl sclient -groups p384kyber768 -CAfile rsa3072dilithium2CA.crt&#xA;&#xA;What I noticed during playing with the new algorithms: The term &#34;Hybrid&#34; does not mean you can choose the type of certificate for the Key exchange (KEX) or signature verficiation SIG standard. It simply means that you need both. Think of it as a fallback. If Kyber (KEX) or Dilithium (SIG) would turn out to not be that secure as everyone thought, your TLS communication (key exchange and signature verification) will still be backed by a proven industry standard algorithm (RSA or ECDSA), because you will always need to apply both algorithms to verify the signature or decrypt the traffic. Of course, this has impact on performance (e.g., time to create, encrypt/decrypt and/or verify).&#xA;&#xA;Lastly, I also checked out how I can sign a message in PQ-safe way using the Cryptographic Message Syntax (CMS):&#xA;&#xA;sign file&#xA;$ apps/openssl dgst -sign rsa3072dilithium2srv.key -sha256 \&#xA;   -out binary.sig -binary binary&#xA;&#xA;extract pubkey&#xA;$ apps/openssl x509 -in rsa3072dilithium2srv.crt -noout \&#xA;   -pubkey   rsa3072dilithium2srv.pem&#xA;&#xA;check signature&#xA;$ apps/openssl dgst -verify rsa3072dilithium2srv.pem -sha256 \&#xA;   -signature binary.sig -binary binary&#xA;&#xA;Of course we could go on playing with these demos, for instance, by building an application (like Nginx or curl, see oqs-demos) with support for the new algorithms. Oh, and don&#39;t forget VPNs..&#xA;&#xA;There is also a website that shows you where these new algorithms typically fail: https://tldr.fail. 
Also, I was wondering, when I can simply request these hybrid certifcates from my known and loved HashiCorp Vault PKI 🤗?&#xA;&#xA;https://github.com/hashicorp/vault/issues/27239&#xA;https://www.hashicorp.com/blog/nist-s-post-quantum-cryptography-standards-our-plans&#xA;&#xA;It will take some time, but I&#39;m ready (also enabled that feature toggle in my Firefox to let servers now). Let me know what you think about the topic.&#xA;&#xA;(now that I wrote the blog post I can go ahead and delete that temporary folder on my Desktop)&#xA;&#xA;div style=&#34;text-align:center; font-size: 0.8em&#34;&#xD;&#xA;a href=&#34;https://write.in0rdr.ch/feed&#34;&amp;#128732; RSS/a | a href=&#34;https://m.in0rdr.ch/in0rdr&#34;&amp;#128024; Fediverse/a | a href=&#34;https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch&#34;&amp;#128172; XMPP/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>I was looking into the state of <a href="https://www.microsoft.com/en-us/research/project/post-quantum-tls/">post-quantum (PQ) TLS</a> lately. This short article summarizes how you can create PQ-ready TLS certificates today.</p>

<p><a href="https://write.in0rdr.ch/tag:tls" class="hashtag"><span>#</span><span class="p-category">tls</span></a> <a href="https://write.in0rdr.ch/tag:pqc" class="hashtag"><span>#</span><span class="p-category">pqc</span></a> <a href="https://write.in0rdr.ch/tag:PKI" class="hashtag"><span>#</span><span class="p-category">PKI</span></a> <a href="https://write.in0rdr.ch/tag:certificates" class="hashtag"><span>#</span><span class="p-category">certificates</span></a>
</p>

<p><a href="https://github.com/open-quantum-safe/liboqs">liboqs</a> is a library for prototyping and experimenting with quantum-resistant cryptography. It&#39;s used by the <a href="https://github.com/open-quantum-safe/openssl"><em>unsupported fork</em> of OpenSSL</a> of the Open Quantum Safe (OQS) project. With that fork, we can easily create TLS certificates for PQ-secure client-server TLS communication.</p>

<p>First you have to build the library, <code>liboqs</code>. On NixOS, I use these dependencies for the build:</p>

<pre><code class="language-nix"># shell.liboqs.nix
{ pkgs ? import &lt;nixpkgs&gt; {} }:
  pkgs.mkShell {
    nativeBuildInputs = with pkgs.buildPackages; [
      cmake
      pkg-config
      openssl
      ninja
      libtool
      gcc
    ];
    buildInputs = with pkgs.buildPackages; [ openssl ];
    LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [ pkgs.openssl ];
}
</code></pre>

<p>The <a href="https://github.com/open-quantum-safe/openssl/tree/OQS-OpenSSL_1_1_1-stable?tab=readme-ov-file#step-1-build-and-install-liboqs">build process</a> itself is straightforward. It involves building <code>liboqs</code> and the OpenSSL fork:</p>

<pre><code class="language-bash"># prepare environment
$ nix-shell shell.liboqs.nix

# clone openssl fork
$ git clone --branch OQS-OpenSSL_1_1_1-stable \
   https://github.com/open-quantum-safe/openssl.git openssl.git

# clone liboqs
$ git clone -b main https://github.com/open-quantum-safe/liboqs.git liboqs.git

$ cd liboqs.git

# build liboqs into the folder of the openssl fork (CMAKE_INSTALL_PREFIX),
# with OQS_USE_OPENSSL=OFF because the system openssl is &gt;= 3.0.0 (3.0.14)
$ cmake -GNinja -DCMAKE_INSTALL_PREFIX=$HOME/Downloads/pqc/openssl.git/oqs \
   -DOQS_USE_OPENSSL=OFF ..

$ ninja
$ ninja install

# build openssl fork
$ cd ../openssl.git
$ ./Configure no-shared linux-x86_64 -lm
$ make -j

$ apps/openssl version
OpenSSL 1.1.1u  30 May 2023, Open Quantum Safe 2023-07
</code></pre>

<p>Luckily, the project includes <a href="https://github.com/open-quantum-safe/openssl/tree/OQS-OpenSSL_1_1_1-stable?tab=readme-ov-file#running">short instructions</a> on how to use the PQ-ready OpenSSL version to create web server certificates:</p>

<pre><code class="language-bash"># create hybrid rsa/dilithium CA with the provided ssl config
$ apps/openssl req -x509 -new -newkey rsa3072_dilithium2 \
   -keyout rsa3072_dilithium2_CA.key -out rsa3072_dilithium2_CA.crt \
   -nodes -subj &#34;/CN=oqstest CA&#34; -days 365 -config apps/openssl.cnf

# check CA certificate
$ apps/openssl x509 -in rsa3072_dilithium2_CA.crt -noout -text

# create hybrid rsa/dilithium server cert
$ apps/openssl req -new -newkey rsa3072_dilithium2 \
   -keyout rsa3072_dilithium2_srv.key -out rsa3072_dilithium2_srv.csr \
   -nodes -subj &#34;/CN=oqstest server&#34; -config apps/openssl.cnf

# sign server cert with CA cert
$ apps/openssl x509 -req -in rsa3072_dilithium2_srv.csr \
   -out rsa3072_dilithium2_srv.crt -CA rsa3072_dilithium2_CA.crt \
   -CAkey rsa3072_dilithium2_CA.key -CAcreateserial -days 365

# check server cert
$ apps/openssl x509 -in rsa3072_dilithium2_srv.crt -noout -text

# run the server
$ apps/openssl s_server -cert rsa3072_dilithium2_srv.crt \
   -key rsa3072_dilithium2_srv.key -www -tls1_3
</code></pre>

<pre><code class="language-bash"># run the client with kyber KEX
apps/openssl s_client -groups p384_kyber768 -CAfile rsa3072_dilithium2_CA.crt
</code></pre>

<p>What I noticed while playing with the new algorithms: the term “Hybrid” does not mean you can choose which algorithm handles the key exchange (<code>&lt;KEX&gt;</code>) or signature verification (<code>&lt;SIG&gt;</code>); it means you need both. Think of it as a fallback: if Kyber (<code>&lt;KEX&gt;</code>) or Dilithium (<code>&lt;SIG&gt;</code>) should turn out not to be as secure as everyone thought, your TLS communication (key exchange and signature verification) is still backed by a proven industry-standard algorithm (RSA or ECDSA), because both algorithms are always applied to verify the signature or decrypt the traffic. Of course, this has an impact on performance (e.g., time to create, encrypt/decrypt and/or verify).</p>

<p>Lastly, I also checked out how I can sign a message in PQ-safe way using the <a href="https://en.wikipedia.org/wiki/Cryptographic_Message_Syntax">Cryptographic Message Syntax (CMS)</a>:</p>

<pre><code class="language-bash"># sign file
$ apps/openssl dgst -sign rsa3072_dilithium2_srv.key -sha256 \
   -out binary.sig -binary binary

# extract pubkey
$ apps/openssl x509 -in rsa3072_dilithium2_srv.crt -noout \
   -pubkey &gt; rsa3072_dilithium2_srv.pem

# check signature
$ apps/openssl dgst -verify rsa3072_dilithium2_srv.pem -sha256 \
   -signature binary.sig -binary binary
</code></pre>

<p>Of course we could go on playing with these demos, for instance, by building an application (like Nginx or curl, see <a href="https://github.com/open-quantum-safe/oqs-demos"><code>oqs-demos</code></a>) with support for the new algorithms. Oh, and don&#39;t forget VPNs..</p>

<p>There is also a website that shows you where these new algorithms typically fail: <a href="https://tldr.fail">https://tldr.fail</a>. Also, I was wondering when I can simply request these hybrid certificates from my known and loved <a href="https://developer.hashicorp.com/vault/docs/secrets/pki">HashiCorp Vault PKI</a> 🤗?</p>
<ul><li><a href="https://github.com/hashicorp/vault/issues/27239">https://github.com/hashicorp/vault/issues/27239</a></li>
<li><a href="https://www.hashicorp.com/blog/nist-s-post-quantum-cryptography-standards-our-plans">https://www.hashicorp.com/blog/nist-s-post-quantum-cryptography-standards-our-plans</a></li></ul>

<p>It will take some time, but I&#39;m ready (I also enabled that feature toggle in my Firefox to let servers know). Let me know what you think about the topic.</p>

<p>(now that I wrote the blog post I can go ahead and delete that temporary folder on my Desktop)</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/post-quantum-tls-ready-or-not-here-i-come</guid>
      <pubDate>Mon, 07 Oct 2024 06:23:45 +0000</pubDate>
    </item>
    <item>
      <title>Bump NPM dependencies with Updatecli</title>
      <link>https://write.in0rdr.ch/bump-npm-dependencies-with-updatecli</link>
      <description>&lt;![CDATA[I built a new Jenkins pipeline based on Updatecli for updating the NPM packages in my hobby project MyHeats.&#xA;&#xA;#updatecli #pipeline #jenkins #myheats #nodejs #npm&#xA;!--more--&#xA;&#xA;I was looking for a way to automatically bump the version of the npm dependencies (package.json) whenever there is an update available. This is also important for security reasons (e.g., have a look at the output of npm audit from time to time to see the recent security issues in the dependencies).&#xA;&#xA;I was looking into Renovate and Dependabot, but neither of these scratched my itch of simple automatic dependency updates.&#xA;&#xA;A coworker suggested me to try Updatecli and it fits my workflows perfectly well. The Jenkins example on the projects website got me started. So I created a Jenkins shared library function to run my own build, which includes npm to perform the version bumps:&#xA;&#xA;A class to describe the updatecli stages: https://code.in0rdr.ch/jenkins-lib/file/src/Updatecli.groovy.html&#xA;&#xA;The scripted pipeline in the repository of the application loads the library and performs the version bumps to a new branch:&#xA;&#xA;The Jenkinsfile that makes use of the updatecli groovy library: https://code.in0rdr.ch/myheats/file/Jenkinsfile.html&#xA;&#xA;I did not even have to configure Updatecli a lot, because the autodiscovery feature automatically detects that this is a npm repository/project. The final version of my pipeline includes all the git/scm steps in the updatecli.d/default.yaml configuration file:&#xA;&#xA;Updatecli configuration file: https://code.in0rdr.ch/myheats/file/updatecli.d/default.yaml.html&#xA;&#xA;First I tried to perform the SCM/git steps in Jenkins checkout and sh steps. But I noticed it could be much sleeker by defining the SCM/git settings in the Updatecli config file directly. This way, updatecli takes care of the clone/checkout/push steps. 
Here is the extract from my previous pipeline with the &#34;manual git steps&#34; for comparison:&#xA;&#xA;// alternative approach I did not pursue any further&#xA;sh &#39;&#39;&#39;&#xA;git config --global user.name &#34;$GIT_AUTHOR_NAME&#34;&#xA;git config --global user.email &#34;$GIT_AUTHOR_EMAIL&#34;&#xA;&#39;&#39;&#39;&#xA;&#xA;dir(&#34;myheats.git-$BUILD_NUMBER&#34;) {&#xA;  // checkout update branch in new directory&#xA;  checkout scmGit(&#xA;      extensions: [localBranch(&#34;$branch&#34;)],&#xA;      userRemoteConfigs: [[url: &#39;https://git.in0rdr.ch/myheats.git&#39;]]&#xA;  )&#xA;&#xA;  updatecli.run(&#39;apply&#39;)&#xA;&#xA;  // commit changes&#xA;  sh &#39;&#39;&#39;&#xA;  git add -u&#xA;  git commit -m &#34;chore(updatecli-$BUILD_NUMBER): bump node modules&#34;&#xA;  git push -f -u origin &#34;$branch&#34;&#xA;  &#39;&#39;&#39;&#xA;}&#xA;&#xA;I definitely like the updatecli configuration better, since it keeps the actual pipeline tidy. Also, I like how you can use the {{ requiredEnv &#34;GIT_PASSWORD&#34; }} configuration in updatecli to read secrets from the environment. The Git credentials are sourced from OpenBao with Nomad workload identities.&#xA;&#xA;I hope the post is helpful for anyone who would like to give updatecli a try or configure a similar Jenkins pipeline.]]&gt;</description>
      <content:encoded><![CDATA[<p>I built a new Jenkins pipeline based on <a href="https://www.updatecli.io">Updatecli</a> for updating the NPM packages in my hobby project <a href="https://myheats.p0c.ch">MyHeats</a>.</p>

<p><a href="https://write.in0rdr.ch/tag:updatecli" class="hashtag"><span>#</span><span class="p-category">updatecli</span></a> <a href="https://write.in0rdr.ch/tag:pipeline" class="hashtag"><span>#</span><span class="p-category">pipeline</span></a> <a href="https://write.in0rdr.ch/tag:jenkins" class="hashtag"><span>#</span><span class="p-category">jenkins</span></a> <a href="https://write.in0rdr.ch/tag:myheats" class="hashtag"><span>#</span><span class="p-category">myheats</span></a> <a href="https://write.in0rdr.ch/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://write.in0rdr.ch/tag:npm" class="hashtag"><span>#</span><span class="p-category">npm</span></a>
</p>

<p>I was looking for a way to automatically bump the version of the npm dependencies (<code>package.json</code>) whenever there is an update available. This is also important for security reasons (e.g., have a look at the output of <code>npm audit</code> from time to time to see the recent security issues in the dependencies).</p>

<p>I looked into <a href="https://github.com/renovatebot/renovate">Renovate</a> and <a href="https://github.com/dependabot">Dependabot</a>, but neither scratched my itch for simple automatic dependency updates.</p>

<p>A coworker suggested I try <a href="https://www.updatecli.io">Updatecli</a>, and it fits my workflow perfectly. The <a href="https://www.updatecli.io/docs/automate/jenkins">Jenkins example</a> on the project's website got me started, so I created a <a href="https://www.jenkins.io/doc/book/pipeline/shared-libraries">Jenkins shared library function</a> to run my own build, which includes <code>npm</code>, to perform the version bumps:</p>
<ul><li>A class to describe the updatecli stages: <a href="https://code.in0rdr.ch/jenkins-lib/file/src/Updatecli.groovy.html">https://code.in0rdr.ch/jenkins-lib/file/src/Updatecli.groovy.html</a></li></ul>

<p>The scripted pipeline in the repository of the application loads the library and performs the version bumps to a new branch:</p>
<ul><li>The Jenkinsfile that makes use of the updatecli groovy library: <a href="https://code.in0rdr.ch/myheats/file/Jenkinsfile.html">https://code.in0rdr.ch/myheats/file/Jenkinsfile.html</a></li></ul>
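<p>For illustration, the resulting pipeline boils down to roughly the following sketch (hedged: the node label and the <code>diff</code> preview run are assumptions on my part; only the library class and <code>updatecli.run(&#39;apply&#39;)</code> are taken from the actual code linked above):</p>

<pre><code class="language-groovy">@Library(&#39;in0rdr-jenkins-lib@master&#39;) _

// Updatecli.groovy from the shared library wraps the updatecli CLI
def updatecli = new Updatecli(this)

node(&#39;podman&#39;) {
  checkout scm

  // preview the pending version bumps, then apply them;
  // clone/checkout/push are handled by the updatecli configuration
  updatecli.run(&#39;diff&#39;)
  updatecli.run(&#39;apply&#39;)
}
</code></pre>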

<p>I did not have to configure Updatecli much, because the <a href="https://www.updatecli.io/docs/core/autodiscovery">autodiscovery feature</a> automatically detects that this is an npm repository/project. The final version of my pipeline includes all the git/scm steps in the <code>updatecli.d/default.yaml</code> configuration file:</p>
<ul><li>Updatecli configuration file: <a href="https://code.in0rdr.ch/myheats/file/updatecli.d/default.yaml.html">https://code.in0rdr.ch/myheats/file/updatecli.d/default.yaml.html</a></li></ul>
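<p>For reference, the rough shape of such a manifest (a sketch based on the Updatecli docs, not a copy of my file; the branch and crawler settings are assumptions, while the <code>requiredEnv</code> templating for the Git credentials is what I actually use):</p>

<pre><code class="language-yaml">name: Bump npm dependencies

scms:
  default:
    kind: git
    spec:
      url: &#34;https://git.in0rdr.ch/myheats.git&#34;
      branch: main
      username: &#39;{{ requiredEnv &#34;GIT_USERNAME&#34; }}&#39;
      password: &#39;{{ requiredEnv &#34;GIT_PASSWORD&#34; }}&#39;

# autodiscovery detects the npm project and generates the
# targets that bump the versions in package.json
autodiscovery:
  scmid: default
  crawlers:
    npm: {}
</code></pre>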

<p>First I tried to perform the SCM/git steps with Jenkins <code>checkout</code> and <code>sh</code> steps, but I noticed it is much sleeker to define the SCM/git settings in the Updatecli config file directly. This way, updatecli takes care of the clone/checkout/push steps itself. Here is the extract from my previous pipeline with the “manual git steps” for comparison:</p>

<pre><code class="language-groovy">// alternative approach I did not pursue any further
sh &#39;&#39;&#39;
git config --global user.name &#34;$GIT_AUTHOR_NAME&#34;
git config --global user.email &#34;$GIT_AUTHOR_EMAIL&#34;
&#39;&#39;&#39;

dir(&#34;myheats.git-$BUILD_NUMBER&#34;) {
  // checkout update branch in new directory
  checkout scmGit(
      extensions: [localBranch(&#34;$branch&#34;)],
      userRemoteConfigs: [[url: &#39;https://git.in0rdr.ch/myheats.git&#39;]]
  )

  updatecli.run(&#39;apply&#39;)

  // commit changes
  sh &#39;&#39;&#39;
  git add -u
  git commit -m &#34;chore(updatecli-$BUILD_NUMBER): bump node modules&#34;
  git push -f -u origin &#34;$branch&#34;
  &#39;&#39;&#39;
}
</code></pre>

<p>I definitely like the <a href="https://code.in0rdr.ch/myheats/file/updatecli.d/default.yaml.html">updatecli configuration</a> better, since it keeps the actual pipeline tidy. Also, I like how you can use the <code>{{ requiredEnv &#34;GIT_PASSWORD&#34; }}</code> configuration in updatecli to read secrets from the environment. The Git credentials are sourced from OpenBao with Nomad workload identities.</p>

<p>I hope the post is helpful for anyone who would like to give updatecli a try or configure a similar Jenkins pipeline.</p>

<div style="text-align:center; font-size: 0.8em">
<a href="https://write.in0rdr.ch/feed">🛜 RSS</a> | <a href="https://m.in0rdr.ch/in0rdr">🐘 Fediverse</a> | <a href="https://chat.in0rdr.ch/#/guest?join=p0c@conference.in0rdr.ch">💬 XMPP</a>
</div>
]]></content:encoded>
      <guid>https://write.in0rdr.ch/bump-npm-dependencies-with-updatecli</guid>
      <pubDate>Fri, 26 Jul 2024 20:50:19 +0000</pubDate>
    </item>
  </channel>
</rss>