Categories
Computers

The curious case of binfmt for x86 emulation for ARM Docker

Seemingly identical configurations, different results. When two methods for setting up x86 emulation on ARM showed the exact same system configuration but behaved completely differently in Docker, I began questioning my system administration knowledge and my sanity – and briefly contemplated a new career as a blacksmith.

This is a debugging tale for those working with containers, and a reminder that things aren’t always what they seem in Linux, all with a big pinch of Read the Fine Manual, Always!

ARM with ArchiveTeam v2

Recently I got an email from a reader of the ARM images to help the Archive Team blog post from years ago, asking me about refreshing that project so it could be used again. Back then I was recompiling the ArchiveTeam’s Docker images to support ARM, so now I was looking at how things have changed in the intervening time. I’ve also become more lazy pragmatic since then, and was wondering whether the ArchiveTeam had just made some ARM or multi-arch images themselves, as I believe(d) they should. That led me to their FAQ entry about ARM images:

Can I run the Warrior on ARM or some other unusual architecture?

Not directly. We currently do not allow ARM (used on Raspberry Pi and M1 Macs) or other non-x86 architectures. This is because we have previously discovered questionable practices in the Wget archive-creating components and are not confident that they run correctly under (among other things) different endiannesses. […]

Set up QEMU with your Docker install and add --platform linux/amd64 to your docker run command.

This actually seems like a sensible stance – if they dug deep enough to see issues in wget, I had definitely been doing things naively before.

The guidance of installing QEMU seems sensible as well (we were doing a lot of that at balena), and it goes roughly like this:

  1. install binfmt
  2. install QEMU with statically compiled binaries
  3. load those binaries to emulate the platforms you want with the F / fix_binary flag

For those unfamiliar, binfmt_misc is a Linux kernel feature that allows non-native binary formats to be recognized and passed to user space applications. It’s what makes it possible to run ARM binaries on x86 systems and vice versa through emulation. The various flags are how the actual behaviour of binfmt is adjusted (F, P, C, O).
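As a rough illustration of how entries are registered (a hand-rolled sketch, not what any of the tools below do verbatim; the magic and mask are the standard x86_64 ELF header values that also show up later in this post, while the entry name and interpreter path are just examples):

# binfmt_misc lives in /proc; mount it if it is not mounted already
sudo mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc

# register an entry; the format is :name:type:offset:magic:mask:interpreter:flags
# M = match on magic bytes, F = "fix binary" (open the interpreter right away)
echo ':qemu-x86_64-demo:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-x86_64:F' | sudo tee /proc/sys/fs/binfmt_misc/register

# inspect, then remove the purely illustrative demo entry again
cat /proc/sys/fs/binfmt_misc/qemu-x86_64-demo
echo -1 | sudo tee /proc/sys/fs/binfmt_misc/qemu-x86_64-demo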

Docker advises using an image to set things up, that is, for example for the x86_64/amd64 platform, like this:

docker run --privileged --rm tonistiigi/binfmt --install amd64

My Raspberry Pi is running ArchLinuxARM, which installs systemd-binfmt to load the relevant emulation settings at boot time, and that seemed handy: with the docker method I had to run that command every time before I could run an emulated container, while with systemd things would be ready by the time Docker is ready to run (i.e. keeping the ArchiveTeam containers always on and restarting after reboots). So I had a strong incentive to use the systemd-based approach instead of the docker run based one.
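Roughly, checking what systemd-binfmt has registered (and re-registering after a config change) looks like this:

# formats currently registered with the kernel
ls /proc/sys/fs/binfmt_misc/

# config files systemd-binfmt reads (/etc/binfmt.d overrides /usr/lib/binfmt.d)
ls /usr/lib/binfmt.d/ /etc/binfmt.d/

# re-register everything after changing a .conf file
sudo systemctl restart systemd-binfmt.service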

Now comes the kicker 🤯:

  • the docker-installed binfmt setup worked and allowed running linux/amd64 containers
  • the systemd-binfmt-initiated setup worked for x86_64 binaries in the file system, but not in Docker, where the binaries just failed to run
  • both setups had identical output when looking at the config in /proc/sys/fs/binfmt_misc

When Same’s Not the Same

To see whether emulation works, the tonistiigi/binfmt container can be invoked without any arguments to show the status. For example, after setting things up with docker it would show:

$ docker run --privileged --rm tonistiigi/binfmt
{
  "supported": [
    "linux/arm64",
    "linux/amd64",
    "linux/amd64/v2",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
    "qemu-x86_64"
  ]
}

Here the supported section shows amd64 as it should, and their test of running an amd64 image to check whether the binaries run has the expected output:

$ docker run --rm --platform linux/amd64 -t alpine uname -m
x86_64

Going back to the alternative: after uninstalling that emulator and starting up systemd-binfmt, I can test the status again:

$ docker run --privileged --rm tonistiigi/binfmt
{
  "supported": [
    "linux/arm64",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
[...snip...]
    "qemu-x86_64",
[...snip...]
  ]
}

This shows that while the emulator is installed, Docker doesn’t consider the linux/amd64 platform supported, and this checks out when running the alpine image again as above:

$ docker run --rm --platform linux/amd64 -t alpine uname -m
exec /bin/uname: exec format error

Well, this doesn’t work.

The binfmt_misc docs in the Linux kernel documentation have plenty of info on the setup and use of that emulation function. For example, to check the configuration of the emulation setup, we can look at the contents of a file in the /proc filesystem:

$ cat /proc/sys/fs/binfmt_misc/qemu-x86_64
enabled
interpreter /usr/bin/qemu-x86_64
flags: POCF
offset 0
magic 7f454c4602010100000000000000000002003e00
mask fffffffffffefe00fffffffffffffffffeffffff

This was almost the same whether I used the docker-based setup or systemd-binfmt, with a slight difference: the flags are only PF when set up with systemd-binfmt, and POCF when set up with docker run. Even though the Docker docs only ask for the F flag, I wanted to make sure we were on equal footing, so I tried to modify the QEMU setup to match. This means overriding the qemu-x86_64.conf that is shipped by default:

  • Copy the config from /usr/lib/binfmt.d/qemu-x86_64.conf to /etc/binfmt.d/qemu-x86_64.conf (make sure the file has the same name to ensure this new file overrides the one from the lib folder)
  • Edit the end of the line from :FP to :FPOC
  • Restart systemd-binfmt (these steps are sketched as shell commands below)
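As shell commands, the override amounts to roughly this (a sketch; the sed expression assumes the shipped line ends in :FP):

# the /etc copy overrides the file of the same name under /usr/lib
sudo cp /usr/lib/binfmt.d/qemu-x86_64.conf /etc/binfmt.d/qemu-x86_64.conf

# change the trailing flags from FP to FPOC
sudo sed -i 's/:FP$/:FPOC/' /etc/binfmt.d/qemu-x86_64.conf

# pick up the new configuration
sudo systemctl restart systemd-binfmt.service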

After this, the runtime info in /proc/sys/fs/binfmt_misc/qemu-x86_64 was completely the same for both setups. So why the difference in behaviour?


More Debugging Ensued

I read through the source code of tonistiigi/binfmt on GitHub and saw that it doesn’t do anything fancy: it’s a pretty clear implementation of the `binfmt_misc` usage docs, using the same magic values as the QEMU shipped on my system. Good that there are no surprises, but also no hints about the difference.

I tried to replicate that setup process by translating it into Python and running it: still the same.

I recompiled the binary on my system and ran it outside of Docker: it worked the same way as the systemd-binfmt setup, x86_64 static binaries1 worked outside of Docker but not inside of it.

A sort-of breakthrough came when I tried out the dbhi/qus Docker images, which promise “qemu-user-static (qus) and containers, non-invasive minimal working setups”, and can do a similar emulator & platform support setup with:

docker run --rm --privileged aptman/qus -s -- -p x86_64

It was a lot slower to run (coming back to this later), but worked like a charm, just like Docker’s own recommendation. However, there was a difference in the outcome when I checked the runtime config info:

$ cat /proc/sys/fs/binfmt_misc/qemu-x86_64
enabled
interpreter /qus/bin/qemu-x86_64-static
flags: F
offset 0
magic 7f454c4602010100000000000000000002003e00
mask fffffffffffefe00fffffffffffffffffeffffff

It has just the apparently required F flag, but the interpreter points to /qus/bin/qemu-x86_64-static … which is not in the regular file system. Nevertheless alpine happily runs, just like my local static binaries.

How does this actually work, then?

Everything’s Illuminated

With all this, and with a better understanding of what the docs say, we have everything in place to explain all the behaviours above – things we had pointers to throughout, but not enough experience to put together.

So, the F flag was required by the Docker docs; what does that actually do?

F – fix binary

The usual behaviour of binfmt_misc is to spawn the binary lazily when the misc format file is invoked. However, this doesn’t work very well in the face of mount namespaces and changeroots, so the F mode opens the binary as soon as the emulation is installed and uses the opened image to spawn the emulator, meaning it is always available once installed, regardless of how the environment changes.

Because of this, if F is set, the interpreter entry in the runtime settings isn’t the path the interpreter will be called at, but where it was when the entry was registered – i.e. it’s irrelevant for the actual runtime.

The tonistiigi/binfmt image ships its own statically compiled qemu-* binaries, while the aptman/qus container fetches the right ones at runtime (hence the slowness), and the interpreter path is the binary’s location inside the container when the command is run. The binary is then kept open in memory, so the container can go away even though the interpreter path no longer refers to anything that exists.
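This is easy to see with the aptman/qus setup above (a quick sketch; the path is the one from the /proc entry shown earlier):

# the registered interpreter path does not exist on the host any more...
ls /qus/bin/qemu-x86_64-static   # -> No such file or directory

# ...yet emulation keeps working, because the kernel holds the interpreter open
docker run --rm --platform linux/amd64 -t alpine uname -m   # -> x86_64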

Why does the systemd-binfmt setup fail then? Well, of course, because the emulator it registers is a dynamically linked binary:

$ file /usr/bin/qemu-x86_64
/usr/bin/qemu-x86_64: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, BuildID[sha1]=a4b8a93a4361be61dfa34a0eab40083325853839, for GNU/Linux 3.7.0, stripped

… and because it’s dynamically linked, even if the F flag keeps it in memory, its library dependencies aren’t, so when run in Docker (which uses mount namespaces) it doesn’t have everything it needs to run…
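A quick way to see the dependencies that go missing inside a container’s mount namespace (paths will vary by distribution):

# list the shared libraries the dynamically linked emulator needs at exec time;
# none of these host paths exist inside the container, so even the held-open
# interpreter cannot actually be started there
ldd /usr/bin/qemu-x86_64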

And of course, ArchLinux spells this out:

Note: At present, Arch does not offer a full-system mode and statically linked variant (neither officially nor via AUR), as this is usually not needed.

Yes, “as this is usually not needed”. :)

Updated Setup and Looking Forward

Short of lobbying ArchLinux to have static QEMU2, what options do I have?

  • set up a systemd service to run the tonistiigi/binfmt container on startup (which is possible; a unit sketch follows this list)
  • get some static QEMU binaries and override the settings that systemd-binfmt uses
  • switch to another Linux distro that supports the Pi and the software I run, but also ships static QEMU builds
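For the first option, a minimal sketch of such a unit could look like this (the unit name is made up; it simply re-runs the installer image once Docker is up):

# /etc/systemd/system/binfmt-amd64.service (hypothetical)
[Unit]
Description=Register x86_64 emulation via tonistiigi/binfmt
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker run --privileged --rm tonistiigi/binfmt --install amd64

[Install]
WantedBy=multi-user.target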

All three are suboptimal, potentially fragile, and the third is way too much work. Still the second one was kinda fine:

cd $(mktemp -d)
docker create --name="tmp_$$" tonistiigi/binfmt
docker export tmp_$$ -o tonistiigi.tar
docker rm tmp_$$
tar -xf tonistiigi.tar --wildcards "*/qemu-x86_64"
# Copy the extracted static binary into place:
sudo cp usr/bin/qemu-x86_64 /usr/bin/qemu-x86_64-static

Then, just like we’ve overridden the upstream qemu-x86_64.conf before, we do it again:

  • Copy the config from /usr/lib/binfmt.d/qemu-x86_64.conf to /etc/binfmt.d/qemu-x86_64.conf (make sure the file has the same name to ensure this new file overrides the one from the lib folder)
  • Edit the end of the line from :/usr/bin/qemu-x86_64:FP to :/usr/bin/qemu-x86_64-static:FPOC (that is, updating the binary it points at, and the flags for good measure too; the resulting line is shown after this list)
  • As a bonus, we can update the :qemu-x86_64: at the front too, say to :qemu-x86_64-static:, to change the display name of the emulator without affecting any of the functionality; it will just rename the entry in /proc/sys/fs/binfmt_misc
  • Restart systemd-binfmt
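For reference, the edited /etc/binfmt.d/qemu-x86_64.conf then boils down to a single register-format line along these lines (with the magic and mask written as \x escapes of the values shown in /proc above):

:qemu-x86_64-static:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-x86_64-static:FPOC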

Then the check again:

$ cat /proc/sys/fs/binfmt_misc/qemu-x86_64-static
enabled
interpreter /usr/bin/qemu-x86_64-static
flags: POCF
offset 0
magic 7f454c4602010100000000000000000002003e00
mask fffffffffffefe00fffffffffffffffffeffffff

And the alpine-based checks work once more.

Lessons Learned

The details were all in plain sight, but I didn’t have enough experience to piece them together. The Docker-recommended image ships its own QEMU? What does that F flag actually do? Can you run binaries when you don’t have them anymore? Dynamic and static linking and the signs of their misbehaviours provide hints… But all this is coupled with confusion when expectations are broken (say, the interpreter doesn’t have to refer to a file path that exists right now) – at least until I started to question my expectations. Also, just being a heavy user of Docker doesn’t mean I’m knowledgeable about the relevant kernel functionality, and probably I should be more…

This whole process underlined my previous thoughts on Software Engineering when AI seems Everywhere, as I did try to debug things by rubber ducking with Claude: this time the hallucinations were through the roof (a metric tonne of non-existent systemd functionality, non-existent command line flags), which definitely sent me on a wild goose chase in a few cases. So even more care is needed; maybe a version of Hofstadter’s Law applies:

Imreh’s Law3: LLMs are always more wrong than you expect, even when you take into account Imreh’s Law.

In the end, Don’t Panic, make theories and try to prove them, and talk with anyone who listens, even when they are wrong, and you are more likely to get there4.

  1. I’ve downloaded static binaries from andrew-d/static-binaries; I recommend strings as something that’s quick and simple to use, e.g. ./strings /bin/sh | head, allowing fast iteration. ↩︎
  2. ArchLinux is x86 by default; for them it would be about emulating linux/arm64, linux/arm/v7, and linux/arm/v6 images. For ArchLinux ARM it would be different work, in the other direction. If only mainline Arch supported ARM, it would be a happier world (even if an even more complex one). ↩︎
  3. Tongue-in-cheek, of course. ↩︎
  4. And with this we just rediscovered the Feynman Algorithm, I guess. ↩︎
Categories
Admin Computers

ZFS on a Raspberry Pi

I have a little home server, just like many other geeks / nerds / programmers / technical people… It can be both useful and a learning experience, as well as a real chore; most of the time the balance is shifting between these two ends. Today I’m taking notes here on one aspect of that home server that swings widely between those two extremes.

When I say I have a home server, that might be too generous a description of the status quo: I have a pretty banged-up Raspberry Pi 3B. It’s running ArchLinux ARM, the 64-bit, AArch64 version – looking a bit more retro on the hardware front while pushing for more modernity on the software side, a mix that I find fun.

There are a handful of services running on the device — not that many, mostly limited by its *gulp* 1GB of memory; plenty of things I’d love to run just don’t co-locate well in such a tiny compartment. Besides the memory, it’s also limited by storage: the Raspberry Pi runs off an SD card, and those are both fragile and limited in size. If one wants to run a home file server, say using a handful of other SD cards lying around to expand the available storage, that becomes awkward very soon. To make that task less awkward (or to replace one kind of awkward with a more interesting one), I set out to set up a ZFS storage pool, using OpenZFS.

The idea

Why ZFS? In large part, to be able to credibly answer that question.

But with a single, more concrete reason: being able to build a more solid and expandable storage unit. ZFS can combine different storage units (a rough zpool sketch follows this list)

  • in a way that combats data errors, e.g. mirroring: this addresses the SD cards’ fragility
  • in a way that data can expand across all of them in a single file system: this addresses the SD cards’ size limitations
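In zpool terms that combination looks roughly like this (device names are placeholders for whatever the SD card readers and USB sticks show up as):

# create the pool with a mirrored pair first (data integrity)
sudo zpool create tank mirror /dev/sda /dev/sdb

# later, grow the same pool by adding another mirrored pair (capacity)
sudo zpool add tank mirror /dev/sdc /dev/sdd

sudo zpool status tank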

This sounds great in theory, and after a bit of trial and error I’ve made the following setup, relying on dynamic kernel modules for flexibility, and a hodgepodge of drives at hand for the storage.

The file system support is provided by the zfs-dkms package as a dynamic kernel module (DKMS), which means the kernel module required to manage that file system is recompiled for each new Linux kernel version as it is updated. This is handy in theory, as I can keep using the main kernel packages provided by the ArchLinux ARM team.
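In practice that boils down to something like this (a sketch; on ArchLinux ARM the zfs-dkms and zfs-utils packages come from the AUR or the archzfs repository, not the official repos):

# kernel headers matching the running kernel, needed for DKMS rebuilds
sudo pacman -S --needed linux-aarch64-headers
# the ZFS module source (DKMS) and the userspace tools, e.g. via an AUR helper
paru -S zfs-dkms zfs-utils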

For storage, I’ve started off with two SD cards in mirror mode (going for data integrity first). Later I found — and invested in — some large capacity USB sticks that bumped the storage size quite a bit. With these, the current ZFS pool looks like this:

Terminal screenshot of the 'zpool status' command.

It has already saved me — or rather my data — once when an SD card was acting up, though that’s par for the course. One very large benefit is that the main system card is being used less, so it will hopefully last longer.

The complications

Of course, it’s never this easy… With non-mainline kernel modules and with DKMS, every update is a bit of a gamble that can suddenly not pay off. That’s exactly what happened last year, when the module suddenly didn’t compile anymore on a new kernel version, and thus all that storage was sitting there, dumb and inaccessible. After digging into the issue, it came down to:

  1. the OpenZFS project being under Common Development and Distribution License (CDDL)
  2. the Linux kernel deliberately breaking non-GPL licensed code by starting to withhold certain floating point capabilities, because “this is not expected to be disruptive to existing users”.

This wasn’t great, leaving me as a user pretty much between a rock and a hard place, even if this is a hobby and not strictly speaking a production use case on my side.

Nonetheless, I kept it working by downgrading to a working version and skipping updates to the kernel packages.

Then, based on a suggestion, I patched the zfs-dkms package (rewriting the license entry in the META file) to make it look like a GPL-licensed module — which is fair game for one to do on their own machine. This is hacky, or let’s call it pragmatic.

--- META.prev   2024-02-28 08:42:21.526641154 +0800
+++ META        2024-02-28 08:42:36.435569959 +0800
@@ -4,7 +4,7 @@
 Version:       2.2.3
 Release:       1
 Release-Tags:  relext
-License:       CDDL
+License:       GPL
 Author:        OpenZFS
 Linux-Maximum: 6.7
 Linux-Minimum: 3.10
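If this needs repeating on every package update, the same change can be scripted instead of hand-editing the file (a sketch, run in the extracted source tree before building):

# make the module declare GPL so the kernel exports the needed symbols to it
sed -i 's/^License:[[:space:]]*CDDL/License:       GPL/' META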

Now, with the current 2.2.3 version, it seems like there’s an official fix-slash-workaround to get the module to compile, even if it’s not a full fix. From the linked merge request message I’m not fully convinced that this isn’t a fragile status quo, but it’s at least front of mind – good going for wider ARM hardware usage bringing out people’s willingness to fix things!

Future development

A while back, while working at an IoT software deployment & management company, I naturally had a lot of interesting hardware at hand to build things with (or wrestle with…). Nowadays I have things best described as spare parts, and thus loads of things are more fragile than they need to be, and gosh-it-takes-a-long-time to compile things on a Raspberry Pi 3 – making every kernel update some half an hour longer!

Likely the best move would be to upgrade to a (much more powerful) Raspberry Pi 5 and use an external NVMe drive, with which I’d have much less need for ZFS, at least for the original reasons. It would likely still be useful for other aspects (such as snapshotting, sending/receiving the drive data, compression, deduplication, etc.), changing the learning path away from multi-device support and towards the file system features.

If I wanted more storage in the existing system, I could also get rid of the mirrored SD cards and just use 4 large USB sticks (maybe in a RAIDZ setup) – a poor man’s NAS, I guess. Though there I’d worry a bit about needing sticks of the same size for this to work (unlike pooling, which has no same-size requirement), given the differences between supposedly same-sized products from different companies (likely locking me into the same brand and model across the board).

I also feel like I’m not using ZFS to its full potential. I know just enough to be dangerous… maybe that’s the generalist’s natural habitat?

Categories
Admin

Continuous integration testing of Arch User Repository packages

I maintain a couple of ArchLinux user-contributed packages on the Arch User Repository (AUR), and over time I’ve built out a bit of infrastructure around that to make the maintenance easier (and hopefully the results better). The core of it is automated building of packages in Continuous Integration, which catches a number of issues that would otherwise be harder to spot.

This write-up will go through the entire packaging process to make it easily reproducible.

Categories
Computers Lab

Dual Satellite NTP server with Navspark

A friend from NIST recently told me about a Raspberry Pi Stratum-1 NTP server project, and that reminded me of the experiments I did with the Navspark dual GPS+Beidou receiver module. Navspark is a small, Arduino-compatible module that besides GPS can also receive data from China’s Beidou 北斗 satellite navigation system, which is currently being built. I thought it would be fun to build a Beidou-powered Stratum-1 NTP server to see how it compares to GPS.

Hardware

To have a really good, satellite-powered reference clock, I need access to a 1-pulse-per-second (1PPS) signal from the receiver. The purely USB-connected receivers don’t really seem to do that yet (looks like plenty of opportunity there!), so instead I have to use separate hardware for it.

The Navspark module has a 1PPS pin (GPO3 below), and the only other pin I’ll really need is a serial pin to receive the NMEA stream of the satellite lock data (TXD1 below).

Detailed Navspark pinout with pin functions
Navspark pinout from the User Manual