
The curious case of binfmt for x86 emulation for ARM Docker

Seemingly identical configurations, different results. When two methods for setting up x86 emulation on ARM showed the exact same system configuration but behaved completely differently in Docker, I began questioning my system administration knowledge and my sanity – and briefly contemplated a new career as a blacksmith.

This is a debugging tale for those working with containers, and a reminder that things aren’t always what they seem in Linux, all with a strong reminder to Read the Fine Manual, Always!

ARM with ArchiveTeam v2

Recently I got an email from a reader of the ARM images to help the Archive Team blogpost from years ago, asking me about refreshing that project for use again. Back then I was recompiling the ArchiveTeam’s Docker images to support ARM, and so now I was looking at how things have changed in the intervening time. I’ve also become more lazy pragmatic since then, so I was wondering if the ArchiveTeam had just made some ARM or multi-arch images, as I believe(d) they should. That led me to their FAQ entry about ARM images:

Can I run the Warrior on ARM or some other unusual architecture?

Not directly. We currently do not allow ARM (used on Raspberry Pi and M1 Macs) or other non-x86 architectures. This is because we have previously discovered questionable practices in the Wget archive-creating components and are not confident that they run correctly under (among other things) different endiannesses. […]

Set up QEMU with your Docker install and add --platform linux/amd64 to your docker run command.

This actually seems like a sensible thing: if they dug deep enough to see issues in wget, I’d definitely been doing things naively before.

The guidance of installing QEMU seems sensible as well (we were doing a lot of that at balena), and it goes roughly like this:

  1. install binfmt
  2. install QEMU with statically compiled binaries
  3. load those binaries to emulate the platforms you want with the F / fix_binary flag

For those unfamiliar, binfmt_misc is a Linux kernel feature that allows non-native binary formats to be recognized and passed to user space applications. It’s what makes it possible to run ARM binaries on x86 systems and vice versa through emulation. The various flags are how the actual behaviour of binfmt is adjusted (F, P, C, O)
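
For a concrete picture of what gets loaded: registering a format boils down to writing a single colon-separated line of the form :name:type:offset:magic:mask:interpreter:flags to a special proc file. A minimal hand-rolled sketch, using the x86_64 ELF magic and mask values that show up later in this post (in practice the tools below do this for you, and the write fails if an entry with that name already exists):

# type M means "match on the magic bytes"; the trailing F is the fix-binary flag
printf '%s\n' ':qemu-x86_64:M::\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-x86_64:F' \
  | sudo tee /proc/sys/fs/binfmt_misc/register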

Docker advises using an image to set things up; for example, for the x86_64/amd64 platform, like this:

docker run --privileged --rm tonistiigi/binfmt --install amd64

My Raspberry Pi is running ArchLinuxARM, which uses systemd-binfmt to load the relevant emulation settings at boot time, and that seemed handy: with the docker method I had to run that command every time before I could run an emulated container, while with systemd everything would be ready by the time Docker is ready to run (i.e. keeping the ArchiveTeam containers always on and restarting after reboot). So I had a strong incentive to use the systemd-based approach instead of the docker run based one.
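
For reference, checking what systemd-binfmt has picked up needs nothing special:

# the unit that loads /usr/lib/binfmt.d/*.conf and /etc/binfmt.d/*.conf at boot
systemctl status systemd-binfmt.service
# one entry per registered format
ls /proc/sys/fs/binfmt_misc/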

Now comes the kicker 🤯:

  • the Docker-installed binfmt setup worked and allowed running linux/amd64 containers
  • the systemd-binfmt-initiated setup worked for x86_64 binaries in the file system, but not in Docker, where the binaries just failed to run
  • both setups showed identical configuration when looking at /proc/sys/fs/binfmt_misc

When Same’s Not the Same

To see whether emulation works, the tonistiigi/binfmt container can be invoked without any arguments to show the status. For example, after setting things up with docker it would show:

$ docker run --privileged --rm tonistiigi/binfmt
{
  "supported": [
    "linux/arm64",
    "linux/amd64",
    "linux/amd64/v2",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
    "qemu-x86_64"
  ]
}

Here the supported section shows amd64 as it should, and their suggested test of running an amd64 image to check that the binaries actually run gives the expected output:

$ docker run --rm --platform linux/amd64 -t alpine uname -m
x86_64

Going back to the alternative: after uninstalling that emulator and starting up systemd-binfmt, I can test the status again:

$ docker run --privileged --rm tonistiigi/binfmt
{
  "supported": [
    "linux/arm64",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
[...snip...]
    "qemu-x86_64",
[...snip...]
  ]
}

This shows that while the emulator is installed, Docker doesn’t consider the linux/amd64 platform supported, and this checks out when running the alpine image again as above:

$ docker run --rm --platform linux/amd64 -t alpine uname -m
exec /bin/uname: exec format error

Well, this doesn’t work.

The binfmt_misc docs in the Linux kernel documentation have plenty of info on the setup and use of that emulation function. For example, to check the configuration of the emulation setup, we can look at the contents of a file in the /proc filesystem:

$ cat /proc/sys/fs/binfmt_misc/qemu-x86_64
enabled
interpreter /usr/bin/qemu-x86_64
flags: POCF
offset 0
magic 7f454c4602010100000000000000000002003e00
mask fffffffffffefe00fffffffffffffffffeffffff

This was almost the same whether I used the Docker-based setup or systemd-binfmt, with a slight difference: the flags field is only PF when set up with systemd-binfmt, and POCF when things are set up with docker run. Even though the Docker docs only ask for the F flag, I wanted to make sure we were on equal footing, so I tried to modify the QEMU setup to match. This means overriding the qemu-x86_64.conf that is shipped by default:

  • Copy the config from /usr/lib/binfmt.d/qemu-x86_64.conf to /etc/binfmt.d/qemu-x86_64.conf (keeping the same file name ensures this new file overrides the one from the lib folder)
  • Edit the end of the line from :FP to :FPOC
  • restart systemd-binfmt (a command sketch of these steps follows below)
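
A minimal sketch of those steps as commands, assuming the stock Arch paths mentioned above:

sudo cp /usr/lib/binfmt.d/qemu-x86_64.conf /etc/binfmt.d/qemu-x86_64.conf
# change the trailing flags field from FP to FPOC
sudo sed -i 's/:FP$/:FPOC/' /etc/binfmt.d/qemu-x86_64.conf
sudo systemctl restart systemd-binfmt.service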

After this the output of the runtime info in /proc/sys/fs/binfmt_misc/qemu-x86_64 was completely the same. So why the difference in behaviour?


More Debugging Ensued

  • I’ve read through the source code of tonistiigi/binfmt on GitHub and seen that it doesn’t do anything fancy: it’s a quite clear implementation of the `binfmt_misc` usage docs, using the same magic values as the QEMU shipped on my system. Good, no surprise there, but also no hints of any difference.
  • I’ve tried to replicate its setup process by translating it into Python and running that: still the same.
  • I’ve recompiled the binary on my system and ran it outside of Docker: it worked the same way as the systemd-binfmt setup, that is, x86_64 static binaries1 worked outside of Docker but not inside of it.

A sort-of breakthrough came when I tried out the dbhi/qus Docker images, which promise “qemu-user-static (qus) and containers, non-invasive minimal working setups”, and can do a similar emulator & platform support setup with:

docker run --rm --privileged aptman/qus -s -- -p x86_64

It was a lot slower to run (I’ll come back to this later), but worked like a charm, just like Docker’s own recommendation. However, there was a difference in the outcome when I checked the runtime config info:

$ cat /proc/sys/fs/binfmt_misc/qemu-x86_64
enabled
interpreter /qus/bin/qemu-x86_64-static
flags: F
offset 0
magic 7f454c4602010100000000000000000002003e00
mask fffffffffffefe00fffffffffffffffffeffffff

It has just the apparently required F flag, but the interpreter points to /qus/bin/qemu-x86_64-static … which does not exist in the regular file system. Nevertheless alpine happily runs, just like my local static binaries did.

How does this actually work, then?

Everything’s Illuminated

With the above, and with a better understanding of what the docs say, we have everything in place to explain all the behaviours: there were pointers to the answer throughout, but I didn’t have enough experience to put them together.

So, the F flag was required by the Docker docs; what does it actually do?

F – fix binary

The usual behaviour of binfmt_misc is to spawn the binary lazily when the misc format file is invoked. However, this doesn’t work very well in the face of mount namespaces and changeroots, so the F mode opens the binary as soon as the emulation is installed and uses the opened image to spawn the emulator, meaning it is always available once installed, regardless of how the environment changes.

Because of this, if F is set, the interpreter entry in the runtime settings is not the path the interpreter will be called at, but merely where it lived when it was registered; in other words, it’s irrelevant for the actual runtime.

The tonistiigi/binfmt image ships its own statically compiled qemu-* binaries, while the aptman/qus container fetches the right ones at runtime (hence the slowness), and in both cases the interpreter path is the binary’s location inside the container at the moment the command is run. The binary is then kept open in memory, so the container can go away, with the interpreter path no longer referring to anything that exists.

Why does the systemd-binfmt setup fail then? Well, of course, because the QEMU it registers is a dynamically linked binary:

$ file /usr/bin/qemu-x86_64
/usr/bin/qemu-x86_64: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, BuildID[sha1]=a4b8a93a4361be61dfa34a0eab40083325853839, for GNU/Linux 3.7.0, stripped

… and because it’s dynamically linked, even if the F flag keeps it in memory, its library dependencies aren’t kept, so when it’s run inside Docker (which uses mount namespaces) it doesn’t have everything it needs to run…
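
An illustrative way to see what’s missing is to list those dependencies; they live on the host filesystem and are not visible from inside an amd64 container’s mount namespace, unlike the F-pinned binary itself:

ldd /usr/bin/qemu-x86_64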

And of course, ArchLinux spells this out:

Note: At present, Arch does not offer a full-system mode and statically linked variant (neither officially nor via AUR), as this is usually not needed.

Yes, “as this is usually not needed”. :)

Updated Setup and Looking Forward

Short of lobbying ArchLinux to have static QEMU2, what options do I have?

  • set up a systemd service to run the tonistiigi/binfmt container on startup (which is possible; see the unit sketch after this list)
  • get some static QEMU binaries and override the settings that systemd-binfmt uses
  • switch to another Linux distro that supports the Pi and the software I run, but also ships static QEMU builds
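
For the first option, a rough sketch of what such a unit could look like; the unit name, ordering, and wrapped command are my assumptions rather than a tested setup:

# /etc/systemd/system/docker-binfmt.service (hypothetical)
[Unit]
Description=Register x86_64 emulation via tonistiigi/binfmt
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker run --privileged --rm tonistiigi/binfmt --install amd64

[Install]
WantedBy=multi-user.target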

All three are suboptimal, potentially fragile, and the third is way too much work. Still, the second one turned out kinda fine:

cd $(mktemp -d)
# create (but don't start) a container from the image, then export its filesystem
docker create --name="tmp_$$" tonistiigi/binfmt
docker export tmp_$$ -o tonistiigi.tar
docker rm tmp_$$
# extract just the emulator binary from the exported filesystem
tar -xf tonistiigi.tar --wildcards "*/qemu-x86_64"
# Copy it out of the extracted usr/bin folder, with a -static suffix:
sudo cp usr/bin/qemu-x86_64 /usr/bin/qemu-x86_64-static
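
A quick sanity check that what we copied is indeed statically linked (the exact wording varies with the file utility version):

# should report "statically linked" rather than "dynamically linked, interpreter ..."
file /usr/bin/qemu-x86_64-static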

Then, just like we overrode the upstream qemu-x86_64.conf earlier, we do it again:

  • Copy the config from /usr/lib/binfmt.d/qemu-x86_64.conf to /etc/binfmt.d/qemu-x86_64.conf (make sure the file has the same name to ensure this new file overrides the one from the lib folder)
  • Edit the end of the line from :/usr/bin/qemu-x86_64:FP to :/usr/bin/qemu-x86_64-static:FPOC (that is, updating the binary it points at, and the flags for good measure too; the resulting line is sketched after this list)
  • As a bonus, you can update the :qemu-x86_64: at the front too, say to :qemu-x86_64-static:, to change the display name of the emulator without affecting any of the functionality; it will just rename the entry in /proc/sys/fs/binfmt_misc
  • restart systemd-binfmt
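
With the bonus rename applied, the resulting /etc/binfmt.d/qemu-x86_64.conf should contain a single line roughly like this (reconstructed from the magic and mask shown earlier; it must stay on one line in the file):

:qemu-x86_64-static:M::\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-x86_64-static:FPOC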

Then the check again:

$ cat /proc/sys/fs/binfmt_misc/qemu-x86_64-static
enabled
interpreter /usr/bin/qemu-x86_64-static
flags: POCF
offset 0
magic 7f454c4602010100000000000000000002003e00
mask fffffffffffefe00fffffffffffffffffeffffff

And the alpine-based checks work once more.

Lessons Learned

The details were all in plain sight, but I didn’t have enough experience to piece them together. The Docker-recommended image ships its own QEMU? What does that F flag actually do? Can you run binaries that you don’t have anymore? Dynamic and static linking, and the signs of their misbehaviour, were all there to provide hints… This was coupled with confusion when expectations were broken (say, that the interpreter doesn’t have to refer to a file path that exists right now), until I started to question those expectations. Also, just being a heavy user of Docker doesn’t mean I’m knowledgeable about the relevant kernel functionality, and probably I should be more…

This whole process underlined my previous thoughts on Software Engineering when AI seems Everywhere, as I did try to debug things by rubber-ducking with Claude: this time the hallucinations were through the roof (a metric tonne of non-existent systemd functionality, non-existent command line flags), and it definitely sent me on a wild goose chase in a few cases. So even more care is needed; maybe a version of Hofstadter’s Law:

Imreh’s Law3: LLMs are always more wrong than you expect, even when you take into account Imreh’s Law.

In the end, Don’t Panic, make theories and try to prove them, and talk with anyone who listens, even when they are wrong, and you are more likely to get there4.

  1. I’ve downloaded static binaries from andrew-d/static-binaries; I recommend strings as something that’s quick and simple to use, for example ./strings /bin/sh | head, allowing fast iteration. ↩︎
  2. ArchLinux is x86 by default, so for them it would be about emulating linux/arm64, linux/arm/v7, linux/arm/v6 images. For ArchLinux ARM it would be separate work in the other direction. If only mainline Arch supported ARM, it would be a happier world (even if an even more complex one). ↩︎
  3. Tongue-in-cheek, of course. ↩︎
  4. And with this we just rediscovered the Feynman Algorithm, I guess. ↩︎

Software Engineering when AI seems Everywhere

It’s pretty much impossible to miss the big push to use AI/LLM (Large Language Model) coding assistants for software engineers. Individual engineers, small and large companies seem to be going “all in” on this1. I’m generally wary of things that are this popular, as those often turn out more cargo cult than genuinely positive. So what’s a prudent thing to do as a software engineer? I believe the way ahead is a boring piece of advice that applies almost everywhere: instead of going easy, do more of the difficult stuff.

I genuinely think that putting the AI/LLM genie back into the bottle is unlikely (the same way as some people want the Internet, or smartphones, or cryptocurrencies put back into the bottle, which is also not really going to happen). That doesn’t mean that uncritical acceptance of the coding assistant tools should be the norm, au contraire: just like with any tool, one needs to discover when they are fit for the job, and when they are not. I have used GitHub Copilot for a while, am now digging into Cursor as it starts to conquer the workplace, and use ChatGPT & Claude for individual coding questions. I don’t think it’s controversial to say that all these tools have their “strengths and weaknesses”, and that currently the more complex and the more “production” the problem is, the further it is from a proof-of-concept, the less likely these tools are to be of any help. They do help, they can be a large force multiplier, but they are the biggest multiplier when one goes in with the least amount of input (knowledge, available time, requirements for the result…)


Refreshing Airplane Tracking Software With and Without AI

A bit like last time, this post is about a bit of programmer hubris, a bit of AI, a bit of failure… Though I also took away more lessons this time about software engineering, with or without fancy tools. This is about rabbit-holing myself into an old software project where I had very little know-how to go on…

The story starts with me rediscovering a DVB-T receiver USB stick that I’ve had for probably close to a decade. It’s been “barnacled” by time spent in the Taiwanese climate, so I wasn’t sure if it still worked, but it’s such a versatile tool that it was worth trying to revive it.

When these receivers function, they can receive digital TV (that’s the DVB-T), but also FM radio and DAB, and they can also act as a software defined radio (SDR). This last bit makes them able to receive all kinds of transmissions that are immediately quite high on the fun level, in particular airplane (ADS-B transmissions) and ship (AIS) tracking. Naturally, there are websites to do both if you just want to see it (for example Flightradar24 and MarineTraffic, respectively, are popular aggregators for that data, but there are tons), but doing your own data collection opens doors to all kinds of other use cases.

So on I went, trying to find what software tools people use these days with these receivers. Mine is a pretty simple one (find out everything about it by following the “RTL-SDR” keyword wherever you like to do that :) and I remembered there being many tools. However, time has passed, I’ve forgotten most of what I knew, and new projects have come and gone.

ADSBox

While I was searching I found the adsbox project, which was interesting on two counts: it kinda worked straight out of the box for me, while also having last been updated some 9 years ago, so it’s an old code base that tickles my “let’s maintain all the things!” drive…

The GitHub repo information of ADSBox: the last commits were around 9 years ago, and there are very few of them overall.

The tool is written mostly in C, and it also hosts its own server for a web interface, listing flights and (back in the day) supporting things like Google Maps and Google Earth.

The ADSBox interface showing a bunch of airplane information.
The adsbox plane listing interface.

Both the Google Maps and Earth parts seem completely broken: Maps has changed a lot since, as I also had to update my Taiwan WWII Map Overlays project over time (the requirement to use API keys to even load the map, changes to the JavaScript API…). Earth I haven’t tried, but I’m thinking that went the way of the dodo on the desktop?


Adventures into Code Age with an LLM

It’s a relaxed Saturday afternoon, and I just remembered some nerdy plots I’ve seen online for various projects, depicting “code age” over time: how does your repository change over the months and years, how much code still survives from the beginning till now, etc… Something like this made by the author of curl:

Curl’s code age distribution

It looks interesting and informative. And even though I don’t have codebases that have been around this long, there are plenty of codebases around me that are fast moving, so month-level (or in some cases week-level) cohorts could be interesting.
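
For orientation, the raw ingredient for such a chart is a commit timestamp for every line that survives today, which git blame can already produce per file. A hypothetical one-liner, with main.c as a stand-in file name:

# one committer timestamp (unix epoch) per surviving line of main.c
git blame --line-porcelain main.c | awk '/^committer-time / { print $2 }'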

One way to take this challenge on is to actually sit down and write the code. Another is to take a Large Language Model, say Claude, and try to get it to make the chart. Of course the challenge is different in nature. For this case, let’s put myself in the shoes of someone who says

I am more interested in the results than the process, and want to get to the results quicker.

Let’s see how far we can get with this attitude, and where it breaks down (probably no spoiler: it breaks down very quickly).

Note on the selection of the model: I’ve chosen Claude just because generally I have good experience with it these days, and it can share generated artefacts (like the relevant Python code) which is nice. And it’s a short afternoon. :) Otherwise anything else could work as well, though surely with varying results.

Version 1

Let’s kick it off with a quick prompt.

Prompt: How would you generate a chart from a git repository to show the age of the code? That is when the code was written and how much of it survives over time?

Claude quickly picked it up and made me a Python script, which is nice (that being my day-to-day programming language). I guess that’s generally a good assumption these days if one does data analytics anyways (asking for another language is left for another experiment).


Git login and commit signing with security

Doing software engineering (well-ish) is pretty hard to imagine without working in version control, which most of the time means git. In a practical setup of git there’s the question of how I get access to the code it stores (how do I “check things out”?), and optionally how others can verify that it was indeed me who made the changes (how do I “sign” my commits?). Recently I’ve changed my mind about what’s a good combination for these two aspects, and what tools I’m using for them.

Access Options

In broad terms git repositories can be checked out either through the HTTP protocol or through the SSH protocol. Both have pros and cons.

Having two-factor authentication (2FA) made HTTP access more secure but also meant more setup (no more direct username/password usage; instead one needs to create extra access keys used in place of passwords). Credentials were still stored in plain text (as far as I know) on the machine in some git config files.

The SSH setup was in some sense the more practical one (creating keys on your own machine, and just passing in the public key portion), though there were still secrets in plain text on my machine (as I don’t think the majority of people used password-protected SSH keys, due to their user experience). This is what I’ve used for years: add a new SSH key for each new machine that I’m working on, check code out through ssh+git, and work away.

Recently I came across the git-credential-manager tool, which is supposed to make HTTP access nicer (for various git servers and services) and get rid of plain text secrets. Of course this is not the first or only tool that handles git credentials, but being made by GitHub, it has some more clout. This made me re-evaluate what options I have for SSH as well, looking for similar security improvements.
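
As an aside, getting started with git-credential-manager is meant to be a one-liner, assuming the tool is installed and still ships its documented configure subcommand:

# registers itself as the credential helper in the global git config
git-credential-manager configure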

Thus I’ve found that both 1Password and KeePassXC (the two main password managers I use) have ssh-agent integration, and so can store SSH keys and give access to them as needed. No more plain text (or password-protected) private keys on disk with these either!
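
As an illustration of that route, pointing SSH at the password manager’s agent socket is all git-over-SSH needs. The socket path below is the one 1Password documents for Linux at the time of writing, so treat it as an example to adjust for your own platform and manager:

# ~/.ssh/config
Host github.com
    IdentityAgent ~/.1password/agent.sock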

Now it seems there are two good new options to evaluate, and for the full picture I looked at how the code signing options work in this context as well.