Categories
Computers Thinking

Installing Arch Linux as an incremental game

Once in a decade or so the time comes to install (or reinstall) my personal computer. This time the occasion is getting a new laptop, something more modern, something more capable. The time that has passed and the changes in technology since the last time I had to do this mean that the installation is likely familiar, but still different – sometimes subtly, sometimes in unrecognisable ways.

Setting up my laptop is now in its 3rd day, and the experience made me think of incremental games. Incrementals1 are games where you make some progress in gathering some resource, then have to reset that progress in exchange for some small buff or gain. You then start again – but better. The cycle repeats and you might have the same experiences countless times, but after a while the game can be much faster or even unrecognisable due to the accumulated effects.

While I was installing Arch Linux this time, I went through the following cycle:

  • Partitioned my disk and installed the system, but then couldn’t boot into it
  • Redid the partitioning, system, and bootloader install; now I could boot into it, but didn’t have any network access
  • Redid the system config; now I had network access, but had to figure out what desktop environment I was going to run, and what I’d need for that. A hot mess ensued after trying out all the main desktop environments for kicks
  • Reinstalled the system, cleaner, with my desktop environment of choice; now the high-resolution display made everything tiny, gigantic, and/or blurred
  • Sorted out most of the sizes of things, had network, had Bluetooth, but the sound and the media buttons didn’t work
  • Sorted out sound; now the power management locked me out while watching a video…

… and so on. I know I have a few more rounds (a few more “crunches”) to do to get there, and the multilingual typing input setup will be a doozie with bopomofo, but I’m getting better and feeling better every round – that’s how a good incremental game goes. A power-up there, the know-how of a semi-obscure quality-of-life config here2.
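For the curious, the core of each such round is the same handful of commands from the official install guide. A minimal sketch, with device names as placeholders for whatever your disk is called:

    # partition and format (device names are examples – adapt to your disk)
    cfdisk /dev/nvme0n1
    mkfs.fat -F 32 /dev/nvme0n1p1   # EFI system partition
    mkfs.ext4 /dev/nvme0n1p2        # root
    mount /dev/nvme0n1p2 /mnt
    mount --mkdir /dev/nvme0n1p1 /mnt/boot
    # install the base system and generate the fstab
    pacstrap /mnt base linux linux-firmware networkmanager
    genfstab -U /mnt >> /mnt/etc/fstab
    # chroot in and set up a bootloader (systemd-boot here)
    arch-chroot /mnt
    bootctl install

The buffs accumulate in everything that comes after these lines.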

Of course I could have grabbed an Ubuntu or Fedora image, installed everything in way less time than it takes to write about the experience, and could already be using it – but that’s a whole different game3.

The games we choose to play show our values. On an optimistic day I feel I value increased knowledge; on more realistic days it certainly seems like procrastination. Let’s keep this in mind for when I finally have a high enough level of Linux buffs that they change the game mechanics and I get to do something with my computer.

  1. Keep away from Universal Paperclips and give a very wide berth to Antimatter Dimensions ↩︎
  2. Such as enabling tap-to-click on the touchpad, without needing to actually press… (see the snippet after these notes) ↩︎
  3. A game that might be won, while I’m not sure whether the one I’ve chosen has a win condition. ↩︎
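To make footnote 2 concrete – assuming an Xorg session with libinput, tap-to-click is a few lines in /etc/X11/xorg.conf.d/30-touchpad.conf:

    Section "InputClass"
        Identifier "touchpad"
        Driver "libinput"
        MatchIsTouchpad "on"
        Option "Tapping" "on"
    EndSection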

Categories
Computers

The curious case of binfmt for x86 emulation for ARM Docker

Seemingly identical configurations, different results. When two methods for setting up x86 emulation on ARM showed the exact same system configuration but behaved completely differently in Docker, I began questioning my system administration knowledge and my sanity – and briefly contemplated a new career as a blacksmith.

This is a debugging tale for those working with containers, and a reminder that things aren’t always what they seem in Linux – all with a big pinch of Read the Fine Manual, Always!
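The excerpt doesn’t name the two methods, but for context, two widely used ways to register a qemu-x86_64 handler on an ARM host look like this (a sketch, not necessarily the exact pair from the post):

    # one common method: multiarch’s qemu-user-static image
    docker run --privileged --rm multiarch/qemu-user-static --reset -p yes
    # another: tonistiigi/binfmt, the image docker buildx uses
    docker run --privileged --rm tonistiigi/binfmt --install amd64
    # either way, inspect what the kernel actually registered
    cat /proc/sys/fs/binfmt_misc/qemu-x86_64
    # and test with an x86_64 image
    docker run --rm --platform linux/amd64 alpine uname -m   # expect: x86_64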

Categories
Computers Machine Learning Thinking

Software Engineering when AI seems Everywhere

It’s pretty much impossible to miss the big push to use AI/LLM (Large Language Model) coding assistants for software engineers. Individual engineers and companies small and large seem to be going “all in” on this1. I’m generally wary of things that are this popular, as those often turn out more cargo cult than genuinely positive. So what’s a prudent thing to do as a software engineer? I believe the way ahead is a boring piece of advice that applies almost everywhere: instead of going easy, do more of the difficult stuff.

I genuinely think that putting the AI/LLM genie back into the bottle is unlikely (the same way some people want the Internet, or smartphones, or cryptocurrencies put back into the bottle, which is also not really gonna happen). That doesn’t mean that uncritical acceptance of the coding assistant tools should be the norm, au contraire: just like with any tool, one needs to discover when they are fit for the job, and when they are not. I have used GitHub Copilot for a while, I’m now digging into Cursor as it starts to conquer the workplace, and I use ChatGPT & Claude for individual coding questions. I don’t think it’s controversial to say that all these tools have their “strengths and weaknesses”, and that currently the more complex, more “production” the problem is – the further away from a proof-of-concept – the less likely these tools are of any help. They do help, they can be a large force multiplier, but a multiplier only multiplies what one brings in (knowledge, available time, requirements for the result…).

Categories
Computers Machine Learning Programming

Refreshing Airplane Tracking Software With and Without AI

A bit like last time, this post is about a bit of programmer hubris, a bit of AI, a bit of failure… Though this time I also took away more lessons about software engineering, with or without fancy tools. This is about rabbit-holing myself into an old software project that I had very little know-how to go on…

The story starts with me rediscovering a DVB-T receiver USB stick that I’ve had for probably close to a decade. It’s been “barnacled” by time spent in the Taiwanese climate, so I wasn’t sure if it still worked, but it’s such a versatile tool that it was worth trying to revive it.

When these receivers function, they can receive digital TV (that’s the DVB-T), but also FM radio and DAB, and they can act as a software defined radio (SDR). This last bit makes them able to receive all kinds of transmissions that are immediately quite high on the fun level, in particular airplane (ADS-B transmissions) and ship (AIS) tracking. Naturally, there are websites for both if you just want to see the data (Flightradar24 and MarineTraffic, respectively, are popular aggregators, but there are tons), but doing your own data collection opens doors to all kinds of other use cases.
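If you want to try this at home, the usual open-source toolchain is the rtl-sdr command-line utilities plus a decoder such as dump1090 – a quick sketch (package names and the FM frequency are placeholders):

    # probe the stick and check for dropped samples
    rtl_test
    # FM radio as a quick sanity check (tune to a local station)
    rtl_fm -f 99.7M -M wbfm -s 200000 -r 48000 - | aplay -r 48000 -f S16_LE
    # decode ADS-B transmissions from nearby airplanes
    dump1090 --interactive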

So on I go, trying to find what software tools people use with these receivers these days. Mine is a pretty simple one (find out everything about it by following the “RTL-SDR” keyword wherever you like to do that :) and I remembered there being many tools. However, time has passed, I’ve forgotten most of what I knew, and new projects have come and gone.

ADSBox

While I was searching, I found the adsbox project, which was interesting on two counts: it kinda worked straight out of the box for me, yet it was also last updated some 9 years ago – an old code base that tickles my “let’s maintain all the things!” drive…

The GitHub repo information for adsbox: the last commits were some 9 years ago, and there are very few of them.

The tool is written mostly in C, and it also hosts its own server for a web interface, listing flights and (back in the day) supporting things like Google Maps and Google Earth.

The adsbox plane listing interface, showing a bunch of airplane information.

Both the Google Maps and Earth parts seem completely defunct. Maps has changed a lot since then, as I learned having to update my Taiwan WWII Map Overlays project over time too (the requirement of using API keys to even load the map, changes to the JavaScript API…). Earth I haven’t tried, but I’m thinking that went the way of the dodo on the desktop?

Categories
Computers Machine Learning Programming

Adventures into Code Age with an LLM

It’s a relaxed Saturday afternoon, and I just remembered some nerdy plots I’ve seen online for various projects, depicting “code age” over time: how the repository changes over the months and years, how much code survives from the beginning till now, and so on… Something like this one made by the author of curl:

Curl’s code age distribution

It looks interesting and informative. And even though I don’t have codebases that have been around this long, there are plenty of fast-moving codebases around me, so month-level (or in some cases week-level) cohorts could be interesting.

One way to take this challenge on is to actually sit down and write the code. Another is to take a Large Language Model, say Claude, and try to get that to make it. Of course the nature of the challenge is then different. For this case, let’s put myself in the shoes of someone who says:

I am more interested in the results than the process, and want to get to the results quicker.

Let’s see how far we can get with this attitude, and where it breaks down (probably no spoiler: it breaks down very quickly).

Note on the selection of the model: I’ve chosen Claude just because I generally have good experience with it these days, and it can share generated artefacts (like the relevant Python code), which is nice. And it’s a short afternoon. :) Otherwise anything else could work as well, though surely with varying results.

Version 1

Let’s kick it off with a quick prompt.

Prompt: How would you generate a chart from a git repository to show the age of the code? That is when the code was written and how much of it survives over time?

Claude quickly picked it up and made me a Python script, which is nice (that being my day-to-day programming language). I guess that’s generally a good assumption these days if one does data analytics anyways (asking for another language is left for another experiment).
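For reference, the core idea behind such a script is compact enough to sketch by hand: git blame --line-porcelain emits metadata for every line that survives at HEAD, so bucketing lines by their commit timestamp gives the age distribution. A minimal sketch of the idea (not Claude’s output – the repo and file paths are placeholders):

    #!/usr/bin/env python3
    """Count surviving lines per original commit year using git blame."""
    from collections import Counter
    from datetime import datetime
    import subprocess

    def blame_year_counts(repo: str, path: str) -> Counter:
        # --line-porcelain repeats full metadata for every surviving line,
        # so each "committer-time" header corresponds to exactly one line
        out = subprocess.run(
            ["git", "-C", repo, "blame", "--line-porcelain", "HEAD", "--", path],
            capture_output=True, text=True, check=True,
        ).stdout
        years = Counter()
        for line in out.splitlines():
            if line.startswith("committer-time "):
                timestamp = int(line.split(maxsplit=1)[1])
                years[datetime.fromtimestamp(timestamp).year] += 1
        return years

    if __name__ == "__main__":
        counts = blame_year_counts(".", "README.md")
        for year in sorted(counts):
            print(year, counts[year])

Run it per file (or walk git ls-files for the whole repository) and the counts are ready to plot as stacked cohorts.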