Categories
Computers Machine Learning Thinking

Software Engineering when AI seems Everywhere

It’s pretty much impossible to miss the big push to use AI/LLM (Large Language Model) coding assistants for software engineering. Individual engineers and companies small and large seem to be going “all in” on this1. I’m generally wary of things that are this popular, as they often turn out to be more cargo cult than genuinely positive. So what’s a prudent thing to do as a software engineer? I believe the way ahead is a boring piece of advice that applies almost everywhere: instead of going easy, do more of the difficult stuff.

I genuinely think that putting the AI/LLM genie back into the bottle is unlikely (the same way some people want the Internet, or smartphones, or cryptocurrencies put back into the bottle, which is also not really gonna happen). That doesn’t mean that uncritical acceptance of the coding assistant tools should be the norm, au contraire: just like with any tool, one needs to discover when they are fit for the job, and when they are not. I have used GitHub Copilot for a while, am now digging into Cursor as it starts to conquer the workplace, and use ChatGPT & Claude for individual coding questions. I don’t think it’s controversial to say that all these tools have their “strengths and weaknesses”, and that currently the more complex and more “production” the problem is, the further it is from a proof-of-concept, the less likely these tools are to be of any help. They do help, they can be a large force multiplier, but they are the biggest multiplier when one goes in with the least amount of input (knowledge, available time, requirements for the result…).

Categories
Computers Machine Learning Programming

Adventures into Code Age with an LLM

It’s a relaxed Saturday afternoon, and I just remembered some nerdy plots I’ve seen online for various projects, depicting “code age” over time: how does your repository change over the months and years, how much code still survives from the beginning till now, etc… Something like this made by the author of curl:

Curl’s code age distribution

It looks interesting and informative. And even though I don’t have codebases that have been around this long, there are plenty of codebases around me that are fast moving, so something like month-level (or in some cases week-level) cohorts could be interesting.

One way to take this challenge on is to actually sit down and write the code. Another is to take a Large Language Model, say Claude, and try to get it to make the chart. Of course the challenge is different in nature. For this case, let’s put myself in the shoes of someone who says

I am more interested in the results than the process, and want to get to the results quicker.

Let’s see how far we can get with this attitude, and where it breaks down (probably no spoiler: it breaks down very quickly).

Note on the selection of the model: I’ve chosen Claude just because generally I have good experience with it these days, and it can share generated artefacts (like the relevant Python code) which is nice. And it’s a short afternoon. :) Otherwise anything else could work as well, though surely with varying results.

Version 1

Let’s kick it off with a quick prompt.

Prompt: How would you generate a chart from a git repository to show the age of the code? That is when the code was written and how much of it survives over time?

Claude quickly picked it up and made me a Python script, which is nice (that being my day-to-day programming language). I guess that’s generally a good assumption these days if one does data analytics anyways (asking for another language is left for another experiment).
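I’m not reproducing Claude’s script verbatim here, but as a rough illustration of the general approach such scripts tend to take: `git blame --line-porcelain` reports, for every line that survives in the current tree, the timestamp of the commit that last touched it, and those timestamps can be bucketed into monthly cohorts. The `repo_cohorts` helper below is my own hypothetical glue code, not the generated output:

```python
import collections
import datetime
import subprocess

def blame_cohorts(porcelain_text):
    """Count surviving lines per month cohort from the output of
    `git blame --line-porcelain`, which emits one `author-time <epoch>`
    header per blamed line."""
    cohorts = collections.Counter()
    for line in porcelain_text.splitlines():
        if line.startswith("author-time "):
            ts = int(line.split()[1])
            month = datetime.datetime.fromtimestamp(
                ts, tz=datetime.timezone.utc).strftime("%Y-%m")
            cohorts[month] += 1
    return cohorts

def repo_cohorts(path, rev="HEAD"):
    """Hypothetical driver: blame every tracked file at `rev` and
    merge the per-file cohort counts."""
    files = subprocess.run(
        ["git", "-C", path, "ls-tree", "-r", "--name-only", rev],
        capture_output=True, text=True, check=True).stdout.splitlines()
    total = collections.Counter()
    for f in files:
        out = subprocess.run(
            ["git", "-C", path, "blame", "--line-porcelain", rev, "--", f],
            capture_output=True, text=True, check=True).stdout
        total += blame_cohorts(out)
    return total
```

Running `repo_cohorts(".")` on a checkout would give something like `{"2023-11": 1520, "2024-02": 310, …}`, i.e. how many of today’s lines were last touched in each month, which is exactly the raw material for a chart like curl’s.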