
Software Engineering When AI Seems Everywhere

It’s pretty much impossible to miss the big push for software engineers to use AI/LLM (Large Language Model) coding assistants. Individual engineers and companies small and large seem to be going “all in” on this1. I’m generally wary of things that are this popular, as they often turn out to be more cargo cult than genuinely positive. So what’s a prudent thing to do as a software engineer? I believe the way ahead is a boring piece of advice that applies almost everywhere: instead of going easy, do more of the difficult stuff.

I genuinely think that putting the AI/LLM genie back into the bottle is unlikely (the same way some people want the Internet, or smartphones, or cryptocurrencies put back into the bottle, which is also not really gonna happen). That doesn’t mean that uncritical acceptance of the coding assistant tools should be the norm, au contraire: just like with any tool, one needs to discover when they are fit for the job and when they are not. I have used GitHub Copilot for a while, am now digging into Cursor as it starts to conquer the workplace, and use ChatGPT & Claude for individual coding questions. I don’t think it’s controversial to say that all these tools have their “strengths and weaknesses”, and that currently the more complex and more “production” the problem is, the further it is from a proof-of-concept, the less likely these tools are to be of any help. They are a help, and they can be a large force multiplier, but they are the biggest multiplier when one goes in with the least amount of input (knowledge, available time, requirements for the result…).

In day-to-day use I’ve usually ended up in one of these cases:

  • the suggested results were wrong: clearly when I was lucky, and subtly when I wasn’t;
  • the results were correct, but I had to spend so much time tuning the prompts, questions, and guidance that the invested effort was similar to doing the work myself;
  • the results were correct and weren’t too much work, but they addressed simple stuff (such as writing structured documentation), in which case the tools indeed got me into a nice flow state.

The proportion of the above cases was about 70/20/10%, respectively2.

This experience made me pause and think: what influences this status quo, and which of those influences can I control? For example, I cannot (really) control how capable the various models are – they will surely improve over time. On the other hand, is there something I could do differently to get better outcomes?

Stepping all the way back, the universal rule of thumb of garbage in, garbage out hasn’t failed me yet, so working on better quality input seems like a good place to start. And better input only happens when I’m better at the things I’m doing. Wherever I look, that getting better comes from focused, significant, and directed effort.3

What can I do to be more effortful – in the right way? What are those software engineering activities that require effort, but not toil? Looking back at my past experience, I’ve found three main areas so far:

Work on debugging skills. How do you read code, how do you approach a brand new software system and build up a model of how it works? Programming is theory building, and so the better (more correctly, faster, with more insights) you can build your theories, the better the ultimate outcomes will be.

Work on testing skills: once you know how things work, you can better anticipate how they can go wrong, and better yet, you can prevent things from going wrong. However, not all tests are created equal: coding assistants these days can create tests very easily (tests often need a lot of boilerplate, and boilerplate is easy to generate – exactly the coding assistants’ bread and butter). They seem less good at figuring out which tests really move the needle and increase our certainty that things work well – and those are the tests that are really needed. The sketch below illustrates the difference.
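To make that concrete, here is a minimal sketch in Python, built around a hypothetical parse_duration helper (none of this comes from a real codebase): the first test is the kind of happy-path boilerplate assistants happily produce, while the parametrised one probes the boundaries where the function can actually break.

    # Hypothetical function under test: parse_duration("1h30m") -> seconds.
    def parse_duration(text: str) -> int:
        if not text:
            raise ValueError("empty duration")
        units = {"h": 3600, "m": 60, "s": 1}
        total, number = 0, ""
        for ch in text:
            if ch.isdigit():
                number += ch
            elif ch in units and number:
                total += int(number) * units[ch]
                number = ""
            else:
                raise ValueError(f"invalid duration: {text!r}")
        if number:
            raise ValueError(f"missing unit in: {text!r}")
        return total

    # Boilerplate test: restates the happy path, adds little certainty.
    def test_parse_duration_happy_path():
        assert parse_duration("1h30m") == 5400

    # Needle-moving tests: cover the malformed inputs that real callers will send.
    import pytest

    @pytest.mark.parametrize("bad", ["", "90", "h1", "1x", "1h 30m"])
    def test_parse_duration_rejects_malformed_input(bad):
        with pytest.raises(ValueError):
            parse_duration(bad)

The first test only re-proves what the implementation already says; the second encodes a decision about what “invalid” means, and that is the part a tool cannot decide for you.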

Work on software architecture skills: making something complex and complicated is less work than making something simple and elegant. Simplicity and elegance are not just for their own sake: simple is easier to reason about, easier to maintain and evolve, and more likely to be correct. Can coding assistants do this? So far they tend towards verbosity, which breeds complexity. They are very local and thus much less holistic. They learn from the entire Internet, but that also means they have to have extremely good attention to find what’s relevant… The small example below shows what I mean by verbosity breeding complexity.
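As an illustration (my own contrived example, not verbatim assistant output): both functions compute the average value of paid orders, but the verbose version drags along mutable state, a redundant flag, and an extra branch that the simple one never needs.

    # Verbose: accumulators, a flag, and a hand-rolled special case.
    def average_paid_order_value_verbose(orders):
        total = 0.0
        count = 0
        found_any = False
        for order in orders:
            if order.get("status") == "paid":
                found_any = True
                total = total + order.get("amount", 0.0)
                count = count + 1
        if found_any is False:
            return 0.0
        return total / count

    # Simple: one pass over the data, one obvious edge case.
    def average_paid_order_value(orders):
        amounts = [o["amount"] for o in orders if o["status"] == "paid"]
        return sum(amounts) / len(amounts) if amounts else 0.0

Both are “correct”, but only one of them is easy to read, review, and change six months later.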

These three are programming language, business context, and job role agnostic, and I find it difficult to imagine an engineering role that doesn’t need them. I guess as long as humans do software engineering, these will serve people well.

At the end of the day, instead of speeding up with coding assistants, I’ll slow down. Instead of volume, I’ll try focusing on quality. The assistants still have their place there: rubber ducking, serving as an advanced (even if flawed) search engine, and coding the parts that have less value for a given set of goals. There, machines can do the work so people have time to think4.

  1. It might be my own bubble, being a machine learning engineer by day. ↩︎
  2. With better models I’d guess the cases will be the same, but the proportions will shift. ↩︎
  3. Be it physical, such as exercise, or mental, as advocated in How to Read a Book. ↩︎
  4. Rather than the other way around. ↩︎
