Alex Dixon

Gamedev, graphics, open source. Shuffling bytes and swizzling vectors since 2004...

AI coding tools are powerful but we mustn’t let our own skills atrophy

20 March 2026

It started with the “don’t get left behind” brigade; somehow they managed to convince me that I was missing out on developing critical new skills. People who hadn’t put in the hours to learn to code in the first place were suddenly at the top of the field, leaving behind the rest of us who have put in decades and tens of thousands of hours of effort to learn our craft. They want their PRs merged upstream to get credit on GitHub for code they didn’t write or even understand, flooding pull request queues to breaking point. I tried to ignore it, but the chatter was incessant and I had to take a look for myself.

I didn’t want the AI coding tools to be good. After all, I have dedicated a large part of my life to coding. It is much more than a job; it’s a hobby, a love, part of who I am. But coding is dead, apparently; it’s going to be over in six months. I’ve been hearing this daily for the last three years and it is exhausting and inescapable; everywhere I look I see it. “The models will get better,” they say. “This is the worst it will ever be”… and so it goes, on and on and on. Waking up every day to articles about how your job is going to be replaced is not good for your mental health. Will we ever hear the end of the constant hype train?

I don’t want the AI hyperscaler tech bros to succeed. With the enshittification of technology steadily underway, I don’t trust a single one of them. Now that they have plagiarised the entire sum of human knowledge, they want to kick down the ladder, sue the competition, and close it all off: we were here first. These advancements should be exciting, but knowing that capitalism will certainly try its best to squeeze every last penny out of them makes it hard for me to appreciate the moment we are living in.

That being said, these tools are impressive. Sometimes they amaze me, sometimes they frustrate me, and all the while I dislike the landscape surrounding them. The dichotomy is hard to explain. I didn’t want to get hooked on using them, but after just one session in which a friend first showed me Claude Code, I was dreaming about it; thoughts of using AI infiltrated my mind. I was somehow already addicted.

I work with ML researchers, and have dipped my toe in there a little myself. A while ago a colleague explained LLMs to me in a way that stuck: they’re very good at interpolation. Words are represented as vectors in a high-dimensional space, and the model learns patterns between them. With that insight in mind, I started with tasks I thought would be simple and likely to succeed: adding new record store scrapers to my music app, filling out some missing parts of a graphics engine backend, and clearing some long-overdue technical debt. Even though I expected these tasks to be trivial, I was still impressed with how Claude worked, how it asked questions to clarify details, and how it seemed to understand exactly what I wanted.
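That interpolation idea can be sketched in a few lines. This is a toy illustration only: the 3-dimensional “embeddings” below are made-up values (real models use hundreds or thousands of dimensions, learned from data), but the geometry is the same — a point linearly interpolated between two word vectors stays close to both.

```rust
// Linear interpolation between two embedding vectors: a + (b - a) * t.
fn lerp(a: &[f32], b: &[f32], t: f32) -> Vec<f32> {
    a.iter().zip(b).map(|(x, y)| x + (y - x) * t).collect()
}

// Cosine similarity: 1.0 means the vectors point the same way.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

fn main() {
    // Made-up 3-d "embeddings" purely for illustration.
    let king = [0.9, 0.8, 0.1];
    let queen = [0.85, 0.2, 0.9];

    // A point halfway between the two vectors remains similar to both,
    // which is roughly what "good at interpolation" means geometrically.
    let mid = lerp(&king, &queen, 0.5);
    println!("sim(mid, king)  = {:.3}", cosine(&mid, &king));
    println!("sim(mid, queen) = {:.3}", cosine(&mid, &queen));
}
```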

I was sucked in even further. I paid for a subscription despite saying I hated big tech and would not pay them a penny… What a hypocrite.

It’s hard to measure productivity, but I can say that AI has given me a motivational boost to get back into projects. Sometimes just knowing tech debt exists in a project is enough to slow me down; asking AI to clean that up while I do the interesting stuff feels like a weight lifted off my shoulders. Coming back to a codebase after a while, things can feel unfamiliar, and AI is great at easing you back in, explaining the current state of something that was a work in progress. I was genuinely amazed at the issues found when I asked Claude to review some of my codebases. It found subtle bugs just from reading the code; I fixed them and together we added tests to catch those cases. The review was so useful for my Rust maths library that I decided to publish it as version 1.0.0. These aspects of AI coding augment my skills, make light work of the boring stuff, and help me make sure every detail is covered meticulously.

The ease of development is powerful, but it can also be a double-edged sword. Since it is so easy to ask for a fix or a new feature, you can quickly end up with a lot of features and a lot of code. More code is not better; it is more to maintain, and it increases the complexity of any future work. Being able to pile on features without thought dilutes our ability to discern the most impactful, meaningful changes. Good software engineers tend to optimize naturally in this regard, because it means less work. Less work is good, and laziness can make for smarter decisions.

The veneer can easily peel away when working on complex and abstract problems. I started to get into a rut with difficult tasks. Claude was struggling; I didn’t like the code it generated, and I was spending a lot of time not just reading the code but also becoming detached from what I was trying to achieve. I started to realise that using an LLM to code completely changes our relationship to the code. This changing relationship became prominent in a series I have been livestreaming on YouTube, entitled Sloppy Gamedev, in which I try to make a game using AI. I originally attempted to purely vibe code it, and during the first session I realised Claude wasn’t going to be able to make a game on vibes alone. The first attempt was pretty terrible: lots of hardcoded values, so it was neither extensible nor reusable, and lots of bugs.

So you could say this is a skill issue: I need better prompts or better context, and if I had all of that information ahead of time, maybe a purely vibe-coded game would be possible. Claude needs a lot of detail, and I also need to figure out and understand what those details should be. With things like gamedev, a lot of the problems require iteration, and this is where I think things start to break down. What I need is a more collaborative relationship between my code, the LLM’s code, and our shared understanding of the architecture. I found this difficult at first because I did not like the LLM-generated code; I did not want to edit it myself and felt alienated.

Sometimes the code it generates is just very anti-human. One example: I noticed inline dot products and magnitude calculations repeated verbatim at every call site, expanding the scalar maths instead of using the maths library functions. Things that have always been implicit now need to be explicit, and for me at least it’s currently very difficult to forecast everything ahead of time. We can tweak things in plan mode, but after a long time spent refining a single plan I am itching to accept it, to see the parts that work, so that I can iterate again. This is how I naturally work: write a small burst of code, test it, run it, tweak it, and continue. Claude generates a lot of code quickly, and accepting code that only partially works just to see it in action leads to immediate tech debt.
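The anti-human pattern above is easy to show. This sketch uses a hypothetical `Vec3` type as a stand-in for a real maths library (names are illustrative, not from my library): the inline expansion computes the same value as the helper, but it hides intent and has to be read symbol by symbol at every call site.

```rust
// Hypothetical minimal 3D vector, standing in for a maths library type.
#[derive(Clone, Copy)]
struct Vec3 { x: f32, y: f32, z: f32 }

fn dot(a: Vec3, b: Vec3) -> f32 {
    a.x * b.x + a.y * b.y + a.z * b.z
}

fn mag(v: Vec3) -> f32 {
    dot(v, v).sqrt()
}

fn main() {
    let a = Vec3 { x: 1.0, y: 2.0, z: 2.0 };
    let b = Vec3 { x: 3.0, y: 0.0, z: 4.0 };

    // The "anti-human" version: scalar maths expanded inline, repeated
    // verbatim wherever a magnitude is needed.
    let inline_mag = (a.x * a.x + a.y * a.y + a.z * a.z).sqrt();

    // The readable version: an intent-revealing library call.
    let lib_mag = mag(a);

    assert!((inline_mag - lib_mag).abs() < 1e-6);
    println!("dot(a, b) = {}, |a| = {}", dot(a, b), lib_mag);
}
```

The two compute identical results; the difference is entirely in how much a human has to re-derive while reading.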

Working on a crowd simulation, agents were getting stuck on corners. I asked Claude multiple times to fix it; it piled on more code, each time claiming success. I found myself trapped in the prompt-loop death spiral. I had to force myself to sit down at the computer with no help allowed and just figure it out myself. I spent a good few hours drawing debug geometry and fiddling with the problem, and this is where the realisation really hit me about what I had been missing. The process itself is a crucial part of development; it’s not about the lines of code in the end, it’s about the intuition you build to get there. In that session I not only improved the agents getting stuck on corners, I also gained key insights into how to improve the behaviour further and how to parameterize and control those improvements. The code I wrote to do this was actually not great; it was messy and was thrown away straight afterwards, but that’s OK, and that is part of the process. Somehow I had lost this ability when prompting Claude; it had changed me: frozen, unable to understand the code, only able to prompt again and again. It took restraint not to reach out and ask an LLM to do the thing for me, but in pushing past that barrier I was rewarded.

Since then I have been better able to guide Claude with a newfound understanding of the problem. The important detail here is building intuition and knowledge, and this worries me, since I feel an element of skill atrophy when it’s so easy to just ask for help. I have already put in a lot of time learning the hard way; what chance do newcomers have when people say there is no point in learning to code anymore? How can we steer the LLMs if we don’t understand the problems we need to solve? Claude’s attempt at gamedev was quite poor on its own; with guidance and collaboration it was much better. For that reason I think learning to code, and learning to understand LLM-generated code, will always be an important skill.

If you liked this, check out my YouTube channel where I’m messing around with AI, and check out my mostly handwritten repos :)