Are AI Assistants Making Us Worse Programmers?
Perspectives on the rise of AI in programming and its effects on productivity.
They are everywhere now. From Copilot to Cursor, Zed to Aider, AI assistants have permeated IDEs and programming tools. Backed by an ever-changing ecosystem of models, many of them tuned to excel at programming tasks, it is safe to say that avoiding AI during a typical day as a programmer now takes a conscious decision.
I can tell from experience: I have tested almost all of them and integrated them into my workflow in one way or another. I also have over 10 years of programming experience, so I know well what it is like to work without such assistants (the effect they may have on a younger cohort intrigues me, but I won't explore it in this essay.)
I decided to reflect on this ubiquitous adoption and try to answer the question: is using AI assistants making me a worse programmer?
Of course, much of what I have to say is entirely personal, but I hope to draw on fundamental principles that extend beyond programming, so that it resonates with your own experience.
Does the world now require fewer skills than in the past?
I was born in Brazil in the 90s, so you can only imagine how fresh the figure of Ayrton Senna was in every adult's mind. One of the most talented racing drivers to date, obsessed with performance, Senna tragically but somewhat poetically died in a crash at the 1994 San Marino Grand Prix.
Senna was quickly immortalized as one of the greatest of all time, and he remains a recurring figure in generational debates about who was better – typically with the addendum "at his peak", which makes the exercise even more impossible to resolve.
For those who pick the old-timers, a typical argument goes: "nowadays drivers have it all automated, back in the day they had to know it all". Heck, I heard my father say countless times that they used to control the car by hand. I don't pretend to know Formula One well enough to take sides here, but I think this argument has a point worth discussing.
In programming, high-level languages often abstract complexity away from you – something developers working with JavaScript, Python, Java, and the like know well. It seems clear that AI assistants introduce a new human-machine interface: natural language. Is that bad?
Distinguishing core skills from tool-specific knowledge
There is a heated debate about whether going to college to study Software Engineering or Computer Science is relevant in a world where you have access to countless courses either for free or for very cheap (relative to formal education).
I hold a degree in Computer Engineering and don't have a firm opinion. Why? Because in college I learned to understand computers. I studied the fundamentals, and they stuck. In contrast, 99% of the specific tools I had to learn are now either outdated or simply useless.
However, those years honed my core skills: knowledge that lives outside the specific tools at your disposal, the principles that govern the activity of being a software engineer. And that is hard to get from crash courses and bootcamps.
Note that I'm not saying bootcamps and crash courses are useless, just that by their very nature you won't have the time required to develop the core skills, trading that time for mastering tool usage at a professional level (it's no wonder a lot of AI-takeover doomers come from this space.)
Just as Senna knew things about racing cars that perhaps the newer generation doesn't have to, we should ask: do I really need the specifics we used to know in the past? Especially when technologies are so transient?
I will give another example. I started working with Gatsby around 2018. Back then, it looked like a nice framework for building static sites, with some appeal for dynamic apps as well. I developed countless apps using Gatsby, entered public competitions, and can safely say that I had, to some degree, mastered it.
Then... it was gone. Next.js and other frameworks like Astro took over, and Gatsby suddenly became unmaintained. All the specifics I had to learn to operate the framework at a high level are now useless for my current projects, and the herculean effort of migrating all my apps was definitely painful.
What did I take from this episode, though? I learned about Server-Side Rendering and Static Site Generation ages ago – so when React Server Components became the default in newer frameworks, I already had a mental model for them. Moving to the Next.js App Router was also easy, as I could relate to the reasoning behind it. But my knowledge of Gatsby plugins, gatsby-node, gatsby-config, and even GraphQL is now buried.
That's because knowing about Server-Side Rendering is a core skill: it is fundamentally tied to web development and how the internet operates. Should I build this page statically or dynamically? That is a judgement call that I – a human – need to make based on my knowledge. Once the decision is made, AI can safely aid me with the implementation details – for the most part, they won't matter.
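To make that concrete, here is a minimal sketch of that judgement call, assuming the Next.js App Router and a hypothetical posts endpoint. The human decision lives in the comments; the rest is delegable detail.

```tsx
// Hypothetical app/blog/page.tsx. My call: this content changes rarely,
// so render it statically and revalidate once an hour.
export const revalidate = 3600;

type Post = { id: string; title: string };

export default async function BlogIndexPage() {
  // Fetched at build/revalidation time for a statically rendered route.
  const posts: Post[] = await fetch("https://example.com/api/posts").then(
    (res) => res.json()
  );

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}

// Had the data been per-user, the call would go the other way:
// export const dynamic = "force-dynamic";
```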
The case for AI assistance
Until now I have taken a defensive stance, arguing why leveraging AI assistants need not stunt your skills. But I also want to offer a more positive view.
One thing I have always liked doing is translating a business requirement into actionable pieces that a developer can pick up. That work forces me to exercise my abstraction capabilities whenever I am presented with a problem (that's why I like the term software engineer: someone who uses software to model and solve real-world problems.)
So it comes naturally to me to break a problem down into smaller steps, tackle them in the right order, and assess the result at the end. It turns out this is a pretty decent framework to employ with AI assistants.
Anyone who has tried an AI assistant will recognize that its capability quickly deteriorates as you add more context, even if the context window is large. Hence, being able to feed it precisely what it needs to figure out the implementation details is crucial (that's why I love Zed's approach to this problem.) Well, this is exactly what humans are good at. Sure, models like OpenAI's o1 are impressive at reasoning, but they are nowhere close to being self-sufficient across the end-to-end process: reasoning, implementing, assessing.
Moreover, every programmer knows that context switching is a productivity killer. No matter how small the interruption, it will affect your ability to concentrate. In Deep Work, Cal Newport cites research proposing that we have a limited amount of willpower available each day. Another concept the author explores is attention residue: when we change activities too often, part of the previous task's context lingers in our mind, effectively reducing our cognitive capacity for the next one.
AI assistants are context-switch killers, especially when integrated into the IDE. Don't recall a specific bit of TypeScript syntax? Highlight the variable and ask the model to type it for you. Want to break a large component into smaller ones? Ask for a file refactor and save some cognitive energy. Do you really know by heart how to mock an API call in a unit test? I've been doing this for ages and never remember the syntax. The AI assistant thrives at such tasks and helps you stay in the flow.
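As an illustration, this is the flavor of test boilerplate I mean – a sketch using Jest, where the ./api module and its fetchJson helper are hypothetical:

```ts
// Mocking an API call in a unit test: syntax I never retain, and the
// kind of thing an assistant produces instantly.
import { getUser } from "./getUser";
import * as api from "./api";

jest.mock("./api");
const mockedApi = jest.mocked(api);

test("returns the fetched user", async () => {
  // Stub the network boundary instead of hitting a real endpoint.
  mockedApi.fetchJson.mockResolvedValue({ id: "1", name: "Ada" });

  await expect(getUser("1")).resolves.toEqual({ id: "1", name: "Ada" });
  expect(mockedApi.fetchJson).toHaveBeenCalledWith("/users/1");
});
```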
Take UI animations, for instance. Is it really important to master the API of framer-motion or react-spring? As long as you can consciously employ animations in the right measure, without bloating the interface, you are good to go. And it is up to you – not the machine – to find that balance.
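A minimal framer-motion sketch of that division of labor: the restrained, subtle entrance is my judgement call; the prop names are the detail the assistant can recall for me.

```tsx
import { motion } from "framer-motion";
import type { ReactNode } from "react";

// A subtle fade-and-slide entrance: enough to guide the eye,
// not enough to bloat the interface.
export function FadeIn({ children }: { children: ReactNode }) {
  return (
    <motion.div
      initial={{ opacity: 0, y: 8 }}
      animate={{ opacity: 1, y: 0 }}
      transition={{ duration: 0.2 }}
    >
      {children}
    </motion.div>
  );
}
```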
When migrating away from Gatsby, I decided to move all files from JSX to TSX. The amount of time I saved by prompting the assistant to rewrite the components with types added was enormous. I did, however, trade off honing my TypeScript skills for the sake of getting it done quicker. Does that matter? I don't think so. In this case, TypeScript is a means to an end. Who knows if we'll have StaticScript or UberScript before long? I have seen enough.
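The flavor of mechanical rewrite I delegated, shown on a hypothetical component:

```tsx
// The JSX original:
//
//   const PostTitle = ({ title, onClick }) => (
//     <h2 onClick={onClick}>{title}</h2>
//   );
//
// The TSX version the assistant produces – same behavior, types added:
import type { MouseEventHandler } from "react";

type PostTitleProps = {
  title: string;
  onClick: MouseEventHandler<HTMLHeadingElement>;
};

const PostTitle = ({ title, onClick }: PostTitleProps) => (
  <h2 onClick={onClick}>{title}</h2>
);

export default PostTitle;
```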
When it can go wrong
Surely by now you believe I'm the captain of the hype train, over-excited about my NVDA shares (disclaimer: I don't hold any) and ready to defend AI usage in all cases. Not quite: there are real problems that can arise from abusing AI in one's workflow.
The biggest one by far is the over-reliance trap. It happens when you blindly delegate work to the AI and it fails to solve the problem (ironically, this is usually easy to spot right at the beginning.) Then you prompt it again. And again. The context window starts to blow up, you lose your manners... the apologies are enervating, and you curse at the computer: "are you the stupid machine that is supposed to take over the world?"
When this happens you lose far more than your time (and temper). You risk depleting your willpower stock (in Newport's jargon), and you now have to interact with code you did not write. Work that could have started as a clean slate has become a pair-programming session with a junior developer who wrote bad code.
I've been caught in that trap many times, and it usually ends in frustration when I realize I spent more time than I would have had I not used AI at all.
How to avoid it? Well, by being more human. By making better judgement calls about when it is the right time to delegate to the AI; by making sure you have broken the requirements down into smaller, manageable tasks that an assistant can solve on its own. This is the prompt engineering I believe in. If AI assistants are a new interface, you will thrive if you learn how to interact with it.
Knowing when the problem is solved
I had a math teacher who, when explaining a tricky problem, would call our attention to the moment the hardest part had been solved. The problem still had to be completed, but in his words, the only thing left was bureaucracy.
This is also the case with software engineering. Most of my tasks aren't particularly challenging. They just require abstraction, patience, order, and knowledge of the tools needed to complete them. It's important to know when a problem is solved, delegating the bureaucratic work to the AI. This frees us to deploy our best weapon – the human intellect – on the hardest piece of the puzzle.
For example, suppose you are tasked with adding a dropdown that lets the user select how a table is sorted. There is nothing challenging here: the sorting algorithm is basic, the UI is standard, and if you are lucky there is even a common component to reuse... All you need to do is stitch things together and envision how they will interact, even though the actual implementation will vary by framework. In other words, it is a solved problem – see the sketch below. In such cases – which are far more common than many like to admit – the assistance of an AI model is not only handy, but likely the best course of action.
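A sketch of that solved problem in React, with a hypothetical Row shape; the details would vary with your framework and component library, which is exactly why they are worth delegating:

```tsx
import { useMemo, useState } from "react";

type Row = { name: string; createdAt: number };
type SortKey = "name" | "createdAt";

export function SortableTable({ rows }: { rows: Row[] }) {
  const [sortKey, setSortKey] = useState<SortKey>("name");

  // Basic sorting – the "bureaucracy" part of the task.
  const sorted = useMemo(
    () =>
      [...rows].sort((a, b) =>
        sortKey === "name"
          ? a.name.localeCompare(b.name)
          : a.createdAt - b.createdAt
      ),
    [rows, sortKey]
  );

  return (
    <>
      <select
        value={sortKey}
        onChange={(e) => setSortKey(e.target.value as SortKey)}
      >
        <option value="name">Name</option>
        <option value="createdAt">Created</option>
      </select>
      <table>
        <tbody>
          {sorted.map((row) => (
            <tr key={row.name}>
              <td>{row.name}</td>
              <td>{new Date(row.createdAt).toLocaleDateString()}</td>
            </tr>
          ))}
        </tbody>
      </table>
    </>
  );
}
```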
Conclusion
There is a case to be made that using AI assistants can let your programming skills rust. It is true that, used the wrong way, they can make you spend more time than you should and take a cut out of your limited stock of willpower. And frankly, they are nowhere near replacing humans anytime soon.
However, they are amazing tools – and like all tools, a means to an end. The ultimate goal of a developer is to translate instructions into a language the computer can understand, producing actions that have an impact on the real world. That ability requires constant fostering, and it is what gives you an edge over the machine.
Assistants are incredibly useful for implementation details, the kind that don't touch your core skills but are merely requirements of transient technologies. Letting one type that HTML event variable for you, or write from scratch that pure function whose inputs and outputs are known, won't make you a bad developer – on the contrary, it will preserve your mind for the hardest problems, allowing you to focus on the right things, design comprehensive solutions, and ultimately become a more productive developer.