AI use may speed code generation, but developers’ skills suffer
There’s a lot of hype about AI coding tools and the gains developers are seeing when it comes to speed and accuracy. But are developers also offloading some of their thinking to AI when they use them as copilots?
Anthropic researchers recently put this to the test, examining how quickly software developers picked up a new skill (learning a new Python library) with and without AI assistance, and, more importantly, determining whether using AI made them less likely to actually understand the code they’d just written.
What they found: AI-assisted developers were successfully performing new tasks, but, paradoxically, they weren’t learning new skills.
This isn’t particularly surprising, according to real-world software engineers. “AI coding assistants are not a shortcut to competence, but a powerful tool that requires a new level of discipline,” said Wyatt Mayham of Northwest AI Consulting.
AI users scored two letter grades lower on coding concepts
In a randomized, controlled trial, 52 “mostly junior” developers were split into two groups: one encouraged to use AI, the other denied its use. Both performed a short exercise with the relatively new asynchronous Python library Trio, which involved concepts beyond basic Python fluency. All participants were familiar with Python and with AI coding assistants, but none had used Trio before.
Researchers then quizzed them on their mastery of debugging, code reading, and code writing, as well as their grasp of core tool and library principles: the understanding needed to assess whether AI-generated code follows appropriate software design patterns.
The results: The AI-using group scored 17 percentage points lower on the quiz than a control group who coded by hand — that is, 50% compared to 67%, or the equivalent of nearly two letter grades. This was despite the quiz having covered concepts they’d used just a few minutes before.
Notably, the biggest gaps in mastery were around code debugging and comprehension of when code is incorrect and why it fails. This is troubling, because it means that humans may not possess the necessary skills to validate and debug AI-written code “if their skill formation was inhibited by using AI in the first place,” the researchers pointed out.
The experiment in depth
The 70-minute experiment was set up like a self-guided tutorial: Participants received a description of a problem, starter code, and a quick explainer of the Trio concepts required to solve it. They had 10 minutes to get familiar with the tool and 35 minutes to perform the task of coding two different features with Trio. The remaining 25 minutes was devoted to the quiz.
They were encouraged to work as quickly as possible using an online coding platform; the AI group could access a sidebar-embedded AI assistant that could touch code at any point and produce correct code if asked. The researchers took screen recordings to see how much time participants spent coding or composing queries, the types of questions they asked, and the errors they made.
Interestingly, using AI didn’t automatically guarantee a lower score; rather, it was how the developers used AI that influenced what skills and concepts they retained.
Developers in the AI group spent up to 30% of their 35-minute task time (about 11 minutes) writing up to 15 queries. Meanwhile, those in the non-AI group ran into more errors than the AI-assisted group, mostly around syntax and Trio concepts. However, the researchers posited that they “likely improved their debugging skills” by resolving errors on their own.
AI group participants were ranked based on their level and method of AI use. Those with quiz scores of less than 40% relied heavily on AI, showing “less independent thinking and more cognitive offloading.” This group was further split into:
- AI delegators: These developers “wholly relied” on AI, completing the task the fastest and encountering few or no errors;
- ‘Progressive’ AI users: They started out proactively by asking a few questions, then devolved into full reliance on AI;
- Iterative AI debuggers: They also asked more questions initially, but ultimately trusted AI to debug and verify their code, rather than clarifying their understanding of it.
The other category of users, who had quiz scores of 65% or higher, used AI for code generation as well as conceptual queries, and were further split into these groups:
- Participants who generated code, manually copied and pasted it into their workflows, then asked follow-up questions. They ultimately showed a “higher level of understanding” on the quiz.
- Participants who composed “hybrid queries” asking for both code and explanations around it. This often took more time, but improved their comprehension.
- Participants who asked conceptual questions, then relied on their understanding to complete the task. They encountered “many errors” along the way, but also independently resolved them.
“The key isn’t whether a developer uses AI, but how,” Mayham emphasized, saying these findings align with his own experience. “The developers who avoided skill degradation were those who actively engaged their minds instead of passively accepting the AI’s output.”
Interestingly, developers in the experiment were aware of their own habits. While the non-AI-using participants found the task “fun” and said they had developed an understanding of Trio, AI-using participants said they wished they had paid more attention to the details of the Trio library, either by reading the generated code or prompting for more in-depth explanations.
“Specifically, [AI-using] participants reported feeling ‘lazy’ and that ‘there are still a lot of gaps in (their) understanding,’” the researchers explained.
How developers can keep honing their skills
Many studies, including Anthropic’s own, have found that AI can speed up some tasks by as much as 80%. However, this new research suggests that sometimes speed is just speed, not quality. Junior developers who feel they have to move as quickly as possible are risking their skill development, the researchers noted.
“AI-enhanced productivity is not a shortcut to competence,” they said, and the “aggressive” incorporation of AI into the workplace can have negative impacts on workers who don’t remain cognitively engaged. Humans still need the skills to catch AI’s errors, guide output, and provide oversight, the researchers emphasized.
“Cognitive effort — and even getting painfully stuck — is important for fostering mastery,” they said.
Managers should think “intentionally” when they deploy AI tools to ensure engineers continue to learn as they work, the researchers advised. Major LLM providers offer learning environments to assist, such as Anthropic’s Claude Code Learning and Explanatory modes, or OpenAI’s ChatGPT Study Mode.
From Mayham’s perspective, developers can mitigate skill atrophy by:
- Treating AI as a learning tool: Ask for code and explanations. Prompt it with conceptual questions. “Use it to understand the ‘why’ behind the code, not just the ‘what,’” he advised.
- Verifying and refactoring: “Never trust AI-generated code implicitly.” Always take the time to read, understand, and test it. Oftentimes, the best learning comes from debugging or improving AI-provided code.
- Maintaining independent thought: Use AI to augment the workflow, not replace the thinking process. “The goal is to remain the architect of the solution, with the AI acting as a highly efficient assistant,” said Mayham.
AI-driven productivity is not a substitute for “genuine competence,” especially in high-stakes, safety-critical systems, he noted. Developers must be intentional and disciplined in how they adopt tools to ensure they’re continually building skills, “not eroding them.” The successful ones won’t just offload their work to AI; they’ll use it to ask better questions, explore new concepts, and challenge their own understanding.
“The risk of skill atrophy is real, but it’s not inevitable. It’s a choice,” said Mayham. “The developers who will thrive are those who treat AI as a Socratic partner for learning, not a black box for delegation.”
Original Link: https://www.infoworld.com/article/4125231/ai-use-may-speed-code-generation-but-developers-skills-suffer.html
Originally Posted: Sat, 31 Jan 2026 01:03:41 +0000