How AI Impacts Skill Formation: Evidence from Software Development Learning

AI assistance promises significant productivity gains for developers, particularly novices learning new skills. Yet a groundbreaking study reveals a concerning trade-off: while AI can boost immediate performance, it may undermine the very skills needed to supervise AI-generated code effectively.

The Learning vs. Productivity Dilemma

Researchers conducted randomized experiments with 52 professional developers learning the Python Trio library—a relatively new asynchronous programming framework. Half the participants completed coding tasks with AI assistance, while the control group worked without AI support.

The results challenge common assumptions about AI’s role in skill development. Participants using AI assistance scored 17% lower on a comprehensive skills evaluation—equivalent to two full grade points—and the gap held across conceptual understanding, code reading, and debugging.

Surprisingly, AI assistance didn’t deliver the expected productivity boost. While some participants completed tasks faster with AI, the average completion time showed no significant improvement. The reason? Many developers spent substantial time crafting queries and understanding AI responses—up to 11 minutes of a 35-minute task.

Six Patterns of AI Interaction

Through detailed analysis of screen recordings, researchers identified six distinct ways developers interact with AI:

Low-Scoring Patterns (24-39% quiz scores):

  • AI Delegation: Asking AI to generate complete code solutions without engagement
  • Progressive AI Reliance: Starting independently but increasingly delegating to AI
  • Iterative AI Debugging: Repeatedly asking AI to fix problems without understanding root causes

High-Scoring Patterns (65-86% quiz scores):

  • Conceptual Inquiry: Asking only conceptual questions while coding independently
  • Hybrid Code-Explanation: Requesting both code generation and explanations
  • Generation-Then-Comprehension: Using AI for code, then asking follow-up questions to understand the solution

The high-scoring patterns share a crucial characteristic: cognitive engagement. Developers who remained actively involved in understanding concepts preserved their learning outcomes.

The Error Advantage

A key finding explains why the control group learned more effectively. Developers without AI encountered significantly more errors—a median of three errors versus one for the AI group. These errors, particularly those specific to the Trio library, forced deeper engagement with core concepts.

The control group encountered more “TypeError” and “RuntimeWarning” errors that required understanding asynchronous programming fundamentals. Resolving these errors independently built the exact skills tested in the evaluation.
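The class of error described above is easy to reproduce. The sketch below uses the standard library’s asyncio rather than Trio (since Trio is a third-party package), but the mistake is the same: forgetting `await` leaves you holding a coroutine object instead of its result, which surfaces as a `TypeError` the moment you try to use it.

```python
import asyncio

async def fetch_data() -> int:
    await asyncio.sleep(0)
    return 42

async def buggy() -> int:
    # Bug: fetch_data() without `await` returns a coroutine object, not 42.
    return 10 + fetch_data()  # TypeError: unsupported operand type(s)

async def fixed() -> int:
    return 10 + await fetch_data()

try:
    asyncio.run(buggy())
except TypeError as exc:
    print("TypeError:", exc)

print(asyncio.run(fixed()))  # 52
```

Diagnosing this kind of failure—reading the traceback back to a missing `await`—is exactly the understanding the error-driven control group was forced to build.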

Implications for Professional Development

The study reveals a fundamental tension in AI-assisted learning. Junior developers who rely heavily on AI to complete unfamiliar tasks may compromise their skill acquisition. This creates a dangerous cycle: as AI systems become more capable, humans need stronger skills to supervise them effectively, yet AI usage may prevent developing those very skills.

The debugging skills gap proves particularly concerning. Participants using AI scored lowest on debugging questions—the exact capability needed to validate AI-generated code in production environments.

Recommendations for Developers

Based on the research, developers learning new skills should:

  1. Ask conceptual questions first before requesting code generation
  2. Request explanations alongside any AI-generated code
  3. Embrace errors as learning opportunities rather than immediately seeking AI fixes
  4. Maintain cognitive engagement by understanding rather than just copying AI solutions
  5. Balance efficiency with learning by allocating time for comprehension

The Path Forward

AI assistance isn’t inherently harmful to skill development—the key lies in how it’s used. The most successful participants treated AI as a knowledgeable colleague rather than a replacement for thinking. They asked probing questions, sought explanations, and remained actively engaged in problem-solving.

Organizations implementing AI coding tools should consider these findings when training junior developers. While AI can accelerate certain tasks, preserving the learning process requires intentional effort and cognitive engagement.

The future of human-AI collaboration in software development depends on finding the right balance: leveraging AI’s capabilities while maintaining the human expertise needed to guide, validate, and improve AI-generated solutions.