
Complacency with AI-generated code

Last updated: Apr 02, 2025

Apr 2025
Hold

As AI coding assistants continue to gain traction, so does the body of data and research highlighting concerns about complacency with AI-generated code. GitClear's latest code quality research shows that in 2024, duplicate code and code churn increased even more than predicted, while refactoring activity in commit histories declined. Also reflecting AI complacency, Microsoft research on knowledge workers found that AI-driven confidence often comes at the expense of critical thinking — a pattern we've observed as complacency sets in with prolonged use of coding assistants. The rise of supervised software engineering agents further amplifies the risks: when AI generates larger and larger change sets, developers face greater challenges in reviewing the results. The emergence of "vibe coding" — where developers let AI generate code with minimal review — illustrates this growing trust in AI-generated output. While that approach can be appropriate for prototypes and other throwaway code, we strongly caution against using it for production code.

Oct 2024
Hold

AI coding assistants like GitHub Copilot and Tabnine have become very popular. According to StackOverflow's 2024 developer survey, "72% of all respondents are favorable or very favorable of AI tools for development". While we also see their benefits, we're wary about the medium- to long-term impact on code quality and caution developers against complacency with AI-generated code. It's all too tempting to be less vigilant when reviewing AI suggestions after a few positive experiences with an assistant. Studies like this one by GitClear show a trend of faster-growing codebases, which we suspect coincides with larger pull requests. And this study by GitHub has us wondering whether the reported 15% increase in pull request merge rate is actually a good thing, or whether people are merging larger pull requests faster because they trust the AI's results too much. We stand by the basic "getting started" advice we gave over a year ago: beware of automation bias, sunk cost fallacy, anchoring bias and review fatigue. We also recommend that programmers develop a good mental framework for when and where not to use or trust AI.

Published: Oct 23, 2024
