
Local coding assistants

Published: Apr 02, 2025

Hold

Organizations remain wary of third-party AI coding assistants, particularly due to concerns about code confidentiality. As a result, many developers are considering local coding assistants — AI that runs entirely on their machines — eliminating the need to send code to external servers. However, local assistants still lag behind their cloud-based counterparts, which rely on larger, more capable models. Even on high-end developer machines, smaller models remain limited: we've found that they struggle with complex prompts, lack the context window needed for larger problems and often cannot trigger tool integrations or function calls. These capabilities are especially essential to agentic workflows, which are at the cutting edge of coding assistance right now.

So while we recommend proceeding with low expectations, some capabilities are viable locally. Some popular IDEs now embed smaller models into their core features, such as Xcode's predictive code completion and JetBrains' full-line code completion. And locally runnable LLMs like Qwen Coder are a step forward for local inline suggestions and handling simple coding queries. You can test these capabilities with Continue, which supports the integration of local models via runtimes like Ollama.
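As an illustration, a minimal Continue configuration that points both chat and autocomplete at local Ollama models might look like the following sketch. The model names and tags here are assumptions; substitute whatever you have pulled locally with Ollama:

```json
{
  "models": [
    {
      "title": "Qwen Coder (local)",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen Coder autocomplete (local)",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

With the Ollama runtime running and the referenced models pulled, Continue routes requests to the local endpoint rather than a cloud service, so no code leaves the machine. A smaller model for autocomplete and a larger one for chat is a common trade-off between latency and capability.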
