
Macro trends in the tech industry | April 2025

Another edition of the Technology Radar is out, and with it comes our expanded view into the macro trends that informed our discussions during the Radar meeting, along with observations from the broader technology landscape. Mike Mason, our Chief AI Officer, who previously authored this series, took some time from his busy schedule and is back as a co-author for this piece. He's pairing with Will Amaral, the current Tech Radar product owner, to provide extra insights beyond the current Radar edition.

The AI buzz isn’t slowing down, and “vibe coding” is the new frontier

 

Excitement around AI remains strong, with new AI capabilities and use cases announced seemingly every week. We previously mentioned the exponential growth of AI-related tools, both for general use and for everyday software engineering. In the past six to twelve months, AI-powered coding assistants have moved beyond basic autocomplete; modern tools can now handle complex refactoring, understand entire codebases and even execute commands. In the last edition of the Radar, we noted the emergence of “agentic” coding assistants — essentially AI programmers that undertake multi-step coding tasks based on high-level prompts — and this trend continues at pace today. Early products like Cursor, Cline and Windsurf lead the way in integrating these features into the IDE, and dozens of companies now promise an “agentic” software development solution.

 

While all of this sounds promising, it's important to note that these tools work in a supervised fashion — the human developer stays “in the loop,” guiding the AI and approving its actions. A recent example is “vibe coding” — a relaxed workflow where developers casually instruct the AI through voice or chat prompts. The concept is appealing for its speed and casual nature, making it particularly suitable for quick projects. However, the term quickly went viral, with some companies and startups claiming to use “vibe coding” exclusively, even for critical production code. This sparked discussions about responsible AI use, reinforcing the necessity of developer judgment and thorough code review in AI-assisted workflows. We remain skeptical of claims that software developers will be 100% replaced by AI: in our own experiments, Claude Code saved us 97% of the effort on its first try, then fell flat on its face for the next two attempts.

Enterprise intelligence: The AI-ification of the enterprise

 

AI is steadily weaving itself into the fabric of enterprise operations — not just as a tool to automate tasks, but as something that could fundamentally shift how organizations make decisions, manage risk and connect with customers. We’re not all the way there yet, and transformation is still uneven. But more and more companies are starting to see the outlines of a future where AI isn’t a layer on top of the business — it’s baked into its core.

 

That raises an important question: not whether AI becomes foundational infrastructure, but how we prepare for that without getting caught flat-footed.

 

As this shift unfolds, quality assurance and governance are becoming more complex and more urgent. Traditional QA practices weren’t built to handle things like model drift, hallucinations or unpredictable behavior. So we’re seeing engineering teams begin to adopt model observability tools, eval frameworks and AI-specific testing practices — especially in industries where the cost of getting it wrong is high.
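To make the idea of AI-specific testing concrete, here is a minimal sketch of an eval harness: a set of prompt-plus-checker cases run against a model, producing a pass rate that can be tracked per release. All names here (`run_evals`, `fake_model`, `EvalCase`) are illustrative, not part of any real eval framework, and the model is a deterministic stub standing in for an LLM call.

```python
# Minimal eval-harness sketch: illustrative only, not a real framework's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # returns True if the response is acceptable

def fake_model(prompt: str) -> str:
    # Deterministic stand-in for a real LLM call, so the sketch is runnable.
    return "Paris" if "capital of France" in prompt else "I don't know"

def run_evals(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Run every case and return the pass rate — a number to track per release."""
    passed = sum(1 for c in cases if c.check(model(c.prompt)))
    return passed / len(cases)

cases = [
    EvalCase("What is the capital of France?", lambda r: "Paris" in r),
    # A question the model shouldn't know — it should hedge, not hallucinate.
    EvalCase("What is the capital of Freedonia?",
             lambda r: "don't know" in r.lower()),
]
print(run_evals(fake_model, cases))  # 1.0 for this stub
```

Real harnesses score with fuzzier checks (LLM-as-judge, embedding similarity), but the shape — versioned cases, a scored run, a regression gate — is the same.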

 

One of the trickier challenges emerging is what you might call “AI as shadow IT.” Individual teams are spinning up their own tools — sometimes open-source, sometimes SaaS — without going through official channels. It’s easy to see why: these tools are accessible, powerful and often solve real problems. But they also introduce risk — creating a patchwork of AI usage with little oversight or consistency. Some enterprises are starting to respond with lightweight registries, usage tracking and flexible policy frameworks to get ahead of it. It’s still early, but the intent is clear: enable innovation without losing the thread on governance.
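A "lightweight registry" can be as small as a single table of who is using what. The sketch below is hypothetical — the field names and the `ToolRegistry` type are ours, not any governance product — but it shows the core governance query: what is in use without sign-off?

```python
# Illustrative sketch of a lightweight AI-tool registry — hypothetical, not a product.
from dataclasses import dataclass, field

@dataclass
class AiTool:
    name: str
    owner_team: str
    kind: str            # e.g. "open-source", "SaaS"
    approved: bool = False

@dataclass
class ToolRegistry:
    tools: dict[str, AiTool] = field(default_factory=dict)

    def register(self, tool: AiTool) -> None:
        self.tools[tool.name] = tool

    def unapproved(self) -> list[str]:
        # The governance view: tools in use without official sign-off.
        return [t.name for t in self.tools.values() if not t.approved]

registry = ToolRegistry()
registry.register(AiTool("llm-summarizer", "marketing", "SaaS"))
registry.register(AiTool("local-rag", "platform", "open-source", approved=True))
print(registry.unapproved())  # ['llm-summarizer']
```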

 

There’s also a bigger, less talked-about shift happening: AI is starting to reshape how organizations are designed. This isn’t just about doing more, faster — it’s about changing who does what, how decisions get made, and where accountability sits. Roles are blurring. Assumptions about trust and authority are being tested. And it’s not just a tech issue — it touches leadership, HR and governance, too. Most companies aren’t quite ready for how deep this could go.

 

At the team level, AI is prompting developers and designers to step back and ask: are we building for humans, or building for machines? As AI tooling gets better — code generation, design suggestions, automation — it’s easy to default to speed. But some teams are pushing back, re-centering on product thinking and UX to make sure what we’re building remains meaningful and sustainable. AI can accelerate delivery, but it shouldn’t come at the cost of clarity or care.

 

The “AI-ification” of the enterprise isn’t a tidal wave. It’s more like a rising tide — quiet, persistent and shaping everything in its path. The organizations that adapt well won’t just adopt new tools. They’ll ask bigger questions — about structure, capability and trust — and use those answers to steer with intention.

Observability keeps complexity in check

 

Modern software systems are highly distributed and increasingly infused with AI components, making observability more critical (and more challenging) than ever. This edition of the Radar highlights a wave of innovation in the observability space aimed at keeping up with this complexity. First, as observability becomes increasingly important, much-needed standards are gaining traction. OpenTelemetry adoption got a significant boost; it's now one of CNCF's fastest-growing projects, with contributions from over 200 organizations. It fosters a vendor-neutral ecosystem and — with the support of tools such as Alloy, Tempo and Loki — gives developers a wide range of choices and flexibility.

 

Another driving force behind observability is AI, of course. Observability for AI and LLMs is a focal point with unique challenges. Tracking metrics and logs isn’t enough to detect model drift, prompt failures and hallucinations. In response, new platforms such as Arize Phoenix, Helicone and Humanloop have emerged to trace and evaluate LLM calls. These tools record prompts, track model responses and help diagnose quality issues. As teams operationalize AI, this visibility is vital for trust and reliability — much like APM (application performance monitoring) was vital for microservices.
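The core mechanic these platforms share is simple: wrap every model call, record the prompt, the response and the latency, and keep the records for later evaluation. The sketch below shows that shape in plain Python — it is not the API of Arize Phoenix, Helicone or Humanloop, just a stdlib illustration of what "tracing an LLM call" means.

```python
# Stdlib-only sketch of LLM call tracing — the idea behind LLM observability
# tools, not any real tool's API.
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class LlmTrace:
    prompt: str
    response: str
    latency_s: float

@dataclass
class Tracer:
    traces: list[LlmTrace] = field(default_factory=list)

    def traced_call(self, model: Callable[[str], str], prompt: str) -> str:
        """Call the model, but record prompt, response and latency as a trace."""
        start = time.perf_counter()
        response = model(prompt)
        self.traces.append(LlmTrace(prompt, response, time.perf_counter() - start))
        return response

def stub_model(prompt: str) -> str:
    return f"echo: {prompt}"  # stand-in for a real model call

tracer = Tracer()
tracer.traced_call(stub_model, "hello")
print(len(tracer.traces), tracer.traces[0].response)
```

With traces captured, quality checks (drift detection, hallucination flags, eval scores) run over the recorded data rather than live traffic.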

 

AI's influence on observability runs both ways: AI assistance is also being integrated into observability tools themselves. Given the massive scale of telemetry data (logs, metrics, traces) in cloud applications, operators increasingly rely on AI to detect anomalies and pinpoint issues faster than humans could. Major monitoring platforms now embed machine learning for anomaly detection, alert correlation and root-cause analysis; Weights & Biases' “Weave” is one example.
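The simplest version of automated anomaly detection is a statistical baseline: flag any metric value that deviates too far from the series' mean. The toy detector below (our own names and a deliberately low threshold, since a single large spike drags the standard deviation up) is a stand-in for the far more sophisticated models commercial platforms embed, but it shows the basic mechanism.

```python
# Toy z-score anomaly detector over a metric series — a stdlib stand-in for
# the ML-based detection built into commercial monitoring platforms.
from statistics import mean, stdev

def anomalies(series: list[float], threshold: float = 2.5) -> list[int]:
    """Return indices whose value deviates more than `threshold` sigmas from the mean."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []  # flat series: nothing can be anomalous
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

latencies_ms = [102, 98, 101, 99, 100, 97, 103, 480, 101, 99]  # one spike
print(anomalies(latencies_ms))  # [7] — the 480 ms outlier
```

Production systems layer on seasonality, alert correlation and learned baselines, but they answer the same question: which data points don't fit the pattern?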

Beyond the AI spotlight

 

It’s easy to get swept up in the excitement around AI — every week brings a new headline, a new breakthrough, or a bold prediction. But some of the most meaningful progress is happening in what we might call “traditional” software development. AI still hasn’t cracked some of our biggest day-to-day frustrations — like the persistent quirks of cross-platform frameworks. And that’s where we’re seeing familiar tools quietly evolve in powerful ways.

 

Take command-line interfaces (CLIs), for example. Even with the rise of polished GUIs, chat-based tooling, and auto-magic everything, CLI tools are not just sticking around — they're thriving. Developers keep coming back to them for their speed, control, and transparency. And with modern tools like uv and MarkItDown, we’re seeing a fresh generation of CLIs that feel both sophisticated and refreshingly simple. They’re proof that the command line isn’t fading into the background — it’s adapting to stay essential.

 

We're also seeing some interesting shifts in programming languages. While newer entrants like Gleam are starting to gain traction, others like Swift are expanding their reach well beyond their original ecosystems. Swift, in particular, is carving out a role in resource-constrained environments — an area where performance, reliability, and memory safety matter more than ever. It’s a good reminder that developers are actively seeking out tools that balance modern safety features with real-world efficiency.

Solid ground in a shifting landscape

 

While AI dominates headlines and tooling — appearing in everything from code assistants to ops platforms — its ubiquity has thrown the spotlight back on the fundamentals: data quality and reliable systems. Without high-quality, well-managed data, even the most powerful AI models falter. And the core of software, ultimately, is still about how we store, manipulate and transform data into value.

 

In our conversations, a consistent theme emerged: organizations and researchers are rethinking how they manage, serve, and retrieve data. Retrieval-augmented generation (RAG) techniques are evolving fast, because effective retrieval is the bridge between general-purpose models and organization-specific intelligence. A massive model with outdated or irrelevant context is often less useful than a smaller one with timely, high-quality data. The frontier now includes improving relevance, traceability, and explainability to make RAG more reliable and transparent.
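At its core, RAG's retrieval step scores documents against a query and prepends the best matches to the prompt. The sketch below uses naive word overlap purely for illustration — real systems use embeddings and vector search, and the function names here are ours — but the shape of the pipeline is the same.

```python
# Minimal sketch of the retrieval step in RAG. Word-overlap scoring is a toy
# stand-in for embedding similarity; the pipeline shape is what matters.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model in retrieved context instead of relying on its weights.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The refund policy allows returns within 30 days.",
    "Our offices are closed on public holidays.",
    "Refunds are issued to the original payment method.",
]
print(build_prompt("what is the refund policy", docs))
```

The frontier work the text mentions — relevance, traceability, explainability — lives in that `retrieve` step: better ranking, and keeping a record of *which* documents shaped each answer.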

 

But these advances mean little if the underlying data isn't cared for. Scaling AI and analytics demands a solid data foundation. Increasingly, teams are treating data not as a backend artifact, but as a first-class product — with clear ownership, quality standards, documentation, and a focus on usability. This “data product thinking” draws from concepts like data mesh, where domain teams are responsible for curating and maintaining discoverable, interoperable data assets.

 

In practice, a data product might be a customer 360 dataset, a risk-scoring pipeline or an internal dashboard — something designed, versioned and supported just like any software product. It has customers, provides value and evolves over time.
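"Data as a product" is as much about metadata as data: ownership, versioning and quality standards made explicit. The descriptor below is a sketch with fields we chose for illustration — there is no single standard schema — showing what treating a customer 360 dataset as a versioned, owned product might look like.

```python
# Sketch of "data as a product" metadata — illustrative fields, not a standard.
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owner: str                # the accountable domain team
    version: str              # data products are versioned like software
    description: str
    quality_checks: list[str] = field(default_factory=list)

customer_360 = DataProduct(
    name="customer-360",
    owner="crm-domain-team",
    version="2.1.0",
    description="Unified view of customer interactions across channels.",
    quality_checks=["no duplicate customer ids", "freshness < 24h"],
)
print(customer_360.owner, customer_360.version)
```

In a data mesh setup, descriptors like this feed a catalog so other teams can discover the product and see who stands behind it.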

 

The message is clear: embrace the new, but don’t neglect the foundations. The next era of software won’t be built by AI alone — it will be shaped by teams that combine human creativity, machine intelligence, and strong engineering discipline. That’s where the real leverage lies.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
