We’ve got another Technology Radar completed, and with that comes another chance to expand on the broader macro trends that informed our discussions during the Radar meeting, along with ideas from the wider Thoughtworks community. Mike Mason, the author of the previous editions of this piece, has a new day job as our Chief AI Officer and — as you can imagine — has his hands quite full right now. Rebecca Parsons is focusing more on our industry tech leadership, having taken on the role of CTO Emerita, handing the CTO role over to Rachel Laycock. As a result, for this macro trends piece, Mike and Rebecca are pairing; Rebecca will take the series on going forward.
New infrastructure automation approaches emerging
While the idea of Infrastructure as Code (IaC) has been around long enough for the third edition of the book to be underway, we have new tools and even entirely new approaches to handling infrastructure. Architecture as Code (AaC) and Infrastructure from Code (IfC) are techniques arising in part from new architectural capabilities that are becoming more prevalent in cloud native systems. Tools such as Checkov and Orca, along with techniques such as platform orchestration, are new entrants here that came up in our Radar discussions.
Composable architecture
In other areas of architecture, the concept of composable architecture is becoming increasingly relevant, particularly in the context of retail and financial services applications. Composable architecture supports something of a best-of-breed approach to core systems, such as banking systems. Vendors are embracing this architectural style, resulting in several options for organizations, including the MACH Alliance for eCommerce and Mambu, 10x and Thought Machine for core banking. This approach provides a fresh choice for modernization and is now being heavily adopted.
Spatial computing shaping up
We’ve previously described the XR industry as “waiting for its Apple moment.” When it finally arrived (in June 2023) it was, well, kind of a mixed bag. Apple revealed advances in headset technology, a decision to move the battery off the headset onto a ‘puck’ (with possible future expansion options), lots of glossy adware, and an eye-watering “professionals only” price. But it’s here and XR/spatial computing continues its journey of industry adoption. The reduced hype surrounding this technology is probably a good thing; it means organizations have some breathing room to get real value out of XR and build solid, useful experiences rather than force-fitting the technology.
We’ve also seen a surge of interest in “AI plus XR,” because these two technologies could combine to create compelling customer experiences. Generative AI could quite literally be a way to ‘dream’ new worlds into existence, providing much-needed content for XR.
Governance of citizen developer created apps
With the rise of AI-assisted software development and the increased potential of low-code and no-code platforms, more development may end up in the hands of non-developers or less experienced developers. With AI-assisted software development in particular, we really don’t know how this will play out in actual production situations, and the accelerated pace of AI-created software may mean there’s significantly more software being put into production. Thus, we believe that IT departments will want to think about both changing their governance processes and considering whether the mix of roles, or even the skills within their teams, needs to change.
Green software
With all the attention the climate crisis is getting (the storms, the fires and the record temperatures, to name a few), you would think green software engineering would be all the rage. Thoughtworks open sourced the Cloud Carbon Footprint tool a couple of years ago to help organizations understand their cloud spend and find ways to reduce the carbon footprint of their cloud usage. Thoughtworks is also one of the founding members of the Green Software Foundation, which is creating tools to help developers write more energy-efficient software.
While we’ve been involved in this space for quite some time, we have noticed, at least anecdotally, a broader increase in interest in green computing, which is encouraging. This is further evidenced by the publication of over 20 papers on greening software on arxiv.org this year (2023) alone.
Given the additional pressures of the energy requirements for cryptocurrencies and for training large language models, the need to take energy usage into account is increasing. That said, like everything else, there are tradeoffs. If you are only going to run something once, it is perhaps not sensible to spend time optimizing it for energy. Some of the tradeoffs are even more subtle. We know, for example, that continuous integration and continuous delivery give developers rapid feedback on how well the software is working and where potential bugs might lie. We also know that the shorter the feedback cycle, the easier it is to fix bugs. However, each time the build runs, we use energy. Each team must discover for itself the balance point between the energy saved by running fewer builds and the longer feedback cycles that make bugs harder to find.
However, accurately measuring and quantifying energy usage continues to be a challenge. There are several tools (including open-source ones such as Cloud Carbon Footprint) and metrics, such as the Green Index (GI) and Software Carbon Intensity (SCI), that measure the greenness of software. Tradeoff decisions can be informed by this data.
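To make the metrics concrete: the Green Software Foundation defines SCI as operational emissions (energy consumed times grid carbon intensity) plus amortized embodied hardware emissions, divided by a functional unit of your choosing. A minimal sketch of that calculation, with purely illustrative numbers rather than real measurements:

```python
def software_carbon_intensity(energy_kwh, grid_intensity_gco2_per_kwh,
                              embodied_gco2, functional_units):
    """Software Carbon Intensity per the Green Software Foundation spec:
    SCI = ((E * I) + M) per R, where
      E = energy consumed by the software (kWh)
      I = location-based carbon intensity of the grid (gCO2eq/kWh)
      M = embodied hardware emissions, amortized over the period (gCO2eq)
      R = functional unit (e.g. API calls, users, builds)
    """
    operational = energy_kwh * grid_intensity_gco2_per_kwh
    return (operational + embodied_gco2) / functional_units

# Illustrative daily figures: 1.2 kWh of energy, a 400 gCO2eq/kWh grid,
# 50 gCO2eq of amortized hardware emissions, 10,000 API calls served.
sci = software_carbon_intensity(1.2, 400, 50, 10_000)
print(f"{sci:.3f} gCO2eq per API call")
```

Because SCI is a rate per functional unit, it rewards genuine efficiency improvements rather than simply doing less work, which is what makes it useful for informing the build-frequency tradeoffs above.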
Productivity measurement
The issue of developer productivity has been bandied about since the earliest days of computing. Productivity tools have been touted as the solution to the problem of not enough developers. Low-code, no-code, CORBA, CASE tools, third- and fourth-generation languages: the list is long. Of course, joining the list is AI-assisted software development, which we discuss below. All these attempts to increase productivity have bumped up against the problem that there is no universally agreed definition of developer productivity. Lines of code, story points and other activity metrics don’t get at value. Furthermore, many other activities, such as testing, analysis and design, go into creating a piece of software, and using AI to churn out reams of code may not actually help you go faster.
We explore this issue in more detail in Ken Mugrage’s article about embracing complexity. Suffice it to say, even with all the desire to measure developer productivity, we still believe that team productivity is more important than individual productivity, and that value delivered is the ultimate goal.
Monolith revivalism
We’ve been enthralled as an industry with microservices for quite some time, and admittedly with good reason. However, it has always been the case that using microservices involves increased implementation complexity and requires greater development and operational maturity, as outlined in Martin Fowler’s article on the subject. Recently, though, we’ve seen a revival of interest in monoliths, and in frameworks such as Spring Modulith that help support the creation of a well-structured monolith. As many of us have been asserting for years, a monolith can deliver many of the benefits of microservices, although not, for example, their independent scaling capabilities. It requires more discipline, however, to maintain the well-structuredness, if you will, of a monolith. As with all architectural decisions, “it depends” is still the best answer. Architects and developers need to consider the organization's maturity, their understanding of the domain and the desired scaling characteristics, to name just a few of the potential contributors to the ultimate decision of 'to monolith or to microservice?’
AI-augmented software development
This edition of the Radar includes more than two dozen AI-related technologies, many of which are to do with the craft of software development. The sheer number of proposed blips forced us to group some of them when usually we prefer to include each item distinctly. This might ease over time, but for now there’s a lot to make sense of and the landscape changes rapidly.
Code generation is the most obvious use case, with tools including GitHub Copilot, Codeium and Tabnine. There is intense competition in this space, and we expect to see IDE-based code generation become a standard part of the developer experience. It’s not just proprietary tools here, either, with open-source models becoming an option especially for organizations concerned with maintaining strict security over their codebases.
But coding is just one of the activities that teams perform in order to create software. We see great promise in AI across the entire development process and many of our teams are actively experimenting with “AI beyond code generation” for requirements analysis, searching a team or company knowledge base, or for understanding legacy code bases.
GenAI does people things, not software things
There’s been a lot of momentum in attempting to apply generative AI (usually ChatGPT) to an increasingly wide range of tasks. By its nature, GenAI gives a different answer even if you ask the same question over and over, making it less like traditional software, where we expect determinism and reliability, but well suited to ‘human’ tasks such as writing, analysis and chatting. This unpredictability makes GenAI worrisome in some situations, but it’s great in low-risk cases where failure isn’t a problem or can be picked up by a human in the loop. For example, an AI chatbot can now be trained on years of historical customer interactions and do a good job of understanding customer needs and being helpful. Customers know they’re talking to an AI, so they can always escalate their interaction to a regular customer service agent if they wish. Risk-averse organizations might prefer not to let an AI talk to their customers unchecked, but an AI that helps an agent analyze a customer situation and compose responses can still deliver significant value.
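The nondeterminism here isn’t a bug: LLMs assign probabilities to candidate next tokens and sample from that distribution, with a “temperature” setting controlling how adventurous the draw is. A schematic sketch of that mechanism, using toy scores rather than a real model’s output:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax-with-temperature sampling, the mechanism behind GenAI's
    varying answers. Temperature near 0 approaches greedy (deterministic)
    decoding; higher values spread probability across more tokens."""
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits.keys()), weights=probs, k=1)[0]

# Toy "model scores" for three candidate tokens (not from a real model).
vocab_logits = {"yes": 2.0, "maybe": 1.5, "no": 0.5}
random.seed(0)
answers = {sample_next_token(vocab_logits, temperature=1.2) for _ in range(20)}
# At a nonzero temperature, repeated calls can return different tokens;
# at temperature close to 0, the top-scoring token wins almost every time.
```

This is why the same prompt yields different answers on different runs, and why lowering temperature (or adding a human in the loop) is the usual lever when more predictability is needed.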
Tools and models that underpin GenAI
Generative AI became popular through large language models (LLMs), the type of machine learning model that underpins ChatGPT and other high-profile AI chatbots. We’re now seeing a Cambrian explosion of LLMs, each created using different philosophies and training methods. Many of these are openly available and usable directly by developers, provided they have the hardware to run them. We’re also seeing other techniques, such as small language models (SLMs), which can perform well on narrow tasks. A related technology, vector databases, can be used to extend the functionality of a pre-built LLM, creating customized AI without the expense of training a model from scratch. We’re even seeing vector databases included as an option within existing platforms such as Postgres.
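The pattern behind extending an LLM this way is retrieval: your documents are stored as embedding vectors, a query is embedded the same way, and the nearest stored vectors are handed to the model as context. A minimal in-memory sketch, with tiny hand-made vectors standing in for real model-generated embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

class ToyVectorStore:
    """An in-memory stand-in for a vector database. Real systems (pgvector
    in Postgres, dedicated vector databases) add indexing so the nearest
    neighbors can be found approximately, at scale."""

    def __init__(self):
        self.items = []  # list of (text, embedding) pairs

    def add(self, text, embedding):
        self.items.append((text, embedding))

    def query(self, embedding, k=1):
        # Rank all stored documents by similarity to the query vector.
        ranked = sorted(self.items,
                        key=lambda item: cosine_similarity(item[1], embedding),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

# Hand-made toy embeddings; in practice these come from an embedding model.
store = ToyVectorStore()
store.add("refund policy: 30 days", [0.9, 0.1, 0.0])
store.add("shipping times: 3-5 days", [0.1, 0.9, 0.1])
print(store.query([0.85, 0.2, 0.0]))  # the refund document is the closest match
```

The retrieved text is then prepended to the LLM prompt, which is how a generic pre-built model answers questions about content it was never trained on.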
Ethical concerns from the rush to AI
While there’s significant positivity in the technology community around AI, there are concerns too. Generative models are trained (for the most part) on content scraped from the internet, including copyrighted or otherwise protected words, images and video. Many artists are concerned about diffusion models plagiarizing their work, and there are active lawsuits against some of the AI model vendors. We won’t know the results until this plays out in court, but even if it ends up legally acceptable to scrape content into a model and regurgitate variations of it, there’s a question of whether it’s ethical to do so. We think organizations need to think carefully about the ethics surrounding their use of AI. As is often the case, “doing the right thing” by your customers, as well as all the other stakeholders, will likely be a better guide than the specifics of the laws, which evolve far more slowly.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.