Whenever we publish a Technology Radar, I take a look at broader trends within the technology industry that don’t necessarily make it onto the radar as blips or themes. Creating the Radar is an excellent chance to dissect what’s going on in the software world; here’s the latest.
FinOps, GreenOps and sustainability
As more organizations finally complete their moves into the cloud, many are asking themselves when the promised cost savings will materialize. Unfortunately, it's quite common for IT expenditure to increase in the cloud, especially when a simplistic "lift and shift" strategy is used. It's easy to end up with "cloud sprawl" because, unlike in a traditional physical data center, there's nothing preventing you from spinning up more instances, using more servers, creating more environments and running up that cloud bill at the end of the month.
FinOps is an approach that fosters optimal cloud usage by giving product teams visibility into and responsibility for their own cloud expenditure. Just as DevOps taught us that operability improves when development teams are responsible for operating their software, FinOps works on the hypothesis that cloud spend is best controlled when teams are responsible for managing development, infrastructure and financial trade-offs — with the right support. A key insight of FinOps is that to maximize return on investment, we need to optimize for unit cost rather than overall cost. Broad "cost cutting" drives from infrastructure or IT departments aren't effective because those departments aren't in a position to understand which parts of the spend are valuable and which aren't. Cloud FinOps from O'Reilly is the definitive book on the topic, and the FinOps Foundation offers training and certification for those looking to take things further.
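To make the unit-cost idea concrete, here's a toy sketch in Python. All of the figures are made up; the point is the metric a FinOps-minded team would track instead of the headline bill:

```python
# Illustrative only: unit cost (spend per business transaction) is a more
# useful FinOps signal than the raw monthly bill. All figures are invented.

monthly = [
    # (month, cloud_spend_usd, orders_processed)
    ("2022-08", 42_000, 1_100_000),
    ("2022-09", 48_000, 1_400_000),  # the bill went up...
]

for month, spend, orders in monthly:
    unit_cost = spend / orders
    print(f"{month}: total ${spend:,} | ${unit_cost:.4f} per order")

# September's bill is higher, but cost per order fell from ~$0.0382 to
# ~$0.0343 -- the spend became more efficient, a trade-off a broad
# cost-cutting drive would miss entirely.
```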
One of my colleagues once described cloud computing as "a fancy way to charge for electricity," which points to an interesting overlap: optimizing cloud usage tends to reduce emissions and improve sustainability numbers. It's often easier to get people interested in FinOps through a sustainability argument than through the dollar figures alone, and that overlap leads to the related field of GreenOps.
The carbon efficiency of cloud environments is becoming a hot topic, with most cloud vendors now providing a "carbon dashboard" to give customers a better idea of how their usage relates to emissions. Cloud Carbon Footprint, an open-source tool from Thoughtworks, lets you track emissions across platforms, which matters because most organizations use more than one cloud vendor these days. Tools like this allow developers to see their carbon footprint and observe how it changes as they tune their software or change their architecture. Since it can be difficult to know where to start making improvements, initiatives like the Principles of Green Software Engineering can help.
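To give a feel for what sits behind such dashboards, here's a deliberately simplified sketch of the kind of estimate these tools make: energy consumed, scaled by data center overhead (PUE) and the local grid's carbon intensity. The coefficients below are illustrative placeholders, not any tool's published values:

```python
# A simplified sketch of operational-emissions estimation: energy used,
# scaled by data center overhead (PUE), multiplied by the grid's carbon
# intensity. All coefficients are illustrative placeholders.

def estimate_emissions_kg(vcpu_hours: float,
                          watts_per_vcpu: float = 3.5,     # assumed average draw
                          pue: float = 1.2,                # data center overhead
                          grid_gco2e_per_kwh: float = 400.0) -> float:
    energy_kwh = vcpu_hours * watts_per_vcpu / 1000.0
    return energy_kwh * pue * grid_gco2e_per_kwh / 1000.0  # kg CO2e

# 10,000 vCPU-hours in a month under these assumptions:
print(f"{estimate_emissions_kg(10_000):.1f} kg CO2e")
```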
SQL is dead, long live SQL
Invented in the 1970s, SQL has been an industry mainstay for nearly half a century. Widely used by programmers, business analysts and data scientists, it’s a tool that just gets the job done. Despite healthy competition from “NoSQL” databases, SQL continues to ride relational stores into practically every corner of every enterprise. Not only that, SQL as a domain-specific language for working with data has proven robust and flexible enough to find new niches in the advanced tooling supporting analytics and machine learning applications.
While creating this edition of the Radar we noted just how many times SQL came up as a tool for working with data. dbt is the poster child here, offering an integrated workflow for data transformation and analytics. This edition features dbtvault, a set of dbt Jinja templates that conform to the Data Vault 2.0 model and methodology. We also recently covered SQLFluff, a cross-dialect SQL linter that's easy to incorporate into a CI/CD pipeline.
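As a small taste, SQLFluff also exposes a simple Python API alongside its CLI (the usual CI/CD integration is just a `sqlfluff lint` step). A sketch, with an arbitrary query and dialect:

```python
# Requires `pip install sqlfluff`. The query and dialect are arbitrary
# examples; a real setup would point the CLI at your dbt models directory.
import sqlfluff

messy_sql = "SELECT order_id,customer_id FROM orders WHERE status='paid'"

# Each violation records a rule code and a human-readable description.
for violation in sqlfluff.lint(messy_sql, dialect="ansi"):
    print(violation["code"], violation["description"])

# `sqlfluff.fix` applies the safe rewrites automatically:
print(sqlfluff.fix(messy_sql, dialect="ansi"))
```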
On the topic of SQL, we're also seeing increasing interest in "NewSQL" databases: SQL databases that scale and distribute globally while still providing transactional processing. Examples include CockroachDB, Google Spanner and others. They offer the sharding, partitioning and replication that made NoSQL databases popular a few years ago, without giving up strong concurrency control.
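Part of the appeal is how unexotic these databases feel in practice. CockroachDB, for instance, speaks the PostgreSQL wire protocol, so a distributed transaction looks like any other Postgres code. A minimal sketch, with placeholder connection details:

```python
# CockroachDB is PostgreSQL wire-compatible, so a standard Postgres driver
# is enough to get transactions on a distributed store. The connection
# string below is a placeholder for a hypothetical local cluster.
import psycopg2

conn = psycopg2.connect("postgresql://app@localhost:26257/bank?sslmode=disable")

with conn, conn.cursor() as cur:  # commits on success, rolls back on error
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))
# The transfer is atomic even though the rows may live on different nodes.

conn.close()
```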
The mainstreaming of machine learning
World-leading AI research from DeepMind, NVIDIA, Google, Microsoft and others is astounding. What we're seeing today is a kind of "trickle down" effect from these research pioneers, with machine learning becoming more widely used, more useful and less of a "special case" than it used to be. In this edition of the Radar we've put more ML-related blips into Trial than in the past; we think this indicates ML is "coming of age" — it's now something that every team should be looking at in order to solve problems.
To work efficiently with ML requires good platform support, whether that’s an in-house platform or one from a cloud vendor. We see this as an emerging battleground between the cloud providers. For example, Google’s Vertex AI brings together tools to manage data, features, model training and deployment, and even AutoML, to streamline getting valuable ML models into production. Data scientists and machine learning experts are in high demand across the industry, so it’s important to properly support those people with a good platform experience and provide a smooth path from prototype to production.
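As a rough illustration of that prototype-to-production path, here's what deploying an already-trained model might look like with the google-cloud-aiplatform SDK. The project, bucket and container names are placeholders, and the right options will depend on your model:

```python
# A minimal sketch of pushing a trained model to Vertex AI for serving.
# Requires `pip install google-cloud-aiplatform`; names are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-bucket/models/churn/",  # trained model artifacts
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploying creates a managed endpoint; online predictions are one call away.
endpoint = model.deploy(machine_type="n1-standard-2")
print(endpoint.predict(instances=[[0.1, 0.4, 1.0]]))
```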
Other blips on this Radar include TinyML, a technique and set of libraries for deploying ML models on resource-constrained devices such as microcontrollers, and federated machine learning, an approach that allows users to keep their data on-device but still benefit from sophisticated ML models. We also featured Stable Diffusion, an open-source AI image generator similar to OpenAI's DALL·E.
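Stable Diffusion's open weights mean you can run it yourself; one common route (by no means the only one) is Hugging Face's diffusers library. A sketch, assuming a CUDA-capable GPU:

```python
# Running Stable Diffusion locally via Hugging Face diffusers.
# Requires `pip install diffusers transformers torch` and a GPU in practice.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolour robot reading a technology radar").images[0]
image.save("robot.png")
```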
My advice here is that businesses should look at machine learning the same way DeepMind does. They have people looking at the world’s most complex algorithmic problems — protein folding, controlling fusion reactor plasma with magnetic fields — and saying “could we do that with AI?” Technologists should be doing the same thing, but for business or operational problems within their organization. Within just a few years, almost all useful software is likely to contain an ML component. We should all get ahead of that curve if we can.
Developer platforms are products, too
The advice to treat everything you build as a product isn't new, but it's difficult to do right. It's one thing to understand that treating something as a product means it needs a defined set of customers and a value proposition; it's quite another to apply that thinking effectively and carry it through to its logical conclusion.
We see a couple of failure modes around "product thinking." The first is a failure to really research the "customer" needs for the platform: too often, a centralized team builds a platform in a style they think is useful but which doesn't actually match developers' needs. This is often masked by the second antipattern, mandating the use of a particular development platform. If teams have no choice, a mandate will help with platform adoption, but it won't tell you anything about how good the platform actually is. It's better to make the platform the easiest, most effective way of doing something, and to let teams go their own way if they absolutely must.
As with other important concepts — continuous delivery, DevOps, microservices — the “platform as a product” metaphor only works when we think deeply about it and follow all the implications, rather than just the surface stuff. When something really is a product, it forces the team who owns it to do things that range from not particularly fun to downright difficult. In other words, to produce something successful, things like documentation, developer onboarding and elegant upgrade paths can no longer be viewed as helpful but marginal requirements: they become essential.
Real code on low code
The global demand for software refuses to abate, which isn't surprising since technology, and therefore software, is core to the majority of businesses. "Low code" and "no code" platforms offer a potential alternative to traditional coding, and although they aren't new, both the capability and the adoption of these platforms continue to grow; they're now being used to create serious, mission-critical applications.
Many organizations treat low code as an "and" alongside their existing development capabilities. That is to say, low code doesn't necessarily replace traditional hand-crafted software — although in some cases it might — but it allows us to build software that simply wouldn't have been written without the low-code option. There's plenty of value to be had from simple software that meets business goals. Depending on the situation, low code may be able to step in when traditional IT is oversubscribed, expensive or backlogged.
On a cautionary note, we think there's too much vendor hype in the low-code landscape. It's still software, after all, and many of the concerns we have in traditional IT — building the right thing, collaborating effectively, testing, deploying and measuring value — still apply when using a low-code platform. Instead of addressing those concerns ourselves, however, we need to rely on whatever the platform offers in terms of tooling and process. We think effective use of low code is a collaboration between IT and the business, and that low code should be considered a tool in the toolbox rather than a silver bullet.
Mobile should still be modular
Many companies release several different mobile apps, each with separate functionality. Right now on my phone I have three different apps from my cell phone provider: a home Wi-Fi configuration app, a "Wi-Fi finder" app for when I'm out and about, and yet another app to look at my usage and manage features on my phone line. As a customer, I have to remember which app to use each time, which is annoying; from the phone company's perspective, there are several apps to keep track of, each of which risks delivering an inconsistent user experience and reflecting poorly on the brand.
In response, many companies are bundling their apps together into a single, larger app (and sometimes these larger apps become a "Super App", almost a platform within an app). A single app is easier for customers to use but introduces a challenge: an app ships as one big binary through a monolithic app store release process, so multiple chunks of functionality have to be coordinated into each release. This often leads to several development teams tripping over each other, delaying each other's releases and potentially even introducing bugs in unrelated areas of the application.
Our advice is to use an architectural approach that allows modularity and separation between the modules inside a larger application. This might not seem like rocket science, but we regularly see organizations running into trouble with these kinds of larger apps. If you're seeing teams blocked on each other, or a generally high cost of coordination, consider re-evaluating whether you're applying enough modularity to your app, perhaps along the lines of the sketch below.
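Here's a language-agnostic sketch of that modularity (written in Python for brevity; a real app would use Kotlin or Swift, perhaps with dynamic feature modules). The key idea is that the app shell depends only on a narrow interface that each feature team owns:

```python
# Illustrative only: each feature team owns a module behind a narrow
# interface; the app shell knows the interface, never module internals,
# so teams can change and ship their features independently.
from typing import Protocol


class Router:
    """The app shell's navigation surface: maps paths to feature modules."""
    def __init__(self) -> None:
        self.routes: dict[str, "FeatureModule"] = {}

    def add(self, path: str, module: "FeatureModule") -> None:
        self.routes[path] = module


class FeatureModule(Protocol):
    name: str
    def register_routes(self, router: Router) -> None: ...


class WifiFinder:
    """One team's feature; its internals are invisible to the shell."""
    name = "wifi-finder"

    def register_routes(self, router: Router) -> None:
        router.add("/wifi", self)


# The shell composes modules at startup; adding a feature is one line here,
# and no team needs to touch another team's code.
router = Router()
for module in (WifiFinder(),):
    module.register_routes(router)

print(sorted(router.routes))
```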
Well, that’s it for this round-up of industry trends. I always enjoy working on the Radar and I hope you enjoy reading it. Many thanks to Ajay Chankramath, Camilla Crispim, Chris Ford, Marisa Hoenig, Rebecca Parsons, Vanya Seth and Scott Shaw for their input and contributions.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.