Macro trends in the tech industry
Published: November 29, 2017
Twice a year we create the Thoughtworks Technology Radar, an opinionated look at what’s happening in the enterprise tech world. It’s a detailed look at tools, techniques, languages and platforms and we generally call out over one hundred individual ‘blips.’ Creating the Radar involves a significant chunk of our senior techies from around the globe, and as we discuss individual blips we also talk about bigger trends. This article is a consolidation of those “macro trends” that we see in the tech industry today.
Blockchain moves beyond hype
As of this writing one BTC is worth over US$10,000, a ten-fold increase since the beginning of the year, and Elon Musk is denying he’s Satoshi Nakamoto, the mysterious inventor of Bitcoin. The hype fuels a chaotic market around cryptocurrencies, as well as speculation on celebrity-driven ICOs and fears of a “massive bubble” in the value of the cryptocurrency. But there’s some good technology underlying this hype-ridden roller coaster.

Many of our clients are finding ways to leverage blockchain for distributed ledgers and smart contracts. Several Radar entries show maturity in this use of blockchain-related technologies, with increasingly interesting ways to implement smart contracts using a variety of techniques and programming languages. Blockchain solves two major problems. First, it allows us to establish distributed trust without relying on a “trusted by everyone” intermediary, such as a bank or a stock exchange; and second, it allows us to create a shared, unalterable, trusted ledger — a record of facts. Today, we’re seeing organizations build on these two fundamental concepts. In particular, we think Ethereum smart contracts and the Corda distributed ledger are worth looking into.
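To make the “shared, unalterable ledger” idea concrete, here is a toy hash-chained ledger in Python. It is an illustrative sketch only: real blockchains layer consensus, digital signatures and peer-to-peer distribution on top of this basic tamper-evidence property, and all the names and transactions here are made up for the example.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, facts):
    """Append a block whose hash covers the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "facts": facts}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain):
    """Recompute every hash; editing any earlier block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "facts": block["facts"]}
        if block["prev"] != prev or block["hash"] != block_hash(body):
            return False
        prev = block["hash"]
    return True

ledger = []
append_block(ledger, {"from": "alice", "to": "bob", "amount": 10})
append_block(ledger, {"from": "bob", "to": "carol", "amount": 4})
assert verify(ledger)

ledger[0]["facts"]["amount"] = 1000  # tamper with history...
assert not verify(ledger)            # ...and verification now fails
```

Because each block’s hash includes the previous block’s hash, rewriting any historical fact invalidates every block after it, which is what makes the ledger trustworthy as a record of facts.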
On-prem on-hold?
When talking about infrastructure and deployment, we’ve shifted our default for just about every customer that we’re talking to. The first question is “is there a suitable service I can consume?” before “is there something I can buy and set up using a cloud provider?”. Both get asked before an organization will consider provisioning servers, installing software, then patching and maintaining them. This decision process can be summarized as “on-prem last.” It used to be that one needed to make a case for cloud; now (where appropriate) one needs to make the case for on-prem. The discussion about cloud hosting has become much easier over the past year or so.

The Radar backs up this trend — the majority of tools, techniques and platforms featured in this edition are for, on, or in support of, cloud. We really are seeing “cloud by default” in many organizations. We’re calling out “on-premise” here, but the important thing isn’t really where the servers are located. It’s the amount of work required to effectively get a service or capability up and running and to maintain it over the long term.
The “long tail” of virtualization
The first time we ran VMware virtual machines back in 1999, we couldn’t have anticipated how virtualization would revolutionize all aspects of software. Virtual machines are used in every piece of the chain from developer workstations through to Google-sized data centers, and are the “unit of scaling” for many systems (unless you’re Google, in which case the data center itself is the unit of scaling!). Docker, Kubernetes, and all of today’s exciting cloud technologies are made possible through virtualization.

Virtualization led to the cloud, and we think there’s a lot of value in the NIST definition of cloud. Of NIST’s five “essential characteristics,” we’d argue that two — on-demand self-service and elasticity — have been absolutely critical to the success of cloud. They are also the two to look for when choosing a cloud, and where many “private cloud” offerings fail.
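In practice, on-demand self-service means provisioning is a single API call rather than a procurement ticket. Here is a hedged sketch using AWS’s boto3 SDK; the image ID is a placeholder and you would need credentials configured for it to actually run. Every major cloud offers an equivalent.

```python
import boto3  # AWS SDK for Python; other clouds have equivalents

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provisioning is one API call, not a ticket and a wait on a human.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

This is the test many “private cloud” offerings fail: if provisioning still requires a human in the loop, it is not self-service, however virtualized the underlying servers are.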
The myth of feature parity
An unfortunate trend we’re seeing in the industry right now is using “feature parity” as a goal when doing a cloud migration, legacy system upgrade, or product redevelopment. It’s rarely a good idea to take a 10- or 15-year-old system and simply reimplement it — bugs and all — using newer technology. An oft-cited excuse is that “we don’t want to confuse the business,” or a concern about changing processes or calculations, but the result is often a long, slow, big-bang delivery with lots of risk. Stakeholders are often left disappointed with a project that was late, hugely over budget, and didn’t deliver anything new for the business.

Instead, we think IT leaders (and the business) should be more willing to question whether logic written a decade ago represents how business should be done today, and to give their users more credit for being able to absorb a new (and overall more capable) system. Organizations should carefully examine the functionality they really need instead of recreating a complete feature set on a new platform. You can find more in-depth discussion in this article on how we rewrote Mingle, our agile project management tool, for the cloud.
China rising in the world of open-source
We’re seeing a big jump in the number, and quality, of open-source projects coming out of China. Big names such as Baidu and Alibaba are releasing their code as open-source, and the rest of the world is starting to take notice. Over the past couple of years, a mental shift has occurred in how Chinese companies perceive source code. Previously, they were concerned about protecting their intellectual property and didn’t want to participate in open-source. But now they see the impact of significant projects such as Docker, Kubernetes and OpenStack, and that there is “another way” — a company can build an ecosystem instead of trying to achieve lock-in. As long as they keep influence on the open-source community, they can still control their IP, but get the benefits of open-source.

Another factor is that China is a significantly different market from the Global North, with a different culture and perspective. This gives rise to different expectations and requirements, so it’s not always right for a Chinese company to simply follow in the footsteps of Western companies. China is such a huge market that Chinese companies are creating open-source software and sharing it with each other, building Chinese alternatives and an ecosystem that can respond to uniquely Chinese problems.
On this edition of the Radar, we highlight Atlas and Beehive, two projects from Alibaba that allow better modularization within an application and can help distributed or remote teams collaborate more effectively. They let you dynamically assemble physically isolated modules into a single coherent application, and are clearly built with the Chinese software market in mind.
It’s important to remember that Chinese open-source is written for China first, and can be quite successful without ever moving beyond China. Documentation will be in Chinese, and if a project gets enough traction, translations may be created. There’s some very high-quality software coming out of China and it could be very useful, but those of us outside China need to remember that we’re not the primary audience.
Kubernetes rules the roost for container management
A year ago, we at Thoughtworks were being asked “which container management platform do you prefer, Kubernetes or Mesos?”. Today, that question is being skipped. Kubernetes is the default and the de facto standard. But why? We think it’s a confluence of things.

The containerization trend, and Docker in particular, has created an ecosystem where all of our tools now work with (and often require) containers. In some ways, containers are the new POSIX, the new universal interface. The IT industry has struggled for years to standardize software components, and it seems like containers might be our best answer so far (although, because you can stick anything you want into a container, there’s still no guarantee components will play nicely together). Other important trends — microservices, evolutionary architecture, cloud-by-default — work extremely well with containers, so there’s a natural symbiosis here too.
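As a small illustration of containers as a universal interface, the Docker SDK for Python can run anything packaged as an image with one call, regardless of what language or runtime is inside. A minimal sketch, assuming the `docker` package is installed and a local Docker daemon is running:

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()  # connects to the local Docker daemon

# The same call runs anything packaged as an image; the language and
# runtime inside are hidden behind the container interface.
output = client.containers.run("alpine:3.6", ["echo", "hello from a container"])
print(output.decode())
```

Swap the image for a Java service, a Python batch job or a Go binary and the calling code does not change, which is exactly the “new POSIX” point.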
A few years ago, major players in the industry were talking about GIFEE — Google’s Infrastructure For Everyone Else. I haven’t heard anyone say “GIFEE” for a while, but Kubernetes basically is Google-style infrastructure that everyone can use. Google pushed hard and invested a ton of resources with the goal of getting people onto the Google cloud offering. Over time, Kubernetes became the default container platform that we deal with across vendors and cloud providers.
In addition, Kubernetes has become more accessible to run at scale. The learning curve for running resilient, production clusters has become less steep, through improvements to the core Kubernetes software, as well as better tooling and a highly active ecosystem. All of the major cloud providers now offer Kubernetes-based hosting, so there’s a very low barrier to entry.
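For a sense of how low that barrier is, the official Kubernetes Python client speaks the same API to any conformant cluster, whether it is minikube on a laptop or a cloud provider’s hosted offering. A minimal sketch, assuming a kubeconfig is already set up locally:

```python
from kubernetes import client, config  # official Kubernetes Python client

# The same few lines work against a local minikube or a cloud
# provider's hosted cluster, via the local kubeconfig.
config.load_kube_config()
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name)
```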
Data streaming is standard
On this edition of the Radar we discussed a whole bunch of Kafka-related items: Kafka, Kafka Streams, Kafka as the source of truth, Kafka as a lightweight ESB. But where did all this emphasis on streaming come from?

The world expects real-time analytics. This is a fact that we need to accommodate as we design systems. We like the benefits gained from an event-based streaming architecture — loose coupling, autonomous components, high performance and scalability — but the need for streaming has been driven by the analytics requirement. Without streaming you simply cannot meet this need.
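As a sketch of what this looks like in practice, here is a minimal producer and consumer using the kafka-python client. The broker address and topic name are placeholders for the example, and a local Kafka broker is assumed to be running.

```python
import json
from kafka import KafkaProducer, KafkaConsumer  # kafka-python client

# Producers append immutable events to a topic...
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode(),
)
producer.send("orders", {"order_id": 42, "status": "placed"})
producer.flush()

# ...and any number of consumers read the stream independently,
# which is what makes near-real-time analytics feasible.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode()),
)
for message in consumer:
    print(message.value)
    break
```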
Associated with the rise in data streaming is a maturity in event-driven architecture. These systems are now commonplace and well understood. Some new techniques are on the rise, such as using streams as persistent storage of enterprise facts/state. We’re not 100% sure all of these techniques are a good idea (CQRS has bitten more than one unsuspecting implementor) but it’s clear that streaming is here to stay.
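The “streams as persistent storage” technique is easiest to see in miniature: if the event log is the source of truth, current state is just a fold over the events and can be rebuilt from scratch at any time. A toy sketch, with made-up event types:

```python
from functools import reduce

# The event log is the source of truth; current state is a left fold
# over the events and can be recomputed whenever needed.
events = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
    {"type": "deposited", "amount": 25},
]

def apply(balance, event):
    if event["type"] == "deposited":
        return balance + event["amount"]
    if event["type"] == "withdrawn":
        return balance - event["amount"]
    return balance  # ignore unknown event types

print(reduce(apply, events, 0))  # 95: replaying the stream yields state
```

This is also where the CQRS caution above comes from: once reads are served from derived state and writes go to the log, you have taken on eventual consistency, and that trade-off has surprised more than one team.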
I hope you’ve enjoyed this roundup of trends in the tech industry. Comment below or reach out on Twitter if you’d like to join the discussion.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.