If you're building and operating a scaled microservices architecture and have embraced Kubernetes, adopting a service mesh to manage all cross-cutting aspects of running the architecture is a default position. Among the various implementations of service mesh, Istio has gained majority adoption. It has a rich feature set, including service discovery, traffic management, service-to-service and origin-to-service security, observability (including telemetry and distributed tracing), rolling releases and resiliency. Its user experience has improved in recent releases, thanks to its ease of installation and its control plane architecture. Istio has lowered the bar for implementing large-scale microservices with operational quality for many of our clients, although we should note that operating your own Istio and Kubernetes instances requires adequate knowledge and internal resources and is not for the fainthearted.
Istio is becoming the de facto infrastructure to operationalize a microservices ecosystem. Its out-of-the-box implementation of cross-cutting concerns — such as service discovery, service-to-service and origin-to-service security, observability (including telemetry and distributed tracing), rolling releases and resiliency — lets us bootstrap our microservices implementations very quickly. It's the main implementation of the service mesh technique we've been using. We've been enjoying its monthly releases, its continuous improvements and its seamless upgrades. We use Istio to bootstrap our projects, starting with observability (tracing and telemetry) and service-to-service security. We're closely watching its improvements to service-to-service authentication both inside and outside the mesh. We'd also like to see Istio establish best practices for configuration files to strike a balance between giving autonomy to service developers and control to the service mesh operators.
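As an illustration of that bootstrap step, the sketch below shows how mesh-wide mutual TLS might be switched on by creating Istio's PeerAuthentication resource through the Kubernetes API. It's a minimal sketch, assuming a cluster that already has Istio installed, a local kubeconfig and the official kubernetes Python client; the istio-system root namespace reflects Istio's default installation and may differ in your setup.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes kubectl-style access to the cluster).
config.load_kube_config()

# Mesh-wide policy: a PeerAuthentication named "default" in Istio's root namespace
# (istio-system in a default installation) requires mTLS for all sidecar-injected workloads.
peer_authentication = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "istio-system"},
    "spec": {"mtls": {"mode": "STRICT"}},
}

# Istio policies are Kubernetes custom resources, so the generic
# CustomObjectsApi is enough to create them.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="istio-system",
    plural="peerauthentications",
    body=peer_authentication,
)
```

The same pattern — version-controlled resource definitions applied through the Kubernetes API — is how the rest of the mesh configuration (telemetry, traffic policies, authorization) tends to be managed.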
When building and operating a microservices ecosystem, one of the early questions to answer is how to implement cross-cutting concerns such as service discovery, service-to-service and origin-to-service security, observability (including telemetry and distributed tracing), rolling releases and resiliency. Over the last couple of years, our default answer to this question has been to use a service mesh technique. A service mesh offers the implementation of these cross-cutting capabilities as an infrastructure layer that is configured as code. The policy configurations can be consistently applied to the whole ecosystem of microservices, enforced both on traffic entering and leaving the mesh (via the mesh proxy as a gateway) and on the traffic at each service (via the same mesh proxy as a sidecar container). While we're keeping a close eye on the progress of different open source service mesh projects such as Linkerd, we've successfully used Istio in production with a surprisingly easy-to-configure operating model.
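To make the sidecar part of that description concrete, the sketch below labels a namespace for automatic sidecar injection, so every pod deployed there gets the mesh proxy alongside it and mesh-wide policies apply to its traffic without any change to the service itself. It's a minimal sketch, assuming the official kubernetes Python client and Istio's default istio-injection label; the payments namespace is a hypothetical name used only for illustration.

```python
from kubernetes import client, config

# Assumes kubectl-style access to a cluster that already runs Istio.
config.load_kube_config()

# Istio's mutating webhook watches for this label; once it is set, every new pod
# in the namespace is started with the mesh proxy as a sidecar container, so
# policies such as mTLS, telemetry collection and traffic rules apply to that
# service's traffic automatically.
client.CoreV1Api().patch_namespace(
    name="payments",  # hypothetical namespace; substitute your own
    body={"metadata": {"labels": {"istio-injection": "enabled"}}},
)
```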