The shift in mindset needed for Kubernetes adoption (Part 2)
Published: March 08, 2021
In part 1 of this article, we discussed how organizations can employ the 4C framework to better adopt Kubernetes, the new-age infrastructure. Now let's look at the shift in mindset that's needed from the development team's perspective.
The development team’s lens
Twelve-Factor App
The development team usually faces challenges when running applications on Kubernetes. Our advice is to follow the Twelve-Factor App principles typically used when building cloud native applications. Two of those factors, logs and disposability, deserve particular attention.
The logs aspect
Consider a scenario where an application logs to the file system rather than to the standard output and error streams. If we packaged this application without modification and ran it as a container workload in Kubernetes, the container could be terminated when disk pressure on the node gets too high. Instead, each running process should write its event stream, unbuffered, to standard output (stdout), where it can be collected and shipped to a central logging system.
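As a minimal sketch of the recommended approach, the Pod below runs a hypothetical process (names and the busybox command are illustrative) that writes each event straight to stdout, where the container runtime captures it for collection:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stdout-logger            # hypothetical name for illustration
spec:
  containers:
  - name: app
    image: busybox:1.36
    # The process writes its event stream, unbuffered, to stdout. The container
    # runtime captures it, so `kubectl logs stdout-logger` shows it, and a node
    # agent (such as Fluentd or Fluent Bit) can ship it to central logging.
    command: ["sh", "-c", "while true; do echo \"$(date) event served\"; sleep 5; done"]
```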
Disposability aspect
While containerizing an application, it's important to consider fast startup and graceful shutdown. In Kubernetes, a started application isn't necessarily ready for traffic. Likewise, on shutdown, applications should respect the SIGTERM signal that Kubernetes sends and close out in-flight requests before exiting, thus avoiding errors.
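Here's a hedged sketch of the Kubernetes side of graceful shutdown, assuming a hypothetical image whose process handles SIGTERM itself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-app             # hypothetical name
spec:
  # On delete or evict, Kubernetes sends SIGTERM, then SIGKILL once this grace
  # period expires; the app must finish in-flight requests in between.
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: example/app:1.0       # hypothetical image; the process must handle SIGTERM
    lifecycle:
      preStop:
        exec:
          # Brief pause so Service endpoints stop routing before shutdown begins.
          command: ["sh", "-c", "sleep 5"]
```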
We'd suggest developers begin by unlearning some conventional practices for deploying applications to VMs, and adopt the twelve-factor methodology for building cloud native applications to make the transition to Kubernetes smooth.
The Infrastructure as Code paradigm
Developers should embrace an automated, sustainable way of building and maintaining infrastructure by following established software engineering practices. They should treat all Kubernetes (K8s) infrastructure setup as code: scripts and configuration files that are checked into version control. All infrastructure code should also be idempotent in execution, so that applying it repeatedly produces the same result. To learn more about the paradigm, we recommend reading Thoughtworks' Cloud Practice Lead Kief Morris's book, 'Infrastructure as Code'.
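For instance, a declarative manifest applied with kubectl apply is naturally idempotent. A minimal sketch, assuming a hypothetical namespace and label:

```yaml
# namespaces.yaml - kept in version control alongside application code.
# `kubectl apply -f namespaces.yaml` is idempotent: running it once or many
# times converges the cluster to the same declared state.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                   # hypothetical namespace
  labels:
    owner: platform-team         # hypothetical label
```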
Anti-patterns to avoid
Anti-patterns are a great way to learn from industry experience. Here's a quick list of those we've observed, many of which are specific to the mindshift needed when moving from the world of VMs to Pods in K8s.
Missing health check probes
Not using readiness, liveness and startup probes for effective health checks can lead to unexpected error rates when servicing requests. This usually happens when K8s forwards requests to Pods that aren't ready to take them. For instance, with Java Virtual Machine (JVM) based applications, a request sent to the process before its significant warm-up completes will result in an error.
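As an illustration, here's a minimal sketch of the three probes on a hypothetical JVM service (the image name and the /healthz and /ready endpoints are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jvm-app                  # hypothetical name
spec:
  containers:
  - name: app
    image: example/jvm-app:1.0   # hypothetical image and endpoints
    ports:
    - containerPort: 8080
    startupProbe:                # holds the other probes off during slow JVM warm-up
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30       # allows up to 30 x 10s = 300s to start
      periodSeconds: 10
    readinessProbe:              # gates traffic: a failing Pod is removed from Service endpoints
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
    livenessProbe:               # restarts the container if the process wedges
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
```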
Unbounded resource limits
Deploying Pods without resource requests and limits (for memory and CPU) can affect other applications in the same cluster, sometimes even bringing down nodes. This is known as the 'noisy neighbour' problem in the K8s world. To avoid it, we suggest setting limits at the Pod level and defining resource quotas, with sensible defaults, at the namespace level.
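A sketch of both levels, with hypothetical names and sizing values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bounded-app              # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.0       # hypothetical image
    resources:
      requests:                  # what the scheduler reserves on a node
        cpu: 250m
        memory: 256Mi
      limits:                    # hard caps enforced at runtime
        cpu: 500m
        memory: 512Mi
---
apiVersion: v1
kind: LimitRange                 # namespace-level defaults for containers that omit resources
metadata:
  name: default-resources
  namespace: team-a              # hypothetical namespace
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 250m
      memory: 256Mi
    default:
      cpu: 500m
      memory: 512Mi
```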
Environment-specific Docker images
Building different Docker images for different environments, rather than leveraging ConfigMaps and Secrets for environment-specific configuration, leads to untested images (code) reaching production, which can result in production downtime.
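A minimal sketch of the recommended approach, with hypothetical names and values: only the ConfigMap varies per environment while the image stays the same.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # one ConfigMap per environment
data:
  DATABASE_HOST: db.staging.internal   # hypothetical value
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:1.4.2     # the same tested image tag is promoted through every environment
    envFrom:
    - configMapRef:
        name: app-config         # only this reference's contents change between environments
```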
One kubeconfig for all
Using a single admin kubeconfig for all team members is a huge security risk. We recommend using service accounts and role-based access control (RBAC) in a K8s environment.
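As a sketch, assuming a hypothetical team-a namespace, a service account could be limited to managing Deployments in its own namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer                 # hypothetical account name
  namespace: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                       # namespace-scoped permissions only
metadata:
  name: deploy-only
  namespace: team-a
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding                # grants the Role to the service account
metadata:
  name: deployer-binding
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: team-a
roleRef:
  kind: Role
  name: deploy-only
  apiGroup: rbac.authorization.k8s.io
```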
Abusing the sidecar
Developers coming from the VM world often misuse the sidecar. In the VM world, it's very common to run multiple applications, and sometimes even databases, on the same server. While Kubernetes technically allows running multiple containers in a single Pod, it's not recommended, mainly because it violates the single-responsibility principle of Pods. Doing so is acceptable only when a container is tightly coupled with the main application and the use case demands it. This anti-pattern stems from the assumption that Pods are similar to VMs.
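For contrast, here's a minimal sketch of a legitimate sidecar: a log shipper tightly coupled to the main container's log files (the names and paths are illustrative, and the shipper's configuration is omitted for brevity):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper     # hypothetical name
spec:
  volumes:
  - name: app-logs
    emptyDir: {}                 # shared scratch volume, the coupling point
  containers:
  - name: app                    # the Pod's single responsibility: the application
    image: example/app:1.0       # hypothetical image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-shipper            # tightly coupled helper, not a second application
    image: fluent/fluent-bit:2.2
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
```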
Single cluster for all environments
Mixing production and non-production workloads in the same cluster leads to namespaces being misused as a logical division between environments. In such a setup, when the cluster goes down it takes all 'environments' with it, and we lose the ability to run experiments safely in the lower environments.
Not using a package manager
Managing raw K8s YAML by hand might seem fine initially, when there are only a few resources, but it becomes cumbersome as complexity increases. Instead, use a tool like Helm: it's not only a package manager for K8s but also offers features that allow complete release management through the Helm CLI.
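As a minimal sketch, a chart starts with a Chart.yaml like the one below (the chart name, versions and the image.tag value key are hypothetical); the Helm CLI then drives install, upgrade and rollback:

```yaml
# Chart.yaml - minimal metadata for a hypothetical chart named my-service
apiVersion: v2
name: my-service
version: 0.1.0                   # chart version, bumped on every packaging change
appVersion: "1.4.2"              # the application version the chart deploys
# Release management then happens through the Helm CLI, for example:
#   helm install my-service ./my-service
#   helm upgrade my-service ./my-service --set image.tag=1.4.3
#   helm rollback my-service 1
```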
In summary, the mindshift required to better equip organizations and teams adopting the Kubernetes ecosystem comes down to embracing the Twelve-Factor App principles, treating all infrastructure as code, and steering clear of the anti-patterns above.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.