Introduction to Kubernetes
Get ready to embark on a journey into the exciting world of Kubernetes, the open-source system transforming the way we deploy, scale, and manage containerized applications. In this article, we will look at how Kubernetes keeps your applications running, simplifies their management, and makes them easy to discover, and why it has become the leader of the container world.
What Is Kubernetes About?
Kubernetes is an open-source system that automates deploying, scaling, and managing containerized applications. It groups the containers that make up an application into logical units, making them easier to manage and discover. It is often abbreviated "K8s", where the 8 counts the eight letters between the 'k' and the 's'. The name itself comes from the Greek word for "helmsman" or "ship's captain", so you can think of Kubernetes as a skilled captain guiding a fleet of containers.
The Importance of Kubernetes and Its Capabilities
Containers are a great way to package and run applications. In a production environment, it’s important to manage the containers running these apps and make sure they keep working smoothly. For example, if a container fails, it needs to be replaced quickly. Now, imagine a program that can handle this automatically. That’s where Kubernetes (K8s) comes in! K8s provides a strong framework to ensure distributed systems run reliably. It handles scaling, failovers, deployment patterns, and more to support your applications effectively. Here’s what K8s offers:
Service discovery and load balancing — K8s can expose a container using a DNS name or its IP address. If there is a lot of container traffic, K8s balances the load to make sure network traffic is distributed evenly, keeping deployments stable.
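As a minimal sketch, a Service manifest like the one below (all names are illustrative, not from the article) gives a set of pods a stable DNS name and spreads traffic across them:

```yaml
# Illustrative Service: pods labeled app=web get a stable DNS name
# (web-svc.<namespace>.svc.cluster.local) and traffic is balanced
# across all matching pods.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # route to pods carrying this label
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # port the containers actually listen on
```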
Storage orchestration — K8s makes it easy to attach and use your choice of storage, whether it’s local, cloud, or another type. It handles mounting automatically.
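For example, a pod can request storage through a PersistentVolumeClaim, and K8s takes care of binding and mounting it. A sketch with illustrative names:

```yaml
# Illustrative PersistentVolumeClaim plus a pod that mounts it; K8s
# binds the claim to matching storage and mounts it automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/data   # claim appears here inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```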
Automated rollouts and rollbacks — With K8s, you can define how your containers should be set up, and it will gradually move from the current state to the desired one. For example, K8s can create new containers, remove old ones, and smoothly transfer resources to the new container.
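This desired-state approach is easiest to see in a Deployment. The sketch below (names and image tag are illustrative) declares three replicas and a rolling-update strategy; changing the image triggers a gradual rollout:

```yaml
# Illustrative Deployment: you declare the desired state, and K8s
# rolls from the current state to it, replacing old pods gradually.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above replicas
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # changing this tag triggers a new rollout
```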
Automatic bin packing — K8s uses the nodes in your cluster to run containers. You specify how much CPU and memory each container needs, and K8s places the containers on the nodes in a way that makes the best use of your resources.
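You declare these needs per container with `requests` (what the scheduler uses for bin packing) and `limits` (a hard cap). A sketch with illustrative values:

```yaml
# Illustrative resource spec: the scheduler places this pod on a node
# with at least the requested CPU/memory free; limits cap actual usage.
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "250m"      # a quarter of one CPU core
          memory: "64Mi"
        limits:
          cpu: "500m"
          memory: "128Mi"
```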
Self-healing — K8s restarts containers if they fail, replaces them when needed, shuts down unresponsive containers, and makes sure containers aren’t available to clients until they’re ready.
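The "not available until ready" behaviour comes from probes. In the sketch below, the health-check paths are assumptions, not part of any real image:

```yaml
# Illustrative probes: the liveness probe restarts a hung container,
# and the readiness probe keeps the pod out of Service endpoints
# until it reports ready.
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
    - name: app
      image: nginx
      livenessProbe:
        httpGet:
          path: /healthz   # assumed health endpoint
          port: 80
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready     # assumed readiness endpoint
          port: 80
        initialDelaySeconds: 5
```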
Secret and configuration management — K8s securely stores and manages sensitive data like passwords, tokens, and SSH keys. It allows you to deploy and update secrets without needing to rebuild container images, keeping them hidden in your setup.
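A minimal sketch of this pattern (the secret name, key, and value are all placeholders): the pod consumes the secret as an environment variable, so the value never needs to be baked into the image:

```yaml
# Illustrative Secret plus a pod that reads it via an env variable.
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:
  password: example-only   # placeholder value, never commit real secrets
---
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
    - name: client
      image: nginx
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
```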
Docker Swarm vs. Kubernetes
Docker Swarm is Docker’s open-source container orchestration system, used for clustering and scheduling containers. Compared to Kubernetes (K8s), here are the key differences:
- Docker Swarm is easier to set up, but its cluster is less robust; K8s has a more complex setup but provides a more reliable cluster.
- K8s supports auto-scaling, while Docker Swarm doesn't; on the other hand, Swarm generally deploys and scales containers faster.
- Docker Swarm has no built-in graphical user interface (GUI), whereas K8s offers one through its Dashboard add-on.
- Docker Swarm automatically balances traffic between containers in a cluster, while in K8s, you have to set this up manually.
- For logging and monitoring, Docker Swarm relies on third-party tools like the ELK stack, while K8s provides built-in tooling for these purposes.
- Docker Swarm can share storage volumes with any container, while K8s can only share volumes with containers in the same pod.
- Docker Swarm supports rolling updates but has no automatic rollbacks; K8s handles both rolling updates and automatic rollbacks.
Kubernetes Components
A Kubernetes cluster consists of a control plane (traditionally called the master node) and one or more worker nodes. The control plane acts as the control centre, handling task scheduling and cluster monitoring, while the worker machines, called nodes, run the containerized applications. Every cluster has at least one worker node, and the worker nodes host the Pods, which are the units of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple machines, and clusters have multiple nodes to ensure reliability and high availability.
1- Control Plane Components
The control plane’s different parts are responsible for making important decisions about the cluster, like scheduling tasks and quickly responding to incidents — such as creating a new pod if the required number of replicas isn’t met. These control plane components can run on any machine in the cluster. However, to keep things simple, setup scripts often run all the control plane components on a single machine, making sure that user containers don’t run on that machine.
kube-apiserver
The API server is a key part of the Kubernetes (K8s) control plane and acts as the gateway to the K8s API. It serves as the front end for the control plane. The main version of the K8s API server is called kube-apiserver. Kube-apiserver is designed to be horizontally scalable, which means you can increase its capacity by running multiple instances. This allows you to distribute traffic among several instances of kube-apiserver.
etcd
etcd is a consistent, fault-tolerant key-value store used as the main data store for all cluster information in Kubernetes (K8s). If your K8s cluster uses etcd as its data store, it's important to have a solid backup plan to protect that data.
kube-scheduler
The kube scheduler, a key part of the Kubernetes (K8s) control plane, continuously checks for newly created pods that don’t have an assigned node and then assigns a suitable node for them to run on. The scheduling process considers different factors, such as resource needs, hardware and software limits, affinity and anti-affinity rules, data location, interference between workloads, and deadlines, to make the best decisions.
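Several of these factors are expressed directly in the pod spec. In the sketch below (the `disktype` node label is an assumption for illustration), resource requests feed the scheduler's bin packing and a nodeAffinity rule restricts which nodes are candidates:

```yaml
# Illustrative scheduling constraints: requests tell the scheduler
# how much capacity the pod needs; nodeAffinity limits it to nodes
# carrying the label disktype=ssd (an assumed label).
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "100m"
          memory: "32Mi"
```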
kube-controller-manager
The control plane includes an important component, the kube-controller-manager, that runs several controller processes. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process. These controllers include:
- Node controller: Detects and responds to node failures.
- Replication controller: Makes sure the right number of pods is running for each replication controller in the system.
- Endpoints controller: Connects services and pods by updating the endpoints object.
- Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
cloud-controller-manager
The cloud controller manager is an important part of the Kubernetes (K8s) control plane, handling cloud-specific control tasks. It helps you connect your cluster with your cloud provider’s API while keeping the cloud-related components separate from those managing the cluster itself. The cloud controller manager only runs controllers for your specific cloud provider. If K8s is running on your own hardware or on your personal computer for learning, there won’t be a cloud controller manager. Like the kube controller manager, the cloud-controller-manager combines several independent control loops into a single process. To improve performance and ensure reliability, you can run multiple copies of the manager for better fault tolerance. The following controllers can have cloud provider dependencies:
- Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding.
- Route controller: For setting up routes in the underlying cloud infrastructure.
- Service controller: For creating, updating, and deleting cloud provider load balancers.
2- Node Components
Node components run on every node, maintaining the active pods and providing the K8s runtime environment.
kubelet
The kubelet works as an agent on each node in the cluster, making sure that containers in pods run correctly. It receives a set of PodSpecs through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet only manages containers created by Kubernetes; it doesn't handle any containers outside of its control.
kube-proxy
Kube-proxy is a network proxy that runs on every node in your Kubernetes (K8s) cluster. It helps implement K8s Services by managing network rules on each node, allowing smooth network communication to your pods from both inside and outside the cluster. When possible, kube-proxy uses the operating system’s packet filtering layer; otherwise, it forwards traffic on its own to keep networking efficient.
Container runtime
The container runtime is the software that runs containers. Kubernetes (K8s) supports different container runtimes, such as Docker, containerd, CRI-O (which connects OCI-compatible runtimes with Kubelet), and any container runtime that follows the K8s CRI (Container Runtime Interface).
Conclusion
In this article, we introduced Kubernetes (K8s), highlighting its role as an open-source tool for automating app deployment, scaling, and management. We covered K8s' important features like service discovery, automated rollouts, and self-healing, and compared Docker Swarm with K8s. We also explored the key parts of a K8s cluster, including the control plane and nodes. If you want to stay up to date on K8s, I recommend following Daniele Polencic and subscribing to the Learn Kubernetes weekly newsletter.
As a DevSecOps enthusiast, I hope you enjoyed this article. In this column, called "Mindful Monday Musings", I will share articles on Dev(Sec)Ops and Cloud here every Monday. You can support M3 (aka Mindful Monday Musings) by following me and sharing your opinions. Your contributions, criticisms, and comments are always welcome.