Overview
In our previous blog post, we discussed the differences between monolithic and microservices architecture, as well as the various deployment models used in software development. We covered traditional deployment using physical servers, modern deployment using virtual machines (VMs), and the latest trend of deploying applications in containers. We also highlighted the benefits of containerization, such as improved scalability, flexibility, and easier management.
What is Kubernetes?
It's a container-centric management environment. Kubernetes automates the deployment, scaling, load balancing, logging, monitoring, and other management tasks of containerized applications.
Kubernetes supports declarative configuration, which lets you administer your infrastructure by describing the desired state you want to achieve, instead of issuing a series of commands to reach that state.
Kubernetes also allows imperative configuration, in which you issue commands to change the system's state directly. But administering Kubernetes at scale imperatively would be a big missed opportunity: you give up a version-controlled desired state that the cluster can continuously reconcile against.
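As a minimal sketch of the declarative style (the name, labels, and image tag here are illustrative), the manifest below describes the desired state of a small Nginx deployment; Kubernetes then works to make the cluster match it.

```yaml
# A declarative description of desired state: three replicas of an Nginx Pod.
# Applied with: kubectl apply -f web-deployment.yaml
# A rough imperative equivalent would be:
#   kubectl create deployment web --image=nginx:1.25 --replicas=3
# but the manifest, kept in version control, remains the single source of truth.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```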
Kubernetes supports different workload types. It supports stateless applications, such as Nginx or Apache web servers, and stateful applications where user and session data can be stored persistently. It also supports batch jobs and daemon tasks.
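For example, a batch workload is expressed as a Job object that runs its Pods to completion rather than serving traffic. The manifest below is an illustrative sketch; the name, image, and command are placeholders.

```yaml
# A minimal batch workload: a Job that runs once and then completes.
apiVersion: batch/v1
kind: Job
metadata:
  name: report-generator
spec:
  completions: 1
  backoffLimit: 3          # retry a failed Pod up to three times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: report
          image: busybox:1.36
          command: ["sh", "-c", "echo generating report && sleep 5"]
```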
Because it's open source, Kubernetes also supports workload portability across on-premises environments and multiple cloud providers, such as AWS EKS, Google Cloud GKE, and others. This means Kubernetes can be deployed almost anywhere, and you can move workloads between environments without vendor lock-in.
Understanding the Architecture of Kubernetes
A Kubernetes cluster consists of a Control Plane and worker machines called Nodes. The control plane and nodes make up the Kubernetes cluster orchestration system.
The Control Plane
kube-apiserver: At the heart of the Kubernetes control plane is the API server, which acts as the frontend server for handling API requests. The API server exposes the Kubernetes API and serves as the entry point for all administrative tasks in Kubernetes.
The most widely used implementation of the Kubernetes API server is kube-apiserver, which is designed to scale horizontally by deploying multiple instances. This means that you can run multiple instances of kube-apiserver and distribute traffic between them to improve scalability and availability.
etcd: The etcd database is a crucial component of the Kubernetes cluster, responsible for storing and managing the state of the system. This includes all the cluster configuration data, as well as more dynamic information such as node membership and Pod placement.
Although you never interact directly with etcd, it plays a vital role in ensuring the stability and reliability of the Kubernetes cluster. The kube-apiserver component interacts with etcd on behalf of the rest of the system, providing a secure and consistent means of accessing and modifying the state of the cluster.
kube-scheduler: kube-scheduler plays a critical role in the Kubernetes cluster by scheduling Pods onto available nodes. It does this by evaluating the resource requirements of each Pod and selecting the most suitable node based on constraints such as hardware, software, and policy.
kube-scheduler doesn't launch Pods on nodes itself. Whenever it discovers a Pod that doesn't have a node assignment, it selects a suitable node and writes that node's name into the Pod object; the kubelet on that node then starts the Pod's containers.
To make informed scheduling decisions, kube-scheduler maintains a global view of the state of all the nodes in the cluster. It also adheres to any constraints that you define, such as memory requirements or affinity specifications. You can specify affinity rules to group Pods together and preferentially run them on the same node. Similarly, anti-affinity rules keep Pods off the same node, promoting high availability and fault tolerance.
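As a sketch of the constraints kube-scheduler honors (the Pod name, label, and image are illustrative), the spec below requests CPU and memory and uses an anti-affinity rule so that Pods carrying the same label are not placed on the same node.

```yaml
# Illustrative scheduling constraints: resource requests plus an anti-affinity
# rule that keeps Pods with the label app=web off the same node.
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname
```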
kube-controller-manager: kube-controller-manager continuously monitors the state of the cluster through kube-apiserver. Whenever the current state of the cluster doesn't match the desired state, kube-controller-manager attempts to make changes to achieve the desired state. It's called the "controller manager" because many Kubernetes objects are maintained by loops of code called controllers, and these loops handle the process of remediation.
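This desired-versus-current comparison is visible in any object's spec and status fields. The snippet below is a trimmed, hypothetical view of a Deployment to illustrate the idea.

```yaml
# Trimmed, hypothetical output of: kubectl get deployment web -o yaml
# spec holds the desired state; status holds the observed state.
# When they differ, the relevant controllers create or delete Pods until they match.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # desired state
status:
  replicas: 3
  readyReplicas: 2     # observed state: one Pod is not ready yet
  updatedReplicas: 3
```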
cloud-controller-manager: A Kubernetes control plane component that embeds cloud-specific control logic. The cloud-controller-manager only runs controllers that are specific to your cloud provider.
If you manually launched a Kubernetes cluster on Amazon Web Services (AWS), the cloud-controller-manager would be responsible for running the controllers that interact with the underlying AWS infrastructure. This includes provisioning AWS resources such as load balancers and storage volumes when needed, enabling you to leverage AWS services from within your Kubernetes cluster.
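For instance, a Service of type LoadBalancer is the kind of object the cloud controllers act on. This is an illustrative sketch; the Service name and selector are assumptions, and the exact load balancer provisioned depends on the cloud provider's integration.

```yaml
# A Service of type LoadBalancer. On a cluster wired to a cloud provider such as
# AWS, the cloud-controller-manager reacts to this object by provisioning an
# external load balancer and routing its traffic to the selected Pods.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```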
Node Components
kubelet: The kubelet runs on every node and is responsible for monitoring and managing the containers within a Pod. It does this by processing a set of PodSpecs, obtained through various means, and verifying that the containers described in those PodSpecs are running and healthy.
It's worth noting that containers not created by Kubernetes fall outside the kubelet's scope; its focus is solely on the health of the containers that Kubernetes manages.
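A PodSpec can also tell the kubelet how to check container health. The sketch below (name, image, and probe values are illustrative) adds a liveness probe; if the probe fails repeatedly, the kubelet restarts the container.

```yaml
# The kubelet keeps the containers described in a PodSpec running.
# A liveness probe gives it a concrete health check to act on.
apiVersion: v1
kind: Pod
metadata:
  name: web-health
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```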
kube-proxy: kube-proxy is a network proxy that runs on each node in a Kubernetes cluster. Its primary role is to maintain network rules on nodes so that network sessions inside or outside your cluster can reach your Pods. kube-proxy implements part of the Kubernetes Service concept by forwarding traffic to the appropriate backend Pods based on the Service's endpoints.
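A plain ClusterIP Service illustrates what kube-proxy programs on each node; the Service name and selector below are assumptions carried over from the earlier examples.

```yaml
# A ClusterIP Service. kube-proxy on every node installs forwarding rules so that
# traffic to the Service's virtual IP on port 80 is distributed across the Pods
# matched by the selector (the Service's endpoints).
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```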
Container runtime: The container runtime is the software component that actually runs containers. It is responsible for pulling images and for starting, stopping, and managing containerized applications on a host. Kubernetes communicates with the runtime, such as containerd or CRI-O, through the Container Runtime Interface (CRI).
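When a node offers more than one runtime handler, Kubernetes can select between them with a RuntimeClass. The sketch below assumes, purely for illustration, that nodes have a gVisor handler installed and registered as "runsc".

```yaml
# A RuntimeClass exposes an installed runtime handler (assumed here to be
# gVisor's "runsc") to the cluster; a Pod opts in via runtimeClassName.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-web
spec:
  runtimeClassName: gvisor
  containers:
    - name: nginx
      image: nginx:1.25
```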
What's Next:
Kubernetes Objects