Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. Originally developed at Google and donated to the Cloud Native Computing Foundation (CNCF) in 2015, Kubernetes provides a framework for running and managing containerized applications across a distributed infrastructure. It is designed to be flexible, scalable, and portable, and it offers a wide range of features that let developers deploy, manage, and scale their container-based applications with ease.
Kubernetes clusters are composed of several main components, each serving a specific purpose in the platform's overall architecture. Regardless of your hosting environment, whether a fully managed Kubernetes cluster or a self-hosted one, the main components are the same, although how they are exposed to you may differ depending on your circumstances.
In this blog post, we will discuss the main components of Kubernetes and their functions.
Master Node
The master node is the control plane of the Kubernetes cluster. It is responsible for managing the overall state of the cluster and coordinating the activities of the worker nodes. The master node consists of several components, including the Kubernetes API server, etcd, the Kubernetes scheduler, and the Kubernetes controller manager.
The Kubernetes API server is the front-end interface for the Kubernetes control plane. It exposes a RESTful API that allows users and cluster components to interact with the cluster and to create and manage objects such as pods, services, and deployments.
etcd is a distributed key-value store used to store the configuration data and the state of the Kubernetes cluster. It provides a reliable and highly available storage layer for the control plane, allowing the master node to maintain a consistent view of the cluster state.
The Kubernetes scheduler is responsible for scheduling the pods on the worker nodes based on their resource usage, availability, workload requirements, and other factors. It ensures that the pods are scheduled to the most appropriate node, taking into account factors such as CPU and memory usage, network bandwidth, and storage capacity.
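One of the main inputs the scheduler considers is the resource requests declared in a pod spec. The sketch below is a hypothetical example (the pod name and image are placeholders) showing how requests and limits are expressed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web              # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:        # the scheduler uses requests to find a node with spare capacity
          cpu: "250m"
          memory: "128Mi"
        limits:          # the kubelet enforces limits at runtime
          cpu: "500m"
          memory: "256Mi"
```

A pod with no requests can land on an already-busy node, so declaring them gives the scheduler the information it needs to place workloads sensibly.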
The Kubernetes controller manager runs the various controllers that maintain the desired state of the cluster. It ensures that the cluster is running smoothly and that workloads are scaled up or down based on demand.
Worker Node
The worker node is where the application containers run. Each worker node hosts one or more pods, which are the smallest deployable units in Kubernetes. A pod can contain one or more containers that share the same network and storage resources. The worker node consists of several components, including the kubelet, the container runtime, and the kube-proxy.
The kubelet is the primary agent running on the worker node. It is responsible for communicating with the master node and ensuring that the pods are running correctly. The kubelet ensures that the containers are running, monitors their health, and restarts them if necessary.
The container runtime is responsible for running the containers. Kubernetes supports several container runtimes through the Container Runtime Interface (CRI), including containerd and CRI-O; direct Docker Engine support via the dockershim was removed in Kubernetes 1.24. The container runtime pulls the container images from a registry, creates the containers, and manages their lifecycle.
The kube-proxy is responsible for managing the network connectivity between the pods and the services in the cluster. It provides a network abstraction layer, allowing the pods to communicate with each other and with external resources.
Kubernetes Services
Kubernetes services are used to provide network connectivity and load balancing for the pods running in the cluster. Services are implemented using the kube-proxy, which provides a virtual IP address and port for the service. When a pod needs to communicate with a Kubernetes service, it sends the traffic to the service's virtual IP address and port. The kube-proxy then forwards the traffic to the appropriate pod based on the load balancing algorithm configured for the service.
Kubernetes supports several types of services, including ClusterIP, NodePort, and LoadBalancer. ClusterIP services are only accessible within the cluster, NodePort services expose the service on a specific port on each node, and LoadBalancer services expose the service externally using a load balancer provisioned by the cloud provider.
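As a minimal sketch, a Service manifest might look like the following (the service name and label values are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service       # placeholder name
spec:
  type: ClusterIP        # change to NodePort or LoadBalancer to alter exposure
  selector:
    app: my-app          # traffic is routed to pods carrying this label
  ports:
    - port: 80           # port on the service's virtual IP
      targetPort: 8080   # port the container actually listens on
```

Switching `type` is all it takes to move from cluster-internal access to node-level or cloud load-balancer exposure; the selector and ports stay the same.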
Kubernetes Controllers
Kubernetes controllers are used to manage the desired state of the cluster. Controllers are responsible for ensuring that the number of pods matches the desired state, scaling up or down as necessary based on the workload demands. Kubernetes provides several built-in controllers, including the Replication Controller, ReplicaSet, Deployment, and StatefulSet.
The Replication Controller ensures that a specific number of replicas of a pod are running in the cluster. It is an older controller that has been replaced by the ReplicaSet controller in newer versions of Kubernetes.
The ReplicaSet controller is the successor to the Replication Controller. It ensures that a specified number of replicas of a pod are running in the cluster and supports more expressive, set-based label selectors. In practice, ReplicaSets are usually managed indirectly through Deployments, which layer rolling update strategies on top.
The Deployment controller is used to manage the deployment of new versions of an application. It allows for rolling updates, canary deployments, and other advanced deployment strategies. The Deployment controller ensures that the new version of the application is rolled out gradually and that the previous version is phased out only after the new version is verified to be working correctly.
The StatefulSet controller is used to manage stateful applications that require unique network identities and persistent storage. It ensures that each pod in the set has a unique hostname and persistent storage, making it suitable for running stateful applications such as databases or messaging systems.
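To make the StatefulSet idea concrete, here is a hedged sketch of a three-replica database; the names, image, and storage size are illustrative assumptions, and a matching headless Service named `db` is assumed to exist:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service that provides stable network identities
  replicas: 3                # pods are named db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PersistentVolumeClaim is created per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The `volumeClaimTemplates` section is what distinguishes a StatefulSet from a Deployment: each replica gets its own persistent volume that follows it across rescheduling.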
Kubernetes Ingress Controller
A Kubernetes ingress controller is a component that manages external access to services running in a Kubernetes cluster. It acts as a reverse proxy, routing traffic from the outside world to the appropriate service within the cluster based on the incoming request's URL and other rules defined by the user.
Ingress controllers enable users to expose their applications to the internet, load balance traffic, and apply SSL/TLS encryption to secure the communication between the clients and the services. Popular Kubernetes ingress controllers include Nginx, Traefik, and Istio.
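A minimal Ingress resource might look like the sketch below; it assumes the NGINX ingress controller is installed in the cluster, and the hostname and backend Service name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx       # assumes the NGINX ingress controller is installed
  rules:
    - host: app.example.com     # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # placeholder Service inside the cluster
                port:
                  number: 80
```

The Ingress object only declares the routing rules; it is the ingress controller that watches these objects and programs the actual reverse proxy.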
Kubernetes Deployment
A Kubernetes deployment manifest is a declarative configuration file that specifies how Kubernetes should create and manage a deployment. The manifest defines the desired state of the deployment, including the number of replicas, the container images to use, and the configuration settings for each container. When Kubernetes receives the manifest, it uses the information contained in the file to create and manage the deployment.
The deployment manifest is a YAML file that is typically stored in a version control system such as Git, and is versioned along with the code that it deploys. This allows developers to track changes to the deployment configuration over time, and roll back to a previous version of the deployment if necessary.
The deployment manifest includes several key components:
- Metadata: The metadata section includes information such as the name of the deployment, labels that can be used to select the deployment, and annotations that provide additional information about the deployment.
- Spec: The spec section defines the desired state of the deployment. This includes the number of replicas that should be created, the container image to use, and the configuration settings for each container.
- Selector: The selector section defines the criteria used to select the pods that are part of the deployment. This is used to ensure that only the desired replicas are running, and to enable rolling updates and rollbacks.
- Template: The template section defines the pod template used to create the pods that are part of the deployment. This includes the container image to use, the configuration settings for each container, and any other resources that are required, such as volumes or secrets.
By defining the desired state of the deployment in a manifest, Kubernetes can automatically manage the deployment to ensure that the desired state is always maintained. For example, if a pod crashes, Kubernetes will automatically restart the pod to ensure that the desired number of replicas is always running.
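Putting those components together, a deployment manifest might look like the following sketch (the names, labels, and image are placeholder assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:                  # metadata: name, labels, annotations
  name: my-app
  labels:
    app: my-app
spec:                      # spec: the desired state
  replicas: 3
  selector:                # selector: which pods belong to this deployment
    matchLabels:
      app: my-app
  template:                # template: pod template used to create replicas
    metadata:
      labels:
        app: my-app        # must match the selector above
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

Note that the labels in the pod template must match the selector, otherwise Kubernetes rejects the manifest; this is the linkage that lets the controller know which pods it owns.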
Kubernetes Pod
A Kubernetes pod is the smallest and simplest unit of deployment in Kubernetes. It represents a single instance of a running process in the cluster and consists of one or more containers that share the same network namespace and storage volumes. Pods are used to deploy and manage containerized applications in Kubernetes, and they provide several benefits, including isolation, scalability, and portability.
Kubernetes pods can be managed and controlled by higher-level abstractions such as deployments, replica sets, and stateful sets, which provide additional features for managing the lifecycle and scaling of pods.
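The shared network namespace is easiest to see in a two-container pod; in this hypothetical sketch (names and images are placeholders), the sidecar reaches the main container over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
    - name: sidecar          # shares the pod's network namespace with "app"
      image: busybox:1.36
      command: ["sh", "-c", "wget -qO- http://localhost:80 && sleep 3600"]
```

Because both containers share one network identity, the sidecar can talk to nginx on `localhost:80` without any Service in between.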
Kubernetes Volumes
Kubernetes volumes provide persistent storage for the containers running in the cluster. Volumes are used to store data that needs to survive container restarts or rescheduling. Kubernetes supports several types of volumes, including emptyDir, hostPath, and persistentVolumeClaim.
The emptyDir volume is a temporary storage volume that is created when a pod is created and is deleted when the pod is terminated. It is useful for storing temporary data such as logs or temporary files.
The hostPath volume allows a pod to mount a directory from the host system into the container. It can be used to share data between the host and the container or to provide access to host operating system-specific resources such as device files or system logs.
The persistentVolumeClaim volume is used to request a specific amount of storage from the cluster's storage system. It allows pods to access persistent storage that survives pod restarts and rescheduling. The persistentVolumeClaim volume is backed by a persistent volume, a piece of storage provisioned from the cluster's storage system.
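The contrast between ephemeral and persistent storage can be sketched in one manifest; the claim name, sizes, and mount paths below are illustrative assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch   # ephemeral scratch space
        - name: data
          mountPath: /data          # durable data
  volumes:
    - name: scratch
      emptyDir: {}                  # deleted when the pod is terminated
    - name: data
      persistentVolumeClaim:        # survives pod restarts and rescheduling
        claimName: data-claim
```

Anything written under `/tmp/scratch` disappears with the pod, while `/data` is backed by the claim and outlives it.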
Conclusion
Kubernetes is a complex system composed of several components that work together to provide a powerful and flexible container orchestration platform. The main components of Kubernetes include the master node, the worker node, Kubernetes services, controllers, and volumes.
These components provide the essential functions required to manage containerized applications across a distributed infrastructure. Understanding the roles and functions of each component is essential to effectively deploy and scale containerized applications in a Kubernetes cluster.