Kubernetes
The components of Kubernetes
Pods A pod is the smallest deployable unit in Kubernetes; its closest equivalent in Docker, for example, is a container. A pod is a collection of one or more containers which share a pool of memory, storage and network resources [1]. Pods usually contain one or more application containers which depend on each other to deliver a useful purpose or service [2].
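To make the pod concept concrete, the sketch below expresses a minimal pod manifest as a Python dict mirroring the YAML a user would submit to the cluster. The pod name, container name and nginx image are illustrative choices, not taken from the text above.

```python
# A minimal pod manifest, expressed as a Python dict that mirrors the
# equivalent YAML. Names ("demo-pod", "web") and the nginx image are
# illustrative only.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo-pod"},
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "nginx:1.25",
                # Containers in the same pod share the pod's network
                # namespace, so they can reach each other on localhost.
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}

print(pod["kind"])                      # Pod
print(len(pod["spec"]["containers"]))   # 1
```

Adding a second entry to the `containers` list would give a multi-container pod whose containers share the same resource pool, as described above.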
Cluster A cluster is created when Kubernetes is deployed. A cluster is simply a set of nodes (machines) consisting of at least one worker node; these nodes run containerised applications. Clusters usually run multiple nodes [3].
Core components
Control plane The control plane manages the worker nodes and pods in a cluster. It may run across several machines, which provides availability and fault tolerance. It is also responsible for cluster-wide management tasks, such as scheduling and detecting and responding to cluster events [3].
API Server The API server exposes the Kubernetes API and acts as the front end for the control plane [3]. All components, such as the worker nodes and the rest of the control plane, communicate through it.
Scheduler The scheduler is responsible for assigning work to the worker nodes. It watches for newly created pods with no assigned node and selects a node for each one to run on, taking into account the pod's resource requirements and each node's available capacity [3][4].
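The scheduling idea above can be sketched as a two-step filter-and-score routine. This is an illustrative simplification, not the real kube-scheduler: the node names, capacities and the CPU-only scoring rule are all assumptions made for the example.

```python
# Illustrative sketch of filter-and-score scheduling (not the real
# kube-scheduler). Node names and free-CPU figures are made up.
nodes = {
    "node-a": {"cpu_free": 500},    # millicores of spare CPU
    "node-b": {"cpu_free": 2000},
    "node-c": {"cpu_free": 1200},
}

def schedule(pod_cpu_request, nodes):
    # Filtering: keep only nodes that can fit the pod's CPU request.
    feasible = {n: v for n, v in nodes.items()
                if v["cpu_free"] >= pod_cpu_request}
    if not feasible:
        return None  # the pod stays Pending until capacity appears
    # Scoring: prefer the node with the most spare CPU remaining.
    return max(feasible, key=lambda n: feasible[n]["cpu_free"])

print(schedule(1000, nodes))  # node-b
print(schedule(3000, nodes))  # None
```

The real scheduler applies many more filters (taints, affinity, memory, ports) and combines several scoring plugins, but the shape of the decision is the same.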
etcd This acts as the database of record for the state and configuration of the cluster [5]. The data is stored as key-value pairs, providing a consistent backing store for all cluster data.
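A rough picture of that key-value layout is sketched below. The "/registry/..." key prefix mimics the layout etcd actually uses for cluster objects, but the store here is just a plain Python dict and the object names are invented for illustration.

```python
# Sketch of cluster state as key-value pairs, as stored in etcd.
# The store is a plain dict; keys imitate etcd's "/registry/..."
# layout, and the object names are illustrative.
store = {
    "/registry/pods/default/web-0": '{"phase": "Running"}',
    "/registry/services/default/web": '{"clusterIP": "10.0.0.12"}',
}

# Components do not read etcd directly; they go through the API
# server, which performs lookups like this prefix scan.
pod_keys = [k for k in store if k.startswith("/registry/pods/")]
print(pod_keys)  # ['/registry/pods/default/web-0']
```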
Node worker components
Kubelet The kubelet is a program running on each worker node and is responsible for carrying out instructions from the control plane on that node's pods. It also reports the status and condition of the node's workloads back to the control plane [5].
Kube proxy The kube proxy runs on each node and maintains network rules. These rules allow communication to pods from inside or outside the cluster [3].
Container runtime This is the software responsible for running containers; Docker is one example.
Workloads
Workloads are containerised applications running on Kubernetes. The containers belonging to the same application are packaged and grouped together, then deployed and managed by Kubernetes. Nodes are responsible for running these workloads [6].
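In practice a workload is usually described by a higher-level object such as a Deployment, which wraps a pod template together with a desired replica count. The sketch below expresses one as a Python dict mirroring the YAML; the name, labels and image are illustrative assumptions.

```python
# A Deployment groups a pod template with a replica count; Kubernetes
# then keeps that many copies of the pod running. Expressed as a dict
# mirroring the YAML; names and image are illustrative.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # desired number of identical pods
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{"name": "web", "image": "nginx:1.25"}]
            },
        },
    },
}

print(deployment["spec"]["replicas"])  # 3
```

Changing `replicas` is also how the scaling described in the next section is expressed: Kubernetes converges the cluster towards whatever count is declared.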
Why use Kubernetes?
The core Kubernetes concepts have been explained; but why should Kubernetes be used?
- Management: Kubernetes makes running complex applications far simpler. Clusters are load-balanced and containers are, for the most part, automatically managed. Kubernetes reduces the need to manage applications, so developers can spend more time building them [7].
- Scalability: Kubernetes is scalable. Applications, infrastructure and their resources can be scaled up and down based on the needs of the organisation [8].
- Flexibility: Kubernetes allows flexibility in cloud environments, allowing applications to operate without performance losses. This is assisted by containerisation, meaning resources are utilised effectively [8].
References
[2] Kubernetes: Application Coupling
[3] Kubernetes Components | Kubernetes
[4] Sensu | How Kubernetes works
[5] How Kubernetes works | InfoWorld
[6] How Do Applications Run on Kubernetes? | The New Stack
[7] Communication between Nodes and the Control Plane | Kubernetes