Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and load balancing of containers. It lets you create identical copies of an application stack that can run across multiple, dynamic environments. This is made possible through the various components that make up the Kubernetes architecture.
This orchestration tool keeps containerized applications reliable, future-ready, and scalable. It was created at Google and designed to:
- Automate replication and deployment of containers
- Scale container clusters in or out
- Balance load over a group of containers
- Roll out upgrades to application containers without affecting other features
Kubernetes also controls which network ports are exposed to systems outside the cluster, limiting the risk of unwanted access.
To better understand how this containerization framework works, it’s best to understand the key components of its architecture.
The Kubernetes master node hosts the control plane, where all decisions are made, from scheduling to responding to cluster events. It also decides which node each container is started on, based on resource requirements and other constraints.
Its key components are the API server, the data store (etcd), the controller manager, the cloud controller manager, and the scheduler.
Each component has a specific function. The API server, for example, is the only control plane component that you, as a user, interact with directly. The controller manager, on the other hand, runs the controllers that perform routine cluster tasks.
The cluster may also run a dashboard, a simplified web UI through which users can interact with the API server.
A small cluster may run a single master node that controls everything, but production clusters typically run several master nodes for high availability.
Worker nodes provide the Kubernetes runtime environment by running the actual containerized applications. Each worker node runs a primary node agent called the kubelet.
Pods are groups of one or more containers, together with a specification for how those containers should run. The containers within a pod share the same storage and network. When an application scales out or in, pods are added or removed, respectively.
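A minimal pod definition might look like the following sketch; the names and image here are illustrative, not from the original article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical pod name
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx:1.25     # illustrative container image
      ports:
        - containerPort: 80
```

In practice, pods are rarely created directly like this; a controller such as a Deployment creates and replaces them as the application scales.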
Now, the question is: how do all these parts work together to automate the operation, deployment, and scaling of containerized applications?
It starts with a cluster blueprint, a deployment file that declares how pods should run your containers. You feed the file to the API server, and the control plane communicates with the kubelet agents on the worker nodes to start and manage the containers.
The number of pods, and the worker nodes they run on, depend on the configuration you set.
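As a sketch of such a blueprint, a Deployment manifest declares a pod template, a desired replica count, and an upgrade strategy; all names in this example are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # hypothetical application name
spec:
  replicas: 3                   # desired number of pods
  selector:
    matchLabels:
      app: demo
  strategy:
    type: RollingUpdate         # replace pods gradually during upgrades
    rollingUpdate:
      maxUnavailable: 1         # at most one pod down at a time
  template:                     # pod template the controller stamps out
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25     # changing this tag triggers a rolling update
          ports:
            - containerPort: 80
```

Once this file is fed to the API server (for example, with `kubectl apply -f deployment.yaml`), the control plane is responsible for keeping three such pods running.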
In defining a cluster blueprint, several building blocks come into play; the most important are pods, services, namespaces, persistent volumes, ingress rules, network policies, ConfigMaps and Secrets, and controllers.
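To illustrate one of these building blocks: a Service gives a set of pods a single, stable network identity. A minimal sketch, assuming pods carry a hypothetical `app: demo` label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service        # hypothetical service name
spec:
  selector:
    app: demo               # routes traffic to pods with this label
  ports:
    - port: 80              # port the service exposes
      targetPort: 80        # port the container listens on
```

Because the Service selects pods by label rather than by name, pods can come and go during scaling or upgrades without clients noticing.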
Despite the many components that make up the Kubernetes architecture, it is not a platform that is ready to use out of the box. It requires a surrounding stack that fills in the gaps and allows the architecture to run at scale. This is where a platform such as Kublr can be very useful, adding the operational and governance features needed to manage Kubernetes clusters in different environments, including cloud and air-gapped environments. If you’re interested in such an approach, you can find more details on Kublr’s website.