
Why Kubernetes? Benefits of using Kubernetes

Last Updated : 20 Dec, 2023

The popularity of container orchestration technologies, especially Kubernetes, comes from the use cases they serve and the problems they solve. Kubernetes is the most popular container orchestration tool and is widely used across the industry.

The Cloud Native Computing Foundation (CNCF), the foundation to which Google donated the Kubernetes project, estimates that about 92% of businesses that use a container orchestration tool use Kubernetes. In this article we discuss the benefits of container orchestration tools, especially Kubernetes, focusing on core use cases such as scalability and disaster recovery rather than general facts such as Kubernetes being open source.

Why Kubernetes?

To understand why we need Kubernetes, we first need to understand containers. Once our application is packaged into various containers, we have to manage those containers to ensure the application remains available to its users without downtime. A key feature of containers is that they are small and light enough to use within our development environment, which gives us high confidence that our production environment is as similar as possible to that development environment.

Benefits of using Kubernetes

Kubernetes has tons of advantages when it comes to container orchestration. The benefits of Kubernetes depend on who is using it and how, but some of the most important features and benefits it provides are as follows:

1. High Availability and Scalability

(i) High Availability

In deployment, high availability refers to the ability of an application to remain accessible to its users even when the servers face disruptions such as a server crash. Using Kubernetes, or container orchestration in general, our applications can achieve high availability: even if a Pod dies, another Pod is created to take its place in the cluster.

(ii) High Scalability

High scalability refers to whether your deployment can efficiently adapt to an increase or decrease in the number of requests coming to the server. An application can be called scalable if it works fine when 10 concurrent visitors are using it, as well as when 1,000 visitors are using it, without the servers crashing.

Scalability can be achieved by horizontal or vertical scaling, and Kubernetes supports both horizontal and vertical autoscaling. With Kubernetes, horizontal scaling can be done at both the node and the Pod level, while vertical scaling is only possible at the Pod level.
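As a concrete illustration, here is a minimal sketch of Pod-level horizontal autoscaling. The Deployment name "my-app" and the thresholds are illustrative assumptions, not values prescribed by Kubernetes:

```yaml
# Minimal HorizontalPodAutoscaler: Kubernetes adds or removes replicas
# of the (assumed) "my-app" Deployment based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # assumed Deployment name
  minReplicas: 2               # never drop below 2 replicas (availability)
  maxReplicas: 10              # cap the scale-out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add Pods when average CPU exceeds 80%
```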

How does Kubernetes Achieve it?

For example, suppose we have two worker nodes in a Kubernetes cluster: Server One and Server Two. Each of these servers holds a replica of an application called "my-app" and a database application. We also have an Ingress component that handles every incoming request to the application, so if someone accesses the "my-app" website in a browser, the request first comes in to the Ingress. The Ingress is load-balanced, so we have replicas of it on multiple servers. The Ingress forwards the request to the Service for our application; the Service is a load balancer that directs the request to one of the respective Pod replicas.

In this entire process, from the request's entry point into the cluster to its final endpoint, every component is replicated and load-balanced, which means there is no bottleneck where request handling could stall the whole application and slow responses for users. Even if Server Two crashes completely and all the Pods running on it die, we would still have replicas of our application running, so there is no downtime. In the meantime, a Kubernetes master process, the Controller Manager, creates new replicas of the dead Pods, and the Scheduler places them on another worker node, say Server Three, recovering the previous load-balanced, replicated application state. While the worker nodes do the actual work of running the applications, the master processes on the master nodes monitor the cluster state and make sure that if a Pod dies it is automatically restarted, and if something crashes in the cluster it is automatically recovered.
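The replicated setup described above can be written declaratively. Below is a minimal sketch using an assumed placeholder image; the Deployment keeps the desired number of "my-app" replicas running, and the Service load-balances across them:

```yaml
# Deployment: keeps two replicas of "my-app" running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                    # roughly one replica per worker node
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25      # placeholder image for illustration
          ports:
            - containerPort: 80
---
# Service: load-balances requests across whichever replicas are healthy.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```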

An important master component that keeps the cluster mechanism running properly is the etcd store. etcd stores the cluster state (such as the resources available on each Node, the state of each Pod, and so on) at any given time.

2. Disaster Recovery

Disaster recovery, as the name suggests, is a set of practices to ensure that operations are restored after any disruption to our deployment. The disruption could be a system failure, a natural disaster, or human error.

For Container orchestrators like Kubernetes, disaster recovery can be broken down into two phases:

  1. Backup: the process of preserving the data.
  2. Recovery: the process of restoring the system data after a disaster has occurred.

How does Kubernetes Achieve it?

How Kubernetes achieves disaster recovery can be understood with the help of etcd. etcd always holds the current state of the cluster, which makes it a crucial component in the disaster recovery of Kubernetes clustered applications. Disaster recovery can be implemented by creating etcd backups and storing them in remote storage. These backups take the form of etcd snapshots. Kubernetes itself does not manage or take care of backing up etcd snapshots to remote storage; that is the responsibility of the Kubernetes cluster administrator. The storage could be completely outside the cluster, on a different server or even in cloud storage. Note that etcd does not store database or application data. That data is usually also kept on remote storage, which the application Pods reference so that they can read and write it. This remote storage, just like the etcd snapshot backup location, is not managed by Kubernetes, so it must be reliably backed up and kept outside the cluster.
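Since the backup step is left to the administrator, it is commonly automated. Below is a hypothetical sketch of a nightly backup CronJob; the etcd endpoint, certificate paths, image, and backup directory are kubeadm-style assumptions and vary between clusters:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "0 2 * * *"            # run every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true        # reach etcd on the control-plane node
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""
          tolerations:
            - key: node-role.kubernetes.io/control-plane
              effect: NoSchedule
          restartPolicy: OnFailure
          containers:
            - name: etcd-backup
              image: bitnami/etcd:3.5        # any image that ships etcdctl
              command: ["/bin/sh", "-c"]
              args:
                - etcdctl --endpoints=https://127.0.0.1:2379
                  --cacert=/etc/kubernetes/pki/etcd/ca.crt
                  --cert=/etc/kubernetes/pki/etcd/server.crt
                  --key=/etc/kubernetes/pki/etcd/server.key
                  snapshot save /backup/etcd-snapshot.db
              volumeMounts:
                - name: etcd-certs
                  mountPath: /etc/kubernetes/pki/etcd
                  readOnly: true
                - name: backup
                  mountPath: /backup
          volumes:
            - name: etcd-certs
              hostPath:
                path: /etc/kubernetes/pki/etcd   # kubeadm default cert path
            - name: backup
              hostPath:
                path: /var/backups/etcd  # copy this off-cluster afterwards
```

As the article notes, the resulting snapshot file must then be shipped to storage outside the cluster; Kubernetes does not do that for you.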

Now, even if the whole cluster crashes, including the worker nodes and the master nodes, it would be possible to recover the cluster state on completely new machines, with new worker and master nodes, using the etcd snapshot and the application data. We can even avoid downtime between the cluster crash and the creation of a new cluster by keeping a standby cluster that can immediately take over when the active cluster crashes or dies.

3. Easier Replication

Replication simply means creating copies of the application so that the application can be scaled. Once multiple replicas of an application exist, the incoming requests can be divided among them, and this is how the application scales.

How Kubernetes Achieves it

Replication is also possible in services like AWS (Amazon Web Services), but Kubernetes stands out because it makes replication much easier. The only thing you have to do is declare how many replicas of a certain application you want, be it your own application or a database, and a Kubernetes component takes care of actually replicating it.
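For instance, scaling is purely declarative. Here is a hypothetical strategic-merge patch file that bumps the "my-app" Deployment from the earlier sketch to five replicas:

```yaml
# replicas-patch.yaml -- apply with:
#   kubectl patch deployment my-app --patch-file replicas-patch.yaml
spec:
  replicas: 5    # Kubernetes creates or removes Pods to match this count
```

Editing the replicas field in the original manifest and re-running kubectl apply, or running kubectl scale deployment my-app --replicas=5, achieves the same result.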

4. Self Healing

In terms of deployment, self-healing is a feature that allows the system to notice failures or issues in the system and automatically recover from them without any intervention from the administrator.

How Kubernetes Achieves it

Kubernetes has a self-healing feature. This means that if a Pod dies, there are processes that monitor the cluster state, detect that a replica has died, and automatically start a new one in its place.
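Self-healing also works at the container level through health probes. Below is a minimal sketch; the /healthz endpoint and the image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-selfheal-demo
spec:
  containers:
    - name: my-app
      image: nginx:1.25            # placeholder image
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /healthz           # assumed health endpoint
          port: 80
        initialDelaySeconds: 5     # give the app time to start
        periodSeconds: 10          # probe every 10 seconds
        failureThreshold: 3        # restart after 3 consecutive failures
```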

5. Smart Scheduling

In Kubernetes, smart scheduling is a feature that provides an advanced approach to scheduling workloads: we as administrators only have to request new replicas, and Kubernetes automatically figures out where those replicas should run.

How Kubernetes Achieves it

Smart scheduling is one of Kubernetes' standout features. Suppose we have 50 worker servers on which our application containers will run. With Kubernetes, we don't have to decide where to run each container; we just say that we need a new replica of a Pod, and the Kubernetes scheduler finds the best-fitting worker Node among those 50 to place the container on. It does this by comparing how many resources are currently available on each worker node.
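The main input to this decision is the Pod's resource requests. A minimal sketch, with illustrative figures:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-replica
spec:
  containers:
    - name: my-app
      image: nginx:1.25        # placeholder image
      resources:
        requests:
          cpu: "250m"          # scheduler only places the Pod on a Node
          memory: "128Mi"      # with this much free CPU and memory
        limits:
          cpu: "500m"          # hard ceiling enforced at runtime
          memory: "256Mi"
```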

Conclusion

Kubernetes has tons of advantages when it comes to container orchestration and is the most widely used container orchestration tool across industries. It offers features like high availability, high scalability, smart scheduling, self-healing, and much more. Make sure to go through these features; it will really help you understand why Kubernetes has emerged as an industry standard.

FAQs On Kubernetes

1. What do you mean by high availability?

High availability refers to the ability of an application to be accessible to the users even when the servers are facing disruptions like a server crash.

2. Does Kubernetes support Disaster Recovery?

Yes. Disaster recovery can be implemented for Kubernetes clusters by backing up etcd snapshots and application data to remote storage and restoring from them after a failure.

3. Who developed Kubernetes?

Kubernetes was developed by Google and later donated to the CNCF (Cloud Native Computing Foundation).

4. When was Kubernetes created?

Kubernetes was first released by Google on 9 September 2014.

5. What is etcd in Kubernetes?

etcd is a Kubernetes component that is used as a key-value store for the cluster data.


