How To Set Up Master-Slave Architecture Locally in Kubernetes?

  • Last Updated : 12 Jan, 2023

Prerequisite: Kubernetes

Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. Originally designed by Google, it is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes helps you deploy and manage containerized applications at scale, efficiently and resiliently, and provides features such as declarative configuration, self-healing, and horizontal scaling.

Components in Kubernetes Architecture

  • Master Node: The master node is the central control plane of a Kubernetes cluster. It is responsible for managing the cluster and the various nodes in it. The master node consists of several components, including the API server, etcd, scheduler, and controller manager.
  • Worker Node: Worker nodes are the machines (virtual or physical) where your applications are deployed and run. These nodes run the required services to execute the containers, such as container runtime and kubelet.
  • Pod: A pod is the smallest deployable unit in Kubernetes. It is a logical host for one or more containers. All containers in a pod run on the same node and share the same network namespace.
  • Service: A service is an abstraction that defines a logical set of pods and a policy by which to access them. A service can be exposed as a load balancer or a DNS entry.
  • Volume: A volume is a persistent storage for your application data. It allows your application to retain data even if the pod or the node fails.
  • Namespace: A namespace is a virtual cluster inside a physical cluster. It is used to divide resources and limit access to resources within a cluster.
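To make these concepts concrete, here is a sketch of a minimal Pod manifest. The name, labels, and image here are illustrative choices, not taken from a real cluster:

```yaml
# A minimal Pod: the smallest deployable unit, hosting one container.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # illustrative name
  namespace: default       # every pod lives inside a namespace
  labels:
    app: example
spec:
  containers:
    - name: web
      image: nginx:1.14.2  # container image the pod runs
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods; Deployments (as in the example later in this article) manage pods for you and recreate them if they fail.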


What is Minikube?

Minikube is a tool that lets you run Kubernetes locally on your computer. It is a lightweight, easy-to-use, single-node Kubernetes cluster that runs inside a Virtual Machine (VM) on your local machine, which makes it well suited to developing and testing applications.

To use Minikube, you need to install a hypervisor (such as VirtualBox or VMware Fusion) and the Minikube binary on your local machine. Then, you can start Minikube and deploy your applications to the cluster. Minikube also provides a built-in dashboard that you can use to manage and monitor your applications.

Minikube is a useful tool for developers who want to test their applications in a Kubernetes environment without the need for a full-fledged cluster. It is also useful for learning and experimentation with Kubernetes.


Steps to Install Minikube

  • Install a Hypervisor: Minikube requires a hypervisor to run a virtual machine on your local machine. You can use a popular hypervisor such as VirtualBox or VMware Fusion.
  • Install Minikube: Download the Minikube binary and install it on your machine. On macOS, you can use Homebrew to install Minikube: brew install minikube.
  • Start Minikube: Once you have installed Minikube, you can start it using the minikube start command. This command will create a new virtual machine and start a single-node Kubernetes cluster inside it.
  • Verify the Installation: You can verify the installation by checking the Minikube status and accessing the dashboard. To check the status, use the minikube status command. To access the dashboard, use the minikube dashboard command.

That’s it! You have successfully installed Minikube on your local machine. You can now deploy your applications to the Minikube cluster and start experimenting with Kubernetes.

Note: These are the basic steps to install Minikube. The exact steps may vary depending on your operating system and the hypervisor you are using.

Let’s test whether Kubernetes is working with an example.

Deploying Nginx in the Minikube Cluster

Step 1: Create nginx-deployment.yaml file

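A manifest matching the description that follows (three replicas, the app: nginx label, the nginx:1.14.2 image, port 80) would look roughly like this:

```yaml
# nginx-deployment.yaml — sketch reconstructed from the description below
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3                 # three replicas of the NGINX container
  selector:
    matchLabels:
      app: nginx              # manage pods carrying this label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80   # NGINX listens on port 80
```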


  • This Deployment file defines a deployment called nginx-deployment that consists of three replicas of an NGINX container. The pods are labeled app: nginx, run the nginx:1.14.2 image, and listen on port 80.
  • To create the deployment in your Kubernetes cluster, use the kubectl apply command:

$ kubectl apply -f nginx-deployment.yaml

Step 2: Create Nginx-Service.yaml to Expose the deployment

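A service manifest matching the description that follows (a LoadBalancer service named nginx-service, forwarding port 80 to the deployment's pods via the app: nginx label) would look roughly like this:

```yaml
# Nginx-Service.yaml — sketch reconstructed from the description below
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer    # exposes the service outside the cluster
  selector:
    app: nginx          # targets the pods created by nginx-deployment
  ports:
    - port: 80          # port the service listens on
      targetPort: 80    # port the NGINX containers listen on
```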


  • This service definition creates a LoadBalancer service called nginx-service that exposes the nginx-deployment deployment on port 80. On a cloud provider, the service would automatically be assigned an external IP address that you can use to access the NGINX application from outside the cluster; on Minikube, you can reach it with the minikube service command (see Step 4).
  • To create the service in your Kubernetes cluster, use the kubectl apply command:

$ kubectl apply -f Nginx-Service.yaml

Step 3: The kubectl get all command lists the pods, services, deployments, and replica sets in the cluster, showing the name, ready status, age, and other details of each resource.

$ kubectl get all


Step 4: Generating URL to Access our Webpage

The minikube service command is used to access a service in a Minikube cluster. It opens the service in a web browser, or you can pass the --url flag to print the service’s URL to the console instead, regardless of the service type.

$ minikube service nginx-service --url


Step 5: Accessing our service through a web browser


Opening the generated URL in a browser displays the default NGINX welcome page, confirming that the deployment is working.
