
Top 10 Kubernetes Tricks You Didn’t Know

Last Updated : 12 Mar, 2024

Kubernetes is a popular tool for building and running applications in the cloud. It provides a stable framework for managing containerized applications, and its simplicity and reliability have made it the leader in container orchestration. Yet even many senior developers are unaware of some of its lesser-known features.


Most people know the basics, but there are hidden features that can make it even better. This article will show you 10 tricks you can do with Kubernetes that you might not already know.

What is Kubernetes?

Kubernetes, also called K8s for short, is a free and open-source tool that helps you manage software built using containers. We can think of Kubernetes as a system that automatically puts your containers in the right places, scales them up or down when needed, and keeps them running smoothly. It was originally created by Google and has since become the industry standard for managing containerized applications across clusters of machines.

Kubernetes makes it easier to manage containerized applications. Instead of worrying about the underlying servers and settings, you just tell Kubernetes what you want your application to do. Kubernetes can then automatically scale your application up or down as needed, help the different parts of your application find each other, and keep things running smoothly even when problems occur. Because it is easy to use and works with many different cloud providers and tools, Kubernetes is a popular choice for building and running modern applications.


Top 10 Kubernetes Tricks You Didn’t Know

1. Pod Disruption Budgets

Your application runs as many small units known as pods; each pod wraps the containers that hold the application’s code. During maintenance or upgrades, voluntary disruptions (such as node drains) can take pods down. A Pod Disruption Budget (PDB) lets you control how many of these pods can be disrupted at the same time, so your application stays available even while maintenance is happening.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: example-app

The YAML above defines a Pod Disruption Budget named “example-pdb” that allows at most one pod matching the label “app: example-app” to be unavailable during a voluntary disruption such as a node drain or cluster upgrade.
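
Once the PDB is applied, you can check how much disruption headroom is left at any moment with standard kubectl commands (a quick sketch, assuming the manifest above has been created):

kubectl get pdb example-pdb        # shows MIN AVAILABLE / MAX UNAVAILABLE and ALLOWED DISRUPTIONS
kubectl describe pdb example-pdb   # shows the selector, currently healthy pods, and related events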

2. Affinity and Anti-Affinity Rules

Affinity and Anti-affinity rules are used to determine how computer programs or applications (referred to as “pods“) are placed on servers (referred to as “nodes“).

Affinity Rules: These rules express placement preferences or requirements for a pod, for example that a pod must (or should) be scheduled onto a node carrying a specific label.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: example-key
          operator: In
          values:
          - example-value

The YAML snippet above tells the scheduler that this pod may only be placed on a node carrying the label ‘example-key’ with the value ‘example-value’; because the rule is requiredDuringSchedulingIgnoredDuringExecution, it is a hard requirement rather than a preference.

Anti-Affinity Rules: These rules tell the scheduler which placements a pod should avoid, for example keeping replicas of the same application off the same node, as shown in the sketch below.
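
A minimal sketch of a pod anti-affinity rule that spreads replicas across nodes (the label app: example-app is carried over from the examples above; the rule itself is illustrative and not from the original article):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: example-app
      topologyKey: kubernetes.io/hostname

With this in place, the scheduler refuses to put two pods labeled app: example-app onto the same node (that is, onto the same kubernetes.io/hostname value).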


3. Resource Field Selectors

Resource field selectors help us filter and locate resources such as pods based on the values of specific fields. We pass them to kubectl with the --field-selector flag. Let us see an example to understand this better:

kubectl get pods --field-selector=status.phase=Running

This command helps you get a list of active pods. It looks at their “status.phase” field and specifically shows the ones that are currently in the “Running” phase. Basically, it helps us to find and display pods that are actively running, making it easy and efficient to see which ones are operational.
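
Field selectors can also be negated and chained together with commas. A small sketch (the node name node-1 is just a placeholder):

kubectl get pods --field-selector=status.phase!=Running
kubectl get pods --field-selector=metadata.namespace=default,spec.nodeName=node-1

The first command lists pods that are not running; the second lists pods in the default namespace that were scheduled onto the node node-1.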

4. Resource Optimization for Enhanced Cluster Performance

When we are working with Kubernetes, it is really important to make sure that our cluster uses its resources in the best way possible; this helps the cluster run smoothly and reliably. We do this by setting well-defined resource requests and limits, which tell the scheduler how much memory and CPU each container needs and how much it is allowed to consume. Let us look at the code below to understand better:

apiVersion: v1
kind: Pod
metadata:
  name: optimized-pod
spec:
  containers:
  - name: critical-app
    image: critical-app-image
    resources:
      requests:
        memory: "200Mi"
        cpu: "200m"
      limits:
        memory: "500Mi"
        cpu: "500m"
  - name: standard-app
    image: standard-app-image
    resources:
      requests:
        memory: "50Mi"
        cpu: "50m"
      limits:
        memory: "150Mi"
        cpu: "300m"

  • critical-app has higher resource requests and limits than standard-app because it is more important: it is guaranteed 200 MiB of memory and 200 milliCPU for normal operation and may burst up to 500 MiB of memory and 500 milliCPU if needed.
  • standard-app has smaller requirements, so it cannot starve the critical application of resources: it requests 50 MiB of memory and 50 milliCPU and is capped at 150 MiB of memory and 300 milliCPU.
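
To compare these settings with what the containers actually consume at runtime, you can use standard kubectl commands (the second one assumes the metrics-server add-on is installed in the cluster):

kubectl describe pod optimized-pod              # shows the configured requests and limits
kubectl top pod optimized-pod --containers      # shows live CPU and memory usage per container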

5. Pod Preset

Pod Presets make configuring pods easier by automatically injecting common settings, such as environment variables and volumes, into pods at creation time. Note that PodPreset was an alpha feature that was removed in Kubernetes 1.20, so it is only available on older clusters. Let us look at an example to understand Pod Presets better:

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example-podpreset
spec:
  selector:
    matchLabels:
      app: example-app
  env:
  - name: DB_HOST
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: db-host
Pod Presets act like a convenient tool that automatically adds particular setup information (like the DB_HOST environment variable) from a secret named ‘db-secret’ to any pod labeled with ‘app: example-app’. This makes it easy to set up pods consistently without needing to do it manually, making the deployment process smoother.
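
For the preset to work, the referenced secret has to exist first. A minimal sketch of creating it (the hostname value is purely illustrative and not from the original article):

kubectl create secret generic db-secret --from-literal=db-host=mysql.internal.example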

6. Custom Resource Definitions (CRDs) for Operators

In Kubernetes, a Custom Resource Definition (CRD) lets you teach the API server about a new resource type tailored to your exact needs; an operator then watches those custom resources and acts on them. Let’s look at an example with a MySQL operator to understand how we can create a custom resource definition:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mysqls.example.com
spec:
  group: example.com
  names:
    kind: MySQL
    plural: mysqls
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:                 # apiextensions.k8s.io/v1 requires a schema for every version
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

The above YAML code snippet creates a CRD named ‘mysqls.example.com’ for a special MySQL resource in the ‘example.com’ group.

  • Group: Specifies the category, here it’s ‘example.com.’
  • Names: Describes how this resource is identified; it’s named ‘MySQL,’ with instances called ‘mysqls.’
  • Scope: Decides if it works throughout the whole cluster (‘Cluster’) or only in specific namespaces (‘Namespaced’). Here it’s ‘Namespaced,’ limiting instances to individual namespaces.
  • Versions: Specifies versioning, using ‘v1alpha1,’ the first alpha version. ‘served: true’ means the version is exposed by the API, and ‘storage: true’ marks it as the version used for persisting objects; in apiextensions.k8s.io/v1 each version must also declare an OpenAPI v3 schema, as shown above.

It simplifies managing MySQL in Kubernetes, making it more adaptable and controlled. It acts as a custom toolkit for efficiently handling MySQL tasks in the Kubernetes cluster.
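
Once the CRD is registered, you can create instances of the new kind just like built-in resources. A hedged sketch of what such an object might look like (the name and the spec fields are hypothetical, since the article does not define a schema for the MySQL kind):

apiVersion: example.com/v1alpha1
kind: MySQL
metadata:
  name: my-database
spec:
  version: "8.0"      # hypothetical field an operator might interpret
  storageGi: 10       # hypothetical field

Because the CRD’s plural is ‘mysqls’, kubectl get mysqls would then list these objects like any other resource.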

7. Horizontal Pod Autoscaling (HPA) with Custom Metrics

In Kubernetes, Horizontal Pod Autoscaling (HPA) automatically adjusts the number of pod replicas based on observed metrics. Let us see an example of setting up HPA on a custom metric exposed through the custom metrics API (typically provided by an adapter such as prometheus-adapter).

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: example-metric
      target:
        type: AverageValue
        averageValue: "50"

The above YAML configuration sets up Horizontal Pod Autoscaling (HPA) for a deployment named ‘example-deployment.’ The working of each part:

  • scaleTargetRef: Points to the ‘example-deployment’ to apply HPA.
  • minReplicas: Sets the minimum pods (2) for a baseline level of availability.
  • maxReplicas: Caps the maximum pods at 10 to avoid resource overload.
  • metrics: Focuses on a custom metric named ‘example-metric.’
    • type: Pods: Indicates scaling based on a per-pod metric served by the custom metrics API.
    • metric.name: example-metric: Names the custom metric.
    • target.type: AverageValue with averageValue: "50": HPA adds or removes pods to keep the average value of ‘example-metric’ across pods close to 50.

This config tells HPA to dynamically change pod numbers in ‘example-deployment’ based on the ‘example-metric,’ aiming for an average value of 50. This ensures efficient resource use and responsiveness to application needs in a Kubernetes setup.
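
After applying the manifest, you can watch how the HPA evaluates the metric and scales the deployment (standard kubectl commands, assuming a custom metrics adapter is actually serving ‘example-metric’):

kubectl get hpa example-hpa          # shows current vs. target metric value and replica count
kubectl describe hpa example-hpa     # shows scaling events and any metric-collection errors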

8. Init Containers for Pre-Startup Tasks

We can use init containers in Kubernetes to execute tasks before the main application container starts running. Let us see a practical example expressed in YAML to better understand the concept:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: main-container
    image: example-app
  initContainers:
  - name: init-container
    image: init-image
    command: ['sh', '-c', 'echo Initializing...']

The above YAML code snippet is used to set up a pod named ‘example-pod’ in Kubernetes. It has two containers: ‘main-container‘ for the main app and ‘init-container‘ for pre-startup tasks. The init container runs the command [‘sh’, ‘-c’, ‘echo Initializing…’] before the main app starts. This division ensures essential setup tasks are done before the main app runs, making the deployment more orderly and controlled.
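
Because init containers must finish successfully before the main container starts, their progress and output are easy to inspect (standard kubectl commands, assuming the pod above has been created):

kubectl get pod example-pod                     # STATUS shows Init:0/1 while the init container runs
kubectl logs example-pod -c init-container      # prints the output of the init container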

9. Kubectl Plugins

Kubectl plugins are add-ons that extend what the regular kubectl command can do. The easiest way to discover and install them is the krew plugin manager, which works like a package manager for kubectl plugins. They upgrade our toolkit and make managing our applications smoother and more convenient.

kubectl krew install get-all
kubectl get-all

The first command installs the get-all plugin via krew and the second runs it. get-all gathers and displays practically every resource in the cluster, including types that a plain ‘kubectl get all’ skips, making it easier to see and manage everything in one place.
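
krew also ships subcommands for discovering and maintaining plugins (these assume krew itself is already installed):

kubectl krew search      # browse the available plugins
kubectl krew list        # show the plugins installed locally
kubectl krew upgrade     # upgrade installed plugins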

10. Kubectl Debug

We can use kubectl debug to troubleshoot a running pod without modifying its original containers. For example, running this command in a shell:

kubectl debug -it example-pod --image=busybox

kubectl debug attaches an ephemeral debug container (here based on the busybox image) to the target pod and opens an interactive shell in it. This lets us quickly find and fix problems right inside the pod’s environment, even when the application image has no shell or debugging tools of its own, which ultimately improves the maintenance and reliability of Kubernetes setups.
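
kubectl debug can also target other things than an ephemeral container in an existing pod. Two commonly used variants (the node name my-node and the images are just placeholders):

kubectl debug node/my-node -it --image=ubuntu                                # run a debugging pod on a node, with the host filesystem mounted under /host
kubectl debug example-pod -it --image=busybox --copy-to=example-pod-debug   # debug a copy of the pod, leaving the original untouched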


Conclusion

Kubernetes is an important tool in cloud-native app development, offering a reliable framework for managing containerized apps. We have discussed 10 lesser-known Kubernetes tricks, from Pod Disruption Budgets to kubectl plugins, that provide practical solutions to common challenges. Kubernetes automates deployment and scaling tasks, easing operations across varied environments. Whether you are optimizing resources or using advanced features like Horizontal Pod Autoscaling, these tricks will enhance your container orchestration capabilities.
 


