Node Affinity in Kubernetes

Last Updated : 17 Mar, 2023

Prerequisites: Kubernetes

Node affinity in Kubernetes is the ability to constrain a pod to a specific node or group of nodes in a cluster based on specific criteria. It is used to guarantee that particular pods are located on particular nodes, which facilitates better resource management and performance optimization of the application.

In Kubernetes, a node is a physical or virtual machine that runs one or more pods. Pods are the smallest deployable units in Kubernetes and are used to run containerized applications. With node affinity, specific pods can be scheduled on particular nodes based on a variety of factors, such as the node’s CPU or memory capacity or its location within a particular region or data center.

Types of Node Affinity in Kubernetes: 

There are two broad kinds: required node affinity and preferred node affinity. Required node affinity specifies a strict rule for which nodes a pod may be scheduled on. The rule can be based on the node’s CPU or memory capacity, its location in a particular region or data center, or any other label that has been applied to the node; if no node satisfies the rule, the pod remains unscheduled.

Preferred node affinity, on the other hand, is a suggestion rather than a strict requirement: the scheduler tries to place the pod on a node that matches the specified label, but if no such node exists, the pod can still be scheduled on other nodes.

Node affinity is a powerful resource that may be used to improve Kubernetes cluster performance and resource usage, but it also has pros and cons. The following are some of the advantages and disadvantages of node affinity:

Advantages:

  1. Better Resource Utilization: Node affinity aids Kubernetes clusters in making better use of their resources by ensuring that pods are scheduled on nodes that have the necessary resources. The performance of the application can be improved in this way.
  2. Increased Control: Node affinity gives Kubernetes administrators more control over the placement of their pods, which can be extremely beneficial for applications that require specific hardware or network resources.
  3. Improved Availability: By ensuring that pods are scheduled on particular nodes or groups of nodes, node affinity can improve the availability of applications in Kubernetes clusters.

Disadvantages:

  1. Complexity: Node affinity can significantly increase the complexity of Kubernetes clusters, especially for administrators who are unfamiliar with the platform. It might be difficult to solve problems or maximize resource use as a result.
  2. Increased Overhead: Node affinity adds overhead to Kubernetes clusters because it requires more configuration and maintenance than simply letting the scheduler distribute pods as it sees fit.
  3. Limited Scalability: Node affinity can also limit the scalability of Kubernetes clusters, because scaling up or adding new nodes that satisfy the same affinity criteria can be difficult or time-consuming.

Node selector vs node affinity

In Kubernetes, the concepts of node selector and node affinity are used to control the scheduling of pods onto the required cluster nodes.

  • Node Selector: Node Selector chooses which nodes a pod may be scheduled onto. To use it, a set of key-value pairs that match labels on nodes is specified in the pod specification; the pod is scheduled only onto nodes whose labels match the selector. Node Selector is helpful when your cluster has just a few nodes and you want to make sure that particular pods are scheduled onto particular nodes based on their labels.
  • Node Affinity: Node Affinity is a more expressive way of defining how pods should be scheduled onto nodes. It lets you write more complex rules based on node labels, such as requiring that the pod be scheduled onto a node with a particular label or one that meets certain criteria. Related anti-affinity rules can also be expressed, which ensure that pods are not scheduled onto nodes with specific labels. Node Affinity is helpful when your cluster has many nodes and you want more precise control over how pods are placed on them.

In short, Node Selector provides an easy method for choosing nodes based on labels, whereas Node Affinity offers more advanced capabilities for choosing and avoiding nodes based on complex rules.

Node Affinity Types:

Based on node properties, Node Affinity is used to specify scheduling preferences for pods. Node Affinity can be further categorized into three types:

1. RequiredDuringSchedulingRequiredDuringExecution: 

This is a hard rule. The pod must be scheduled on a node that complies with the node affinity criteria; if no node in the cluster satisfies the rule, the pod will remain unscheduled. If the node’s labels are changed later so that the rule no longer matches, the pod will be evicted. (Note that, at the time of writing, this type is planned but not yet implemented in Kubernetes; only the two IgnoredDuringExecution variants below are available in practice.)

affinity:
  nodeAffinity:
    requiredDuringSchedulingRequiredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: <label-key>
          operator: In
          values:
          - <label-value>

In this case, the label key-value pair specified in the “nodeSelectorTerms” prevents the pod from being scheduled on nodes that do not have it.

2. RequiredDuringSchedulingIgnoredDuringExecution:

This is the second hard rule. The pod will be scheduled only onto a node whose labels match the rule. However, if the node’s labels are changed after the pod is running, the pod will not be evicted.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: <label-key>
          operator: In
          values:
          - <label-value>

3. PreferredDuringSchedulingIgnoredDuringExecution: 

This is a soft rule. It expresses a preference for where a pod should be scheduled rather than a requirement: the scheduler tries to place the pod on a node that matches the rule, but if none of the cluster’s nodes match, the pod will still be scheduled on a node that does not. Below is an example of PreferredDuringSchedulingIgnoredDuringExecution:

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: <label-key>
          operator: In
          values:
          - <label-value>

In this case, the scheduler prefers a node that has the label key-value pair specified under “matchExpressions”. If no node meets this preference, the pod will still be scheduled on a different node.

Command to see the existing labels of the nodes:

kubectl get nodes --show-labels

Command to set a new label on a node:

kubectl label nodes <node-name> <label-key>=<label-value>

Below is a demonstration of node affinity:

To use the node selector in Kubernetes, you must include the selector rules in the pod’s YAML definition file. For example, the YAML file below defines a pod with a required node selector that targets nodes carrying an “nginx” label.

Simple node-selector example
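The original example is shown only as an image; a minimal sketch of such a pod spec might look like the following. The label key-value pair (app: nginx) is an assumption, since the original only mentions an “nginx” label:

```yaml
# Hypothetical pod spec; the label key "app" is an assumption,
# as the original image only mentions an "nginx" label.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    app: nginx
```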

 

In this example, the ‘nodeSelector’ field expresses a hard requirement for nodes that carry the ‘nginx’ label. This means that the pod will only be scheduled on nodes that contain this label.

Let’s take another example of Node Affinity. For this, we will first create a deployment. The deployment will have the following features:

  • name: Blue
  • image: nginx
  • replicas: 3

This deployment can be created either declaratively (with a manifest file) or imperatively. For simplicity, we will use the imperative approach and create the deployment using the command:

creating deployment imperative way
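The screenshot of this command is missing; on recent Kubernetes versions, the imperative command for a deployment named blue with image nginx and 3 replicas would be:

```shell
# Create the "blue" deployment with the nginx image and 3 replicas
kubectl create deployment blue --image=nginx --replicas=3
```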

 

To check whether or not the deployment was created successfully, execute the command:

Listing the deployments
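The screenshot is missing; the standard command to list deployments is:

```shell
# List deployments; the READY column should show 3/3 once the pods are up
kubectl get deployments
```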

 

There are two nodes in this cluster: the control plane and node01.

List of nodes
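The screenshot is missing; the command to list the nodes in the cluster is:

```shell
# List cluster nodes; here it would show the control plane and node01
kubectl get nodes
```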

 

For this example, we want to set node affinity on the deployment so that the pods are placed on node01 only.

  1. Name of the deployment: blue
  2. Replicas: 3
  3. Image: nginx
  4. NodeAffinity: requiredDuringSchedulingIgnoredDuringExecution
  5. Key: color
  6. value: blue
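For the required rule to match node01, node01 must actually carry the color=blue label. Using the labeling command shown earlier, that would be:

```shell
# Label node01 so that the affinity rule below can match it
kubectl label nodes node01 color=blue
```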

In order to achieve this, we will edit the deployment using:

edit deployment
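The screenshot is missing; the command to open the deployment manifest in an editor is:

```shell
# Open the live manifest of the "blue" deployment for editing
kubectl edit deployment blue
```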

Once inside the editor, add the following block at the same level as the ‘containers’ field in the pod template’s spec:

Manifest of node affinity
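The manifest screenshot is not reproduced here; based on the requirements above, the affinity block placed at the same level as containers would look roughly like:

```yaml
# Sketch of the affinity block; goes in the pod template's spec,
# alongside the containers field.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: color
          operator: In
          values:
          - blue
```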

 

The “spec” field describes the desired state of a Kubernetes object; here it is where the pod’s affinity rules are defined.

The “affinity” field holds the node affinity and any other scheduling preferences for the pod. The “nodeAffinity” field within it specifies the pod’s preferences for the nodes on which it is scheduled.

The “requiredDuringSchedulingIgnoredDuringExecution” field states that the nodeSelectorTerms requirements must be met for the pod to be scheduled. The rules are ignored while the pod is executing, so if the node’s labels change while the pod is running, the pod will not be evicted.

The ‘nodeSelectorTerms’ field defines the requirements a node must satisfy for scheduling; in this instance it contains a ‘matchExpressions’ field.

The ‘matchExpressions’ field specifies a set of label selectors to match against the labels on the node. Here the operator ‘In’ matches the value ‘blue’ against the ‘color’ label key.

Thus, this code directs that the pod be scheduled on nodes that have the label “color” and the value “blue” assigned to them.

Once we have updated the file, the pods should be rescheduled onto node01. To check the placement, run the command:
Describe pod
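The screenshot is missing, and the exact command used in the original is not shown; either of these reveals the node a pod landed on:

```shell
kubectl describe pod <pod-name>   # the "Node:" field shows the placement
kubectl get pods -o wide          # the NODE column shows it for all pods
```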

 

As we can see, all the pods are deployed on node01. In conclusion, your Kubernetes cluster’s performance and resource efficiency may be enhanced by using node affinity. Ensuring that your pods are scheduled on nodes with the resources they require can help to reduce resource contention and improve application performance.


