
What Is Container Network Interface (CNI)?

Last Updated : 08 Apr, 2024

Networking within Kubernetes clusters depends largely on the Container Network Interface (CNI). CNI is an important component of the Kubernetes environment that enables communication between containers and with other networks. Let’s briefly discuss the Container Network Interface (CNI).

What Is The Container Network Interface (CNI)?

The Container Network Interface (CNI) is a framework for dynamically configuring network resources. It consists of a specification and a set of libraries written in Go. The plugin specification defines an interface for configuring the network, assigning IP addresses, and maintaining connectivity across multiple hosts.

When used with Kubernetes, CNI integrates with the kubelet, allowing an overlay or underlay network to automatically configure networking between pods. Overlay networks encapsulate network traffic behind a virtual interface, such as Virtual Extensible LAN (VXLAN), while underlay networks are physical networks made up of switches and routers.

Once the network configuration type is defined, the container runtime determines which network the containers join. The runtime adds the interface to the container’s network namespace via the CNI plugin and allocates addresses and routes for the associated subnetwork via the IP Address Management (IPAM) plugin.
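To make the configuration side concrete, here is a small Go sketch that parses a minimal CNI network configuration of the kind the runtime reads from disk (commonly from /etc/cni/net.d). The plugin names (bridge, host-local), the bridge device, and the subnet are illustrative placeholders, not a recommendation for any particular cluster.

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal structs mirroring common fields of a CNI network configuration.
// Real configurations can carry many more plugin-specific keys.
type IPAM struct {
	Type   string `json:"type"`
	Subnet string `json:"subnet,omitempty"`
}

type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
	Bridge     string `json:"bridge,omitempty"`
	IPAM       IPAM   `json:"ipam"`
}

func main() {
	// Illustrative configuration: a bridge plugin with host-local IPAM.
	// The network name, bridge device, and subnet are placeholders.
	raw := `{
	  "cniVersion": "1.0.0",
	  "name": "demo-net",
	  "type": "bridge",
	  "bridge": "cni0",
	  "ipam": {
	    "type": "host-local",
	    "subnet": "10.244.0.0/16"
	  }
	}`

	var conf NetConf
	if err := json.Unmarshal([]byte(raw), &conf); err != nil {
		panic(err)
	}
	fmt.Printf("network %q uses plugin %q with IPAM %q\n",
		conf.Name, conf.Type, conf.IPAM.Type)
}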

CNI supports Kubernetes networking and is compatible with other Kubernetes-based container management platforms, such as OpenShift. CNI takes a software-defined networking (SDN) approach to unify container communication throughout clusters.

CNI Architecture

CNI is driven by a simple plugin-based architecture. When a pod is created in Kubernetes, the container runtime (such as containerd or CRI-O) calls the configured CNI plugins to set up the pod’s network environment. Plugins can be written in different programming languages and communicate with the container runtime over standard input and output. To set up networking for containers, they make use of the Linux networking stack.
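As a toy illustration of this stdin/stdout protocol (not a production plugin, which would normally build on the CNI project’s plugin helper libraries and actually create interfaces), the sketch below reads the network configuration from stdin, checks the operation requested in the CNI_COMMAND environment variable, and prints a JSON result on stdout. The address it reports is a placeholder.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// Toy CNI-style "plugin": it only demonstrates the stdin/stdout contract.
// It does not create interfaces or allocate real addresses.
func main() {
	// The runtime passes the network configuration as JSON on stdin.
	confBytes, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to read config:", err)
		os.Exit(1)
	}

	var conf map[string]interface{}
	if err := json.Unmarshal(confBytes, &conf); err != nil {
		fmt.Fprintln(os.Stderr, "invalid config:", err)
		os.Exit(1)
	}

	// The requested operation (ADD, DEL, CHECK, ...) arrives via CNI_COMMAND.
	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		// A real plugin would create the interface named by CNI_IFNAME inside
		// the namespace given by CNI_NETNS, then report the interfaces and IPs
		// it configured. Here we print a placeholder result.
		result := map[string]interface{}{
			"cniVersion": conf["cniVersion"],
			"ips": []map[string]string{
				{"address": "10.244.1.5/24"}, // illustrative address only
			},
		}
		json.NewEncoder(os.Stdout).Encode(result)
	case "DEL":
		// Cleanup would happen here; success needs no particular output.
	default:
		fmt.Fprintln(os.Stderr, "unsupported CNI_COMMAND")
		os.Exit(1)
	}
}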


Why Is Kubernetes CNI Used?

The technologies around Linux-based containers and container networking are constantly evolving to support applications that run in a variety of environments. CNI is a Cloud Native Computing Foundation (CNCF) project that describes how network interfaces for Linux containers should be configured.

CNI was developed so that networking solutions could be integrated with various container management systems and runtimes. Rather than hard-wiring a particular networking solution into each runtime, it specifies a common interface standard between the networking layer and the container execution layer.

CNI deals with network connectivity for containers and with releasing the allocated resources when containers are terminated. Because of this narrow focus, the CNI specification is easy to understand and widely adopted. Additional information about the specification, including the third-party plugins and runtimes that use it, can be found in the CNI GitHub project.

How To Implement CNI?

Let’s look at an example of a Kubernetes cluster running multiple pods to get a better understanding of CNI. Suppose we want to trace how two pods, A and B, are connected.

Network Setup by the Container Runtime: After creating pod A, the container runtime invokes the configured CNI plugin to set up networking for pod A. After reading the pod’s network configuration, the CNI plugin assigns an IP address to the pod’s container.

Network Environment Set Up by the CNI Plugin: The CNI plugin creates a network interface with the assigned IP address inside pod A’s container. It also sets up any required routing rules and network policies.

Pod B Setup: In the same way, after pod B is created, the container runtime calls the CNI plugin, which allocates an IP address to the container within pod B and establishes the necessary network environment.

Network Connectivity: Pods A and B can now communicate using the network interfaces and IP addresses that the CNI plugin assigned. Depending on how the network is configured, this communication may stay within the cluster or reach external networks.
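To see what "the container runtime calls the CNI plugin" means mechanically, here is a rough sketch of the call a runtime makes for pod A: it executes the plugin binary with the pod’s details in CNI_* environment variables and the network configuration on stdin. The plugin path, container ID, and network namespace path below are illustrative placeholders, and real runtimes typically use the libcni Go library rather than shelling out like this.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// Rough sketch of the runtime side of a CNI ADD operation for a new pod.
// All paths and IDs are illustrative placeholders.
func main() {
	conf := []byte(`{
	  "cniVersion": "1.0.0",
	  "name": "demo-net",
	  "type": "bridge",
	  "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	}`)

	cmd := exec.Command("/opt/cni/bin/bridge") // hypothetical plugin location
	cmd.Stdin = bytes.NewReader(conf)          // network configuration goes to stdin
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",                 // add the container to the network
		"CNI_CONTAINERID=pod-a-container", // illustrative container ID
		"CNI_NETNS=/var/run/netns/pod-a",  // network namespace of pod A
		"CNI_IFNAME=eth0",                 // interface name inside the pod
		"CNI_PATH=/opt/cni/bin",           // where plugins (e.g. the IPAM plugin) live
	)

	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("plugin failed:", err)
	}
	// On success the plugin prints a JSON result describing the interfaces and IPs it set up.
	fmt.Println(string(out))
}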

CNI Plugins: A large selection of CNI plugins is available to meet various networking needs. Weave, Canal, Flannel, and Calico are a few well-known examples. These plugins provide features such as load balancing, security policies, network isolation, and integration with other network resources.

CNI in Action: For instance, to use CNI with the Calico plugin in a Kubernetes cluster, you would:

  • Install the Calico Plugin: The first step is installing the Calico CNI plugin in your Kubernetes cluster. This can be done with a package manager such as Helm or by applying the appropriate manifest files.
  • Set Up Calico Networking: After installation, configure Calico to fit your networking needs. This includes setting up IP pools, network policies, and any other required security settings.
  • Create Pods: At this point, create pods in your cluster. Because Calico is acting as the CNI plugin, the pods automatically get network interfaces, IP addresses, and connectivity.
  • Verify Connectivity: You can confirm that the network works by communicating between pods using their assigned IP addresses (a small sketch follows this list). If configured, you can also test connectivity to external networks.
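As a simple illustration of that verification step, the sketch below dials a TCP port on another pod’s IP address (for example, run from a debug pod in the same cluster). The IP address and port are placeholders; substitute the address Calico assigned to the target pod and a port its container actually listens on.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder values: replace with the target pod's assigned IP and an
	// open port on it (for example, a pod running a web server on port 80).
	target := "10.244.1.5:80"

	conn, err := net.DialTimeout("tcp", target, 3*time.Second)
	if err != nil {
		fmt.Println("pod-to-pod connectivity failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("pod-to-pod connectivity OK:", conn.RemoteAddr())
}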

Pod Networking

The basic concepts of Kubernetes pod networking, based on the Kubernetes network model, are as follows:

  • Every pod has an IP address that is unique across the entire cluster.
  • Pods can communicate with each other across nodes without NAT.
  • Agents on a node (such as the kubelet) can communicate with every pod on that node.

CNI Based on Network Models

CNI networks can be implemented with either encapsulated or unencapsulated network models. An example of an encapsulated model is Virtual Extensible LAN (VXLAN), while an example of an unencapsulated model is Border Gateway Protocol (BGP).

Encapsulated Networks

This model encapsulates a logical Layer 2 network on top of an existing Layer 3 network topology that spans the Kubernetes nodes. Because the Layer 2 network is isolated, no route distribution is required; the trade-off is slightly larger IP packets and extra processing, since the overlay encapsulation wraps the original packet in an additional IP header.

In Kubernetes, the encapsulated traffic is exchanged between workers over UDP ports, which also carry network control plane information about how MAC addresses can be reached. Common encapsulation models include Virtual Extensible LAN (VXLAN) and Internet Protocol Security (IPsec).
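To see the packet-size cost of encapsulation in concrete terms, here is a small sketch that adds up typical VXLAN header sizes and computes the MTU left for pod traffic, assuming a standard 1500-byte Ethernet MTU on the underlay; the exact numbers vary with the plugin and the underlay network.

package main

import "fmt"

func main() {
	// Typical VXLAN encapsulation overhead on an IPv4 underlay.
	const (
		underlayMTU  = 1500 // MTU of the physical (underlay) network interface
		outerIPv4Hdr = 20   // outer IP header added by the overlay
		outerUDPHdr  = 8    // VXLAN runs over UDP
		vxlanHdr     = 8    // VXLAN header carrying the network identifier
		innerEthHdr  = 14   // Ethernet header of the encapsulated pod frame
	)

	overhead := outerIPv4Hdr + outerUDPHdr + vxlanHdr + innerEthHdr
	podMTU := underlayMTU - overhead

	fmt.Printf("encapsulation overhead: %d bytes\n", overhead) // 50 bytes
	fmt.Printf("MTU available to pods:  %d bytes\n", podMTU)   // 1450 bytes, a value commonly seen on VXLAN pod interfaces
}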

Put simply, this model acts as a network bridge between pods and Kubernetes workers, with the container engine (such as Docker) handling communication inside the pods. It suits use cases where an extended Layer 2 bridge is preferred, but it is sensitive to Layer 3 latency between Kubernetes workers, so keeping latency low between data centers in different geographic regions is essential to avoid network partitioning.

Unencapsulated Networks

This model provides a Layer 3 network that routes packets directly between containers. There is no separate Layer 2 network and no encapsulation overhead, but the Kubernetes workers take on the cost of managing the necessary route distribution. A routing protocol connects the Kubernetes workers, with BGP distributing the routing information for reaching pods. As in the encapsulated model, the container engine (such as Docker) handles communication inside the pods.

In this model, a network router is effectively extended across the Kubernetes workers, advertising how to reach the pods. Unencapsulated networks work better for use cases that require a routed Layer 3 network. Routes on the Kubernetes workers are updated dynamically at the operating system level, keeping latency low.

Conclusion

CNI provides an adaptable and flexible approach to handling container networking requirements. Its plugins manage tasks such as creating network routes for containers and assigning IP addresses. To work with a given container runtime and connect smoothly to outside networks, however, you must follow the requirements and guidelines of the CNI specification.

Container Network Interface (CNI) – FAQs

Why is CNI required in Kubernetes?

The Container Network Interface (CNI) is an important feature of Kubernetes as it allows containers inside a cluster to communicate smoothly. CNI’s plugin-based architecture provides a flexible and adaptable approach to addressing networking requirements.

Can Kubernetes work without CNI?

To implement the Kubernetes network model, you’ll need a CNI plugin. You must use a CNI plugin that is compatible with the CNI specification version 0.4.0 or later. The Kubernetes project recommends using a plugin that is compatible with v1.0.0 of the CNI specification.

Which CNI should I use?

Flannel and Weave Net are easy to set up and configure. Calico offers higher performance because it can use an unencapsulated underlay network with BGP. Cilium takes a different approach, using eBPF for application-layer filtering, and is primarily focused on security.

What is the Container Runtime Interface (CRI)?

The Container Runtime Interface (CRI) is the primary protocol for communication between the kubelet and the container runtime. It defines the main gRPC interface used between these node components.

What is the difference between CNI and CNM?

CNI assumes that the network configuration is in JSON format and can be stored in a file. Unlike CNM, CNI does not require a distributed key-value store such as etcd or Consul. The CNI plugin is expected to assign an IP address to the container’s network interface.


