Passive Queue Management in Router

Last Updated : 09 Mar, 2022

Congestion occurs when packets arrive faster than they can be forwarded. A router has input ports and output ports, so congestion can arise at either side, depending on the speed of the switching fabric.

Congestion at Input Ports

Suppose a router has N input ports and N output ports, and one packet arrives at each input port per unit of time, so N packets arrive per unit of time in total. If the switching fabric is slower than N packets per unit of time, it cannot forward all of them to the output ports. Say the fabric processes N/2 packets; the remaining N/2 packets are queued at some of the input ports. If packets keep arriving at this rate, the input buffers soon fill up, a congestion state sets in, and packets get dropped.
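The arithmetic above can be sketched with a toy Python model (all names and numbers here are illustrative assumptions, not router internals):

```python
# Toy model of input-port congestion: N packets arrive per tick, the
# switching fabric forwards only FABRIC_RATE of them, and the excess
# queues at the input buffers until the (finite) buffer overflows.
N = 8                 # packets arriving per tick (one per input port)
FABRIC_RATE = N // 2  # fabric forwards only N/2 packets per tick
BUFFER_SIZE = 20      # total input-buffer capacity (packets)

queued = dropped = 0
for tick in range(10):
    backlog = queued + N - FABRIC_RATE      # carryover + arrivals - forwarded
    dropped += max(0, backlog - BUFFER_SIZE)  # overflow is lost
    queued = min(backlog, BUFFER_SIZE)

print(queued, dropped)  # 20 20 — buffer saturated, steady packet loss
```

The backlog grows by N/2 every tick, so once the buffer saturates, every subsequent tick drops N/2 packets.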


Congestion at Output Ports

Now suppose the switching fabric forwards N packets per unit of time. Even in the worst case, when N packets arrive at the input ports, the fabric processes and forwards all N of them to the output ports, so congestion does not occur at the input side. Note, however, that an input port is not directly tied to one output port: if multiple packets are destined for the same output port, congestion builds at that port. Suppose all N packets are destined for output port 1. In one unit of time, the port transmits 1 packet and queues the remaining N-1 in its buffer. When another N packets arrive for port 1, it again transmits 1 packet and queues N-1. Soon the buffer is full and packet loss starts; this is the congestion state at the output ports.
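The same contention can be modeled in a few lines of Python (port count, buffer size, and tick count are illustrative assumptions):

```python
# Toy model of output-port congestion: all N packets each tick target
# output port 1, which transmits only one packet per tick; the other
# N-1 queue up until the port's buffer overflows.
from collections import deque

N = 4                      # packets arriving per tick, all for port 1
BUFFER_SIZE = 10           # port 1's buffer capacity (packets)
port1 = deque()
dropped = 0

for tick in range(8):
    for pkt in range(N):               # N packets arrive for port 1
        if len(port1) < BUFFER_SIZE:
            port1.append((tick, pkt))
        else:
            dropped += 1               # buffer full: packet lost
    if port1:
        port1.popleft()                # port transmits 1 packet per tick

print(len(port1), dropped)  # 9 15 — buffer near-full, losses every tick
```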


Queue Management Algorithms

Queue management algorithms are also known as queue disciplines (qdiscs). They can be classified into two types.

  1. Passive Queue Management (e.g., FIFO)
  2. Active Queue Management (e.g., Random Early Detection)

Passive Queue Management (PQM) algorithms are reactive in nature, i.e., they act only 'after' the queue is full. They are easy to deploy, but it is difficult to provide good queue control with them. Active Queue Management (AQM) algorithms are proactive in nature, i.e., they act 'before' the queue is full. They range from easy to moderately difficult to deploy, and they provide good queue control.

Passive Queue Management

1. Drop Tail

It drops packets from the 'tail' of the queue. When the queue is full and packets are still arriving, the incoming packets are dropped because there is no space left; since the drops happen at the tail, the scheme is called Drop Tail. It behaves like a simple FIFO queue: packets that arrived first are queued, and packets arriving later are dropped.
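This behaviour can be sketched in a few lines of Python (the class and names are illustrative, not a real router API):

```python
from collections import deque

# Minimal sketch of a Drop Tail (FIFO) queue with a fixed capacity.
class DropTailQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1      # queue full: drop the arriving packet
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

q = DropTailQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    q.enqueue(pkt)
print(list(q.queue))   # ['p1', 'p2', 'p3'] — earliest arrivals kept
print(q.dropped)       # 2 — p4 and p5 dropped at the tail
```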


2. DropHead

It drops packets from the 'head' of the queue. When the queue is full but packets keep arriving, it makes space for the newcomers by dropping packets from the head; this is also called Drop Front. DropHead improves fairness. Long-lasting flows such as BitTorrent consume the majority of the bandwidth and slow down other applications; their packets keep filling the buffer, so under DropTail the packets of short-lasting flows would be dropped. DropHead instead makes space for them by dropping packets of the long-lasting flows, doing justice to the short flows.
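A minimal sketch, mirroring the Drop Tail example but evicting from the head (class and names are illustrative):

```python
from collections import deque

# Minimal sketch of a DropHead (Drop Front) queue: on overflow, the
# oldest packet is evicted so the newly arriving packet always gets in.
class DropHeadQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.queue.popleft()   # drop from the head to make room
            self.dropped += 1
        self.queue.append(packet)  # the newcomer is always admitted

q = DropHeadQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    q.enqueue(pkt)
print(list(q.queue))   # ['p3', 'p4', 'p5'] — newest arrivals kept
print(q.dropped)       # 2 — p1 and p2 evicted from the head
```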


3. Random Drop

As the name suggests, it drops a packet from a random 'position' in the queue: a random index is generated and the packet at that index is dropped. This gives every packet a fair chance to stay in the queue. What is the advantage of Random Drop? If we drop packets from the head, we hurt the long-lasting flows; if we drop packets from the tail, we hurt the time-sensitive flows. To be fair to both types of flows, we randomly select one packet and drop it, letting chance decide which packet is dropped and which flow is hurt.
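A minimal sketch (the class, seed, and names are illustrative assumptions):

```python
import random
from collections import deque

# Minimal sketch of a Random Drop queue: on overflow, a victim is
# chosen uniformly at random from the queued packets and removed.
class RandomDropQueue:
    def __init__(self, capacity, seed=None):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0
        self.rng = random.Random(seed)   # seeded for reproducibility

    def enqueue(self, packet):
        self.queue.append(packet)
        if len(self.queue) > self.capacity:
            victim = self.rng.randrange(len(self.queue))
            del self.queue[victim]       # drop the packet at a random index
            self.dropped += 1

q = RandomDropQueue(capacity=3, seed=42)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    q.enqueue(pkt)
print(len(q.queue), q.dropped)  # 3 2 — queue at capacity, two random victims
```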


Limitations of PQM Algorithms

PQM suffers from several limitations. It does not control congestion: it cannot prevent packet drops, and it comes into action only once the situation is already bad. Because it is reactive in nature, it suffers from a number of drawbacks; three major ones are listed below.

1. Global Synchronization

Suppose multiple TCP flows start at different times, say time=0, time=5, and time=15 seconds. Each flow ramps up its congestion window (cwnd) using the slow-start algorithm. DropTail treats all flows equally and gives no priority to any of them, so when the queue overflows it drops packets of all the flows at the same time. All flows therefore reduce their congestion window at the same time; this is the first moment they become synchronized. Subsequently, all TCP flows increase their congestion window at the same time as well. When every flow halves its cwnd simultaneously, the link is underutilized; when every flow grows its cwnd simultaneously, the link is overutilized. The result is frequent alternating periods of link 'overutilization' and 'underutilization', which adds jitter (variation in delay).
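A toy AIMD sketch makes the oscillation visible (the link capacity, flow counts, and update rules are simplified assumptions, not a faithful TCP model):

```python
# When a shared loss event halves every flow's cwnd at once, the total
# offered load swings between under- and over-utilization of the link.
LINK = 100                      # link capacity (packets per tick)
flows = [10, 10, 10, 10]        # initial cwnd of 4 synchronized flows
loads = []

for tick in range(20):
    total = sum(flows)
    loads.append(total)
    if total > LINK:            # shared loss event: all flows halve together
        flows = [c // 2 for c in flows]
    else:                       # additive increase: all flows grow together
        flows = [c + 1 for c in flows]

print(min(loads), max(loads))  # 40 104 — load oscillates around capacity
```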

2. Lock Out

DropTail allows a few flows to monopolize the queue space. These are typically long-lasting flows (a.k.a. 'elephant' flows). Short flows (a.k.a. 'mice' flows) do not get sufficient space in the queue due to the large occupancy of the long-lasting flows, so their packets get dropped. This phenomenon is called 'lockout'. DropHead mitigates this problem, but not completely: it drops packets from the head of the queue and lets one packet of a short flow enter at a time, but the space it frees is very small compared to the size of the queue, and the majority of the space is still held by the long-lasting flows. Since short flows still cannot get space in the queue, their packets keep getting dropped; this is the lockout.

3. Bufferbloat

Memory prices have fallen sharply. This problem existed earlier as well, but it never became serious because buffers were small and memory was expensive; where 256 MB of RAM was a large amount in 2005, 8 GB is commonplace today, so buffering capacity has grown accordingly. Excessive buffering leads to high queuing delays; it has been reported that queuing delays sometimes grow so large that the TCP retransmission timeout (RTO) expires. Time-sensitive applications are the worst affected by bufferbloat.
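A back-of-the-envelope calculation shows why deep buffers hurt (the buffer size and link rate below are illustrative assumptions):

```python
# Queuing delay of a full buffer is its size divided by the drain rate:
# a deep buffer draining over a slow uplink holds packets for seconds.
buffer_bytes = 8 * 1024 * 1024      # 8 MB of buffered packets
link_rate_bps = 10_000_000          # 10 Mbit/s uplink

delay_s = (buffer_bytes * 8) / link_rate_bps
print(round(delay_s, 2))            # 6.71 — seconds of queuing delay
```

Several seconds of standing delay is far beyond typical TCP RTTs, which is how the RTO expirations mentioned above can arise.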


