Packet Queuing and Dropping in Routers

Routers are essential networking devices that direct the flow of data over a network. A router has one or more input and output interfaces, which receive and transmit packets, respectively. Since the router's memory is finite, a router can run out of space to accommodate freshly arriving packets. This occurs when the rate at which packets arrive is greater than the rate at which packets exit the router's memory. In such a situation, new packets are ignored or older packets are dropped. As part of its resource allocation mechanisms, a router must implement some queuing discipline that governs how packets are buffered or dropped when required.


Fig 1: Depiction of a router’s inbound and outbound traffic

Queue Congestion and Queuing Disciplines

Router queues are susceptible to congestion because of the limited buffer memory available to them. Congestion occurs when the rate of ingress traffic exceeds the rate at which traffic can be forwarded on the output interface. The main potential causes of such a situation are:

  • Speed of incoming traffic surpasses the rate of outgoing traffic
  • The combined traffic from all the input interfaces exceeds overall output capacity
  • The router's processor cannot perform forwarding-table lookups fast enough to determine routing paths for the arriving traffic

To manage the allocation of router memory to the packets in such situations of congestion, different disciplines might be followed by the routers to determine which packets to keep and which packets to drop. Accordingly, we have the following important queuing disciplines in routers:

First-In, First-Out Queuing (FIFO)

The default queuing scheme followed by most routers is FIFO. It generally requires little to no configuration on the router. All packets in FIFO are serviced in the same order in which they arrive at the router. When the buffer memory is saturated, new packets attempting to enter the router are dropped (tail drop). Such a scheme, however, is not well suited to real-time applications, especially during congestion. A real-time application such as VoIP, which continually sends packets, may be starved during times of congestion and have all its packets dropped.
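The FIFO behavior described above can be sketched in a few lines. This is a minimal, illustrative model (not real router code): a bounded buffer that drops the newest packet when full and otherwise serves packets in arrival order.

```python
from collections import deque

class FifoQueue:
    """Minimal FIFO buffer with tail drop (illustrative sketch only)."""

    def __init__(self, capacity):
        self.capacity = capacity      # finite buffer memory
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1         # tail drop: the newest packet is discarded
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # Packets leave in exactly the order they arrived
        return self.buffer.popleft() if self.buffer else None

q = FifoQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4"]:
    q.enqueue(pkt)                    # "p4" arrives when the buffer is already full

print(q.dequeue())                    # p1 (first in, first out)
print(q.dropped)                      # 1
```

Note that the drop decision depends only on buffer occupancy, not on what the packet is; this indiscriminate behavior is exactly why FIFO hurts real-time traffic under congestion.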

Priority Queuing (PQ)

In Priority Queuing, instead of using a single queue, the router divides its memory into multiple queues based on some measure of priority. Each queue is then handled in a FIFO manner. The queues are marked, for example, as High, Medium, Normal, or Low priority. Packets from the High queue are always processed before packets from the Medium queue; likewise, packets from the Medium queue are always processed before packets in the Normal queue, and so on. As long as any packets exist in a higher-priority queue, no lower-priority queue's packets are processed. Thus, high-priority packets cut to the front of the line and get serviced first; only once a higher-priority queue is emptied is a lower-priority queue serviced.
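The strict-priority servicing rule can be sketched as follows. The level names are illustrative (borrowed from the description above); the key point is that the scheduler always drains the highest non-empty queue first.

```python
from collections import deque

# Illustrative priority levels, highest first
LEVELS = ["High", "Medium", "Normal", "Low"]

class PriorityQueuing:
    """Sketch of strict Priority Queuing: one FIFO sub-queue per level."""

    def __init__(self):
        self.queues = {level: deque() for level in LEVELS}

    def enqueue(self, packet, level):
        self.queues[level].append(packet)

    def dequeue(self):
        # Strict priority: scan from High down, serve the first non-empty queue
        for level in LEVELS:
            if self.queues[level]:
                return self.queues[level].popleft()
        return None

pq = PriorityQueuing()
pq.enqueue("voip-1", "High")
pq.enqueue("ftp-1", "Low")
pq.enqueue("voip-2", "High")

# All High packets drain before any Low packet is touched
print([pq.dequeue() for _ in range(3)])  # ['voip-1', 'voip-2', 'ftp-1']
```

The `for level in LEVELS` scan is the whole scheme: if High traffic never stops arriving, the loop never reaches the Low queue, which is precisely the starvation problem discussed below.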




Fig 2: Multiple sub-queues used in Priority Queuing Scheme

The obvious advantage of PQ is that higher-priority traffic is always processed first. A significant disadvantage, however, is that the lower-priority queues may receive no service at all: a constant stream of High-priority traffic can starve them completely.

Weighted Fair Queuing (WFQ)

Weighted Fair Queuing (WFQ) dynamically creates queues based on traffic flows and assigns bandwidth to these flows based on priority. The sub-queues are assigned bandwidths dynamically. Suppose three queues exist with bandwidth shares of 20%, 30%, and 50% when all are active. If the 20% queue falls idle, its freed-up bandwidth is allocated among the remaining queues while preserving their original ratio (30:50). Thus, the 30% queue is now allotted 37.5% and the 50% queue is now allotted 62.5% of the bandwidth.
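The redistribution arithmetic above is simply a renormalization of the active queues' weights. A small sketch (queue names and shares are the hypothetical ones from the example):

```python
def redistribute(weights, active):
    """Split 100% of bandwidth among the active queues,
    preserving the ratio of their nominal weights."""
    total = sum(weights[q] for q in active)
    return {q: 100 * weights[q] / total for q in active}

weights = {"A": 20, "B": 30, "C": 50}

# All three queues active: each keeps its nominal share
all_active = redistribute(weights, {"A", "B", "C"})
print(all_active["A"], all_active["B"], all_active["C"])  # 20.0 30.0 50.0

# Queue A goes idle: B and C split 100% in the ratio 30:50
shares = redistribute(weights, {"B", "C"})
print(shares["B"], shares["C"])                           # 37.5 62.5
```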

Traffic flows are distinguished and identified based on various header fields in the packets, such as:

  • Source and Destination IP address
  • Source and Destination TCP (or UDP) port
  • IP Protocol number
  • Type of Service value (IP Precedence or DSCP)
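Classification by these header fields amounts to keying packets on a tuple of field values. The sketch below uses a hypothetical packet representation (plain dictionaries; the field names are assumptions) to show how packets with identical header tuples land in the same sub-queue:

```python
from collections import defaultdict

def flow_key(pkt):
    """Identify a flow by the header fields listed above:
    addresses, ports, IP protocol number, and ToS value."""
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"],
            pkt["protocol"], pkt["tos"])

queues = defaultdict(list)            # one sub-queue per distinct flow

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 5060, "dst_port": 5060, "protocol": 17, "tos": 46},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 5060, "dst_port": 5060, "protocol": 17, "tos": 46},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2",
     "src_port": 40000, "dst_port": 80, "protocol": 6, "tos": 0},
]
for p in packets:
    queues[flow_key(p)].append(p)

print(len(queues))                    # 2 distinct flows -> 2 sub-queues
```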

Fig 3: Dynamically allocated bandwidths for sub-queues in WFQ

Thus, packets are separated into distinct queues based on the traffic flow to which they belong. Once identified, packets belonging to the same traffic flow are inserted into a queue created specifically for that flow. By default, a maximum of 256 queues can be established within the router; however, this number may be raised to 4096. Unlike PQ, the WFQ queues are allotted differing bandwidths based on their priorities. Packets with a higher priority are scheduled before lower-priority packets arriving at the same time.

Effect of Queuing Disciplines on Network

The choice of queuing discipline impacts the performance of the network in terms of the number of dropped packets, latency, etc. When analyzing the effect of choosing the different schemes, we observe significant impacts on various parameters.


Fig 4: Number of packets dropped versus time for different queuing disciplines (Simulation run on Riverbed Modeler)

Measuring the overall packet drop in the network for the three schemes points to the following results:

  • In all three mechanisms, there are no packet drops at the beginning. It takes a finite time for the router's buffer memory to fill up, and since packet drops occur only after the buffer is full, there is an initial period with no drops.
  • In the FIFO scheme, packet drops start after PQ but before WFQ. More prominently, the number of packets dropped is the greatest in the case of FIFO. This is because, once congested, incoming traffic from all applications is dropped indiscriminately.
  • In the PQ scheme, packet drops start the earliest. Since PQ divides the buffer into queues based on priority levels, the capacity of each individual queue is smaller. Assuming a simple division of the memory into an "Important" queue and a "Less Important" queue, each queue's size is halved. Packets directed to these smaller sub-queues fill them up earlier, and hence packet drops begin sooner.
