Congestion Control in Datagram Subnets
In this article, we will discuss different approaches to congestion control in datagram subnets, explain each approach in detail, and note its drawbacks. Let's discuss them one by one.
Pre-requisite – Congestion control
Congestion control in datagram subnets :
Some congestion control approaches that can be used in datagram subnets (and also in virtual-circuit subnets) are given below.
- Choke packets
- Load shedding
- Jitter control
Approach-1: Choke Packets :
- This approach can be used in virtual-circuit as well as datagram subnets. In this technique, each router associates a real variable with each of its output lines.
- This real variable, say u, has a value between 0 and 1 and indicates the utilization of that line. If the value of u rises above a threshold, the output line enters a warning state.
- The router checks each newly arriving packet to see if its output line is in the warning state. If it is, the router sends a choke packet back to the source. Several variations of the congestion control algorithm have been proposed, depending on the threshold values.
- Depending on the threshold crossed, a choke packet can carry a mild warning, a stern warning, or an ultimatum. Another variation uses queue length or buffer utilization instead of line utilization as the deciding factor.
The problem with the choke packet technique is that the action taken by the source host on receiving a choke packet is voluntary, not compulsory.
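The mechanism above can be sketched in a few lines of Python. This is a minimal illustration, not a real router implementation: the threshold, the exponentially weighted moving average (EWMA) used to smooth the utilization estimate, and all names (`OutputLine`, `forward`, the example source address) are assumptions made for the example.

```python
# Hypothetical sketch of choke-packet generation. Each output line keeps a
# smoothed utilization estimate u in [0, 1]; crossing a threshold puts the
# line into the warning state.

WARNING_THRESHOLD = 0.75  # assumed threshold for entering the warning state
ALPHA = 0.9               # assumed EWMA smoothing factor

class OutputLine:
    def __init__(self):
        self.u = 0.0  # smoothed utilization estimate, between 0 and 1

    def sample(self, busy: bool) -> None:
        """Fold an instantaneous sample f (1 = line busy, 0 = idle) into u."""
        f = 1.0 if busy else 0.0
        self.u = ALPHA * self.u + (1 - ALPHA) * f

    def in_warning_state(self) -> bool:
        return self.u > WARNING_THRESHOLD

def forward(line: OutputLine, packet_source: str, send_choke) -> None:
    """On each newly arriving packet, check the chosen output line first."""
    if line.in_warning_state():
        send_choke(packet_source)  # ask the source to reduce its rate
    # ...then queue the packet on the output line as usual

# usage: a long busy burst drives u toward 1 and triggers a choke packet
line = OutputLine()
for _ in range(50):
    line.sample(busy=True)
choked = []
forward(line, "10.0.0.7", choked.append)
print(choked)  # ['10.0.0.7']
```

Note that the choke packet here is only a callback; whether the source actually slows down is, as the text says, entirely up to the source host.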
Approach-2: Load Shedding :
- Admission control, choke packets, and fair queuing are all techniques suitable for congestion control. But if these techniques cannot make the congestion disappear, the load shedding technique must be used.
- The principle of load shedding states that when a router is inundated with packets it cannot handle, it should simply throw packets away.
- A router flooded with packets due to congestion can drop packets at random, but there are better ways of doing this.
- The policy for dropping a packet depends on the type of traffic. For file transfer, an old packet is more important than a new one; in contrast, for multimedia, a new packet is more important than an old one. Accordingly, the policy for file transfer is called wine (old is better than new) and the policy for multimedia is called milk (new is better than old).
- An intelligent discard policy can be chosen depending on the application. To implement such an intelligent discard policy, cooperation from the sender is essential.
- Applications should mark their packets with priority classes to indicate how important they are.
- If this is done, then when packets must be discarded, routers first drop packets from the lowest class (i.e. the least important packets), then from the next lowest class, and so on. One or more header bits are required to mark the priority class of a packet. In every ATM cell, 1 bit in the header is reserved for marking priority: each ATM cell is labeled as either low priority or high priority.
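The priority-class discard policy described above can be sketched as follows. This is an illustrative toy, not a production queueing discipline; the class names, the two-class scheme, and the `Router`/`Packet` API are all assumptions made for the example.

```python
# Illustrative sketch of priority-aware load shedding: when the queue is
# full, shed the lowest-priority packet currently queued, provided the
# incoming packet outranks it; otherwise drop the incoming packet itself.

class Packet:
    def __init__(self, data, priority):
        self.data = data
        self.priority = priority  # higher number = more important

class Router:
    def __init__(self, capacity):
        self.capacity = capacity  # maximum packets the queue can hold
        self.queue = []

    def enqueue(self, pkt) -> bool:
        """Returns True if pkt was accepted, False if it was shed."""
        if len(self.queue) < self.capacity:
            self.queue.append(pkt)
            return True
        # Queue full: find the least important packet already queued.
        victim = min(self.queue, key=lambda p: p.priority)
        if victim.priority < pkt.priority:
            self.queue.remove(victim)  # shed from the lowest class first
            self.queue.append(pkt)
            return True
        return False  # the newcomer is itself the least important: drop it

# usage: with capacity 2, the priority-0 packet is shed to admit priority 1
r = Router(capacity=2)
r.enqueue(Packet("a", priority=0))
r.enqueue(Packet("b", priority=2))
accepted = r.enqueue(Packet("c", priority=1))
print(accepted, [p.data for p in r.queue])  # True ['b', 'c']
```

The same skeleton expresses the wine/milk policies: for "wine" the victim would be the newest packet of equal class, for "milk" the oldest.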
Approach-3: Jitter Control :
- Jitter may be defined as the variation in delay among packets belonging to the same flow. Real-time audio and video cannot tolerate jitter; on the other hand, jitter does not matter if the packets are carrying information contained in a file.
- For audio and video transmission, it does not matter whether packets take 20 ms or 30 ms to reach the destination, provided the delay remains constant.
- The quality of sound and video is degraded if different packets experience different delays. Therefore, a practical requirement might be that 99% of packets be delivered with a delay between 24.5 ms and 25.5 ms.
- When a packet arrives at a router, the router checks whether the packet is ahead of or behind its schedule, and by how much.
- This information is stored in the packet and updated at every hop. If the packet is ahead of schedule, the router holds it slightly longer; if it is behind schedule, the router tries to send it out as quickly as possible. This keeps the delay per packet close to constant and reduces jitter.
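The per-hop hold-or-expedite logic above can be sketched like this. It is a simplified model under stated assumptions: the packet's expected arrival time is available as a plain number, and the `JitterSmoother` class, its method names, and the example timestamps are all invented for illustration.

```python
# A minimal sketch of per-hop jitter smoothing: packets ahead of schedule
# are held in a queue until their expected time; packets on time or behind
# schedule are forwarded immediately.

import heapq

class JitterSmoother:
    def __init__(self):
        self.hold = []  # min-heap of (release_time, packet)

    def on_arrival(self, now, packet, expected_time, send):
        if now < expected_time:
            # Ahead of schedule: hold the packet a bit longer.
            heapq.heappush(self.hold, (expected_time, packet))
        else:
            # On time or behind schedule: send it out as soon as possible.
            send(packet)

    def tick(self, now, send):
        # Release any held packets whose expected time has now passed.
        while self.hold and self.hold[0][0] <= now:
            _, packet = heapq.heappop(self.hold)
            send(packet)

# usage: an early packet waits, a late packet is expedited past it
sent = []
s = JitterSmoother()
s.on_arrival(now=10.0, packet="early", expected_time=25.0, send=sent.append)
s.on_arrival(now=30.0, packet="late", expected_time=25.0, send=sent.append)
s.tick(now=26.0, send=sent.append)
print(sent)  # ['late', 'early']
```

Holding early packets spends buffer space to buy a narrower delay distribution, which is exactly the trade-off real-time audio and video want.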