
Deadlock-Free Packet Switching


In computer networks, deadlock is one of the most serious system failures. A deadlock is a situation in which packets are stuck in a loop and can never reach their destination, no matter what sequence of moves is performed. Deadlocks must be detected and carefully handled to avoid system failure.

Most networks today are packet-switching networks, in which a message is divided into packets and the packets are routed from source to destination.

Store-and-forward deadlocks:

This is the most widely discussed deadlock. An intermediate node may receive packets from different sources; it must store them in its local buffer and forward them to the next node according to its routing table. Because the local buffer is of finite size, a deadlock can occur.

Example: a, b, c, d are four nodes, each with a buffer size of 4. Suppose node 'a' is sending packets to 'd' via 'b', and node 'd' is also sending packets to 'a' via 'c'.

Deadlock: To reach node 'd', all packets stored in 'b' must be transferred to 'c'; similarly, to reach 'a', all packets stored in 'c' must reach 'b'. But neither node has an empty buffer, so no packet can ever move.

Store and Forward Deadlock

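The four-node scenario above can be sketched in a few lines of Python. The node names and the buffer size of 4 follow the example; everything else (the `can_forward` helper, the packet labels) is purely illustrative:

```python
BUFFERS = 4

# Buffer occupancy once both streams have filled the intermediate nodes:
buffers = {
    "b": ["to-d"] * BUFFERS,  # b holds only packets heading to d (via c)
    "c": ["to-a"] * BUFFERS,  # c holds only packets heading to a (via b)
}

def can_forward(dst):
    """A packet may move only if the next node has a free buffer."""
    return len(buffers[dst]) < BUFFERS

# b -> c and c -> b are both blocked: a circular wait, i.e. a deadlock.
print(can_forward("c"), can_forward("b"))  # False False
```

Neither move is allowed, and since no packet is consumed, no buffer will ever be freed.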

Assumptions for Solving Store and Forward Deadlock:

Some assumptions for solving the issue are as follows:

Model: The network is a graph G = (V, E), where V is the set of processors and E is the set of communication links. Each node has B buffers, and k denotes the length of the longest route taken by any packet in G.

Moves: We view a distributed computation as a sequence of moves, i.e., something happens in the environment and a node reacts to it.

  1. Generation: A node creates or receives a new packet p. This node is called the source of the packet.
  2. Forwarding: The packet is forwarded to the next node on its route, provided that node has an empty buffer.
  3. Consumption: When the packet reaches its destination, it is removed from the buffer.
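The three moves can be modeled as methods on a node with finitely many buffers. This is an illustrative Python sketch; the class and method names are our own, not part of the model's formal definition:

```python
class Node:
    """A node with a finite buffer pool supporting the three moves."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # B: number of buffers at this node
        self.buffers = []

    def generate(self, packet):
        """Generation: create/admit a new packet at its source node."""
        if len(self.buffers) < self.capacity:
            self.buffers.append(packet)
            return True
        return False

    def forward(self, packet, next_node):
        """Forwarding: move the packet to the next node on its route,
        allowed only if that node has an empty buffer."""
        if packet in self.buffers and len(next_node.buffers) < next_node.capacity:
            self.buffers.remove(packet)
            next_node.buffers.append(packet)
            return True
        return False

    def consume(self, packet):
        """Consumption: remove the packet at its destination."""
        if packet in self.buffers:
            self.buffers.remove(packet)
            return True
        return False
```

A packet's life is then a generation, zero or more forwardings, and one consumption.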

Requirements for Packet Switching:

  1. A controller is an algorithm that permits or refuses the various moves in the network.
  2. The consumption of a packet (at its destination) is always allowed.
  3. The generation of a packet in a node whose buffers are all empty is always allowed.
  4. The controller uses only local information.
  5. A controller is said to be deadlock-free if it protects the network from deadlock.
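The contract these requirements impose can be sketched as two small predicates. The representation of a node as a dict and the `local_rule` hook are illustrative assumptions; the always-allowed cases are the ones listed above:

```python
def allows_consumption(node, packet):
    # Requirement 2: consumption at the destination is always permitted.
    return True

def allows_generation(node, packet, local_rule):
    # Requirement 3: generation at a node with all buffers empty is
    # always permitted; otherwise defer to the controller's local rule,
    # which may use only local information (requirement 4).
    if not node["buffers"]:
        return True
    return local_rule(node, packet)

empty = {"buffers": [], "capacity": 4}
print(allows_generation(empty, "p", lambda n, p: False))  # True
```

Even a controller whose local rule refuses everything must still admit a packet at an empty node, and must always let a packet be consumed.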

Solution for Store and Forward Deadlock:

To resolve this deadlock we have two solutions:

  1. Structured Buffer Pools
  2. Unstructured Buffer Pools

Structured Buffer Pools: 

Methods using structured buffer pools identify, for each node and packet, a specific buffer that must be used when the packet is generated or received. If that buffer is occupied, the packet cannot be accepted.

1. Buffer Graph (BG): A structured solution to the deadlock. It is a virtual directed graph defined over the buffers of the network such that:

  1. the buffer graph has no directed cycle, i.e., it is acyclic;
  2. for every routing path, there is a corresponding path in the buffer graph.

[NOTE: The path a packet follows through the network is determined by the routing algorithm; the buffer-management strategy determines in which buffer the packet is stored at the next node.]
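One classic way to build such a graph (an assumption here; the article does not fix a particular construction) is the hops-so-far scheme: give every node k+1 buffer classes and store a packet that has made i hops in class i. Every buffer-graph edge then goes from class i at one node to class i+1 at a neighbor, so the index strictly increases along any path and no directed cycle can exist:

```python
k = 3  # length of the longest route in this illustrative network
links = [("a", "b"), ("b", "c"), ("c", "d"),
         ("b", "a"), ("c", "b"), ("d", "c")]

# Buffer-graph edges: (node, class i) -> (next node, class i + 1).
buffer_graph = [((u, i), (v, i + 1))
                for (u, v) in links
                for i in range(k)]

# Acyclicity: every edge strictly increases the class index.
assert all(i2 == i1 + 1 for (_, i1), (_, i2) in buffer_graph)
```

Any route of at most k hops maps to a path in this graph (class 0, 1, 2, ...), satisfying both buffer-graph conditions.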

2. Buffer Graph Controller:

  • Generation of a packet is allowed only if the buffer designated for it in its source node is empty.
  • Forwarding of a packet is allowed only if the next buffer on its buffer-graph path, in the next node, is empty.

With this set of rules, deadlock is prevented and packets are never lost.
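Combining the controller rules with the hops-so-far buffer graph gives a very small sketch (one buffer per class here, purely for illustration):

```python
k = 3  # longest route length

def new_node():
    # One buffer per class: class i holds a packet that has made i hops.
    return {i: None for i in range(k + 1)}

def allow_generate(node):
    # A freshly generated packet must enter the class-0 buffer.
    return node[0] is None

def allow_forward(next_node, hops_made):
    # Forwarding targets the next buffer class at the next node.
    return next_node[hops_made + 1] is None

u, v = new_node(), new_node()
assert allow_generate(u)
u[0] = "p"                # generate p at u (0 hops made so far)
assert allow_forward(v, 0)  # v's class-1 buffer is free, so p may move
```

Because every move follows an edge of an acyclic buffer graph, a chain of blocked packets can never close into a cycle.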

Disadvantage: Inefficient use of the storage buffers, since each buffer is reserved for a specific class of packets.

Unstructured Buffer Pools: 

In methods using unstructured buffer pools, all buffers are equal; the method only prescribes whether or not a packet can be accepted, but does not determine in which buffer it must be placed.

1. Forward-count Controller (FC): An unstructured solution. For a packet p, let s_p be the number of hops p still has to make to reach its destination, and let f_u be the number of free buffers in node u; the controller accepts p in u if s_p < f_u.
If B denotes the number of buffers in each node and k the maximum number of hops on any route, then B > k ensures a deadlock-free state.
Hence the forward-count controller is a deadlock-free controller.
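The forward-count rule reduces to a one-line predicate; the concrete numbers below are illustrative:

```python
def fc_accepts(s_p, f_u):
    """Forward-count rule: accept packet p at node u iff the number of
    hops still to make (s_p) is less than the free buffers (f_u)."""
    return s_p < f_u

# With B = 5 buffers per node and k = 4 (so B > k), a packet with
# 4 hops to go is accepted at an empty node but refused once a
# buffer is taken:
print(fc_accepts(s_p=4, f_u=5))  # True
print(fc_accepts(s_p=4, f_u=4))  # False
```

The closer a packet is to its destination (small s_p), the fuller a node may be while still accepting it.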

2. Forward-State Controller (FS): It uses more information about the packets already held by the receiving node: for each possible number of remaining hops, it considers how many buffered packets still have that many hops to make, and it accepts a packet only if the node remains in a safe state. The forward-state controller is deadlock-free, and it accepts every packet that FC accepts.

3. Backward-count Controller (BC): A variant of the forward-count controller that counts the hops a packet has already made rather than those it still has to make. For a packet p, let t_p be the number of hops it has made from its source; the controller accepts p in node u only if t_p > k - f_u.
Applying this rule prevents deadlock in the network, so the backward-count controller is also a deadlock-free controller.
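The backward-count rule is the mirror image of FC, again a one-line predicate (numbers below are illustrative):

```python
def bc_accepts(t_p, k, f_u):
    """Backward-count rule: accept packet p at node u iff the hops
    already made (t_p) exceed k minus the free buffers (f_u)."""
    return t_p > k - f_u

# With k = 4: a packet 3 hops from its source is accepted at a node
# with 2 free buffers (3 > 4 - 2), but a fresh packet is not:
print(bc_accepts(t_p=3, k=4, f_u=2))  # True
print(bc_accepts(t_p=1, k=4, f_u=2))  # False
```

The farther a packet has already traveled (large t_p), the fuller a node may be while still accepting it.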

4. Backward-State Controller (BS): Analogous to the forward-state controller, the backward-state controller uses more information about the packets already held by the receiving node, based on the number of hops each packet has made.

Relations Between FC, BC, FS, BS:

  1. FC ⊂ FS, i.e., every packet accepted by FC is also accepted by FS
  2. BC ⊂ BS
  3. BC ⊂ FC
  4. BS ⊂ FS
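The relation BC ⊂ FC can be checked by brute force: since a packet's hops made plus hops remaining never exceed k (t_p + s_p <= k), any packet BC accepts (t_p > k - f_u) satisfies s_p <= k - t_p < f_u, so FC accepts it too. A small exhaustive check over an assumed k:

```python
def fc_accepts(s_p, f_u):
    return s_p < f_u

def bc_accepts(t_p, k, f_u):
    return t_p > k - f_u

k = 6
for t_p in range(k + 1):
    for s_p in range(k + 1 - t_p):     # enforce t_p + s_p <= k
        for f_u in range(1, k + 2):
            if bc_accepts(t_p, k, f_u):
                assert fc_accepts(s_p, f_u)
print("BC subset of FC verified for k =", k)
```

The reverse inclusion fails: FC accepts a fresh packet one hop from its destination at a nearly full node, while BC refuses it.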
Lattice diagram showing relationship between FC,BC,FS,BS


Some other deadlocks are:

1. Progeny deadlock may arise when a packet p in the network can create another packet q, which violates the assumption that the network always allows forwarding and consumption of a packet. Progeny deadlock can be avoided by using multiple levels of the buffer graph.

2. Copy-release deadlock may arise when the source holds a copy of the packet until an (end-to-end) acknowledgment for the packet is received from the destination. Two extensions of the buffer-graph principle are given by which copy release deadlock can be avoided.

3. Pacing deadlock may arise when the network contains nodes, with limited internal storage, that may refuse to consume messages until some other messages have been generated. Pacing deadlock can be avoided by distinguishing between packets subject to pacing and pacing responses.

4. Reassembly deadlock may arise in networks where large messages are divided into smaller packets for transmission and no packet can be removed from the network until all packets of the message have reached the destination. Reassembly deadlocks can be avoided by using separate groups of buffers for packet forwarding and reassembly.



Last Updated : 22 Feb, 2022