
Load-Sharing Approach in Distributed System

Load sharing denotes a router's ability to distribute forwarded traffic across multiple paths when more than one path to a destination is available in the routing table. When the available paths are of equal cost, forwarding follows the load-sharing algorithm. In a load-sharing system, all nodes share the overall workload, so the failure of some nodes increases the pressure on the remaining ones. The load-sharing approach ensures that no node is left idle and that every node carries part of the load.

For example, suppose there are two server connections with different bandwidths, one of 500 Mbps and another of 250 Mbps, and there are 2 packets to forward. Instead of sending both packets over the same 500 Mbps connection, one packet is forwarded over the 500 Mbps link and the other over the 250 Mbps link. The goal is not to push the same amount of traffic through both connections but to share the load so that each connection can handle its portion without congestion.
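The example above can be sketched as a simple proportional split. This is a hypothetical illustration, not a real router's algorithm: packets are assigned to links in proportion to link bandwidth, with leftover packets going to the links with the largest fractional shares.

```python
def share_load(num_packets, bandwidths):
    """Assign packets to links in proportion to each link's bandwidth."""
    total = sum(bandwidths)
    # Ideal (fractional) number of packets per link.
    ideal = [num_packets * b / total for b in bandwidths]
    counts = [int(x) for x in ideal]
    # Hand any leftover packets to the links with the largest remainders,
    # so no link is left idle while another is overused.
    leftover = num_packets - sum(counts)
    order = sorted(range(len(bandwidths)),
                   key=lambda i: ideal[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

# Two packets over a 500 Mbps and a 250 Mbps link: one packet each,
# matching the example in the text.
print(share_load(2, [500, 250]))  # -> [1, 1]
```

With more packets the split tracks the bandwidth ratio, e.g. `share_load(3, [500, 250])` gives two packets to the faster link and one to the slower one.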



Why use Load Sharing?

Designing a load-balancing algorithm raises several issues, chiefly the overhead of collecting and exchanging accurate state information across all nodes. Load-sharing algorithms avoid much of this overhead: rather than trying to balance the load exactly, they only try to keep every node from sitting idle.

A load-sharing algorithm comprises several policies: a location policy, a process transfer policy, a state information exchange policy, a load estimation policy, a priority assignment policy, and a migration limiting policy.



1. Location Policy: The location policy determines the sender node or the receiver node of a process that will be moved within the system for load sharing. Depending on which kind of node takes the initiative and searches globally for a suitable partner, location policies are of two kinds: sender-initiated, in which an overloaded node searches for an underloaded node to take one of its processes, and receiver-initiated, in which an idle or underloaded node searches for an overloaded node from which to pull a process.
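The two location policies can be sketched as follows. This is a minimal illustration, assuming node loads are plain process counts and a threshold of 1; a real system would probe nodes over the network rather than scan a list.

```python
def sender_initiated(loads, threshold=1):
    """An overloaded node (the sender) looks for an idle node to take a process."""
    for s, load in enumerate(loads):
        if load > threshold:                  # overloaded sender
            for r, other in enumerate(loads):
                if other == 0:                # idle receiver found
                    loads[s] -= 1
                    loads[r] += 1
                    return (s, r)             # (from, to)
    return None

def receiver_initiated(loads, threshold=1):
    """An idle node (the receiver) looks for an overloaded node to pull from."""
    for r, load in enumerate(loads):
        if load == 0:                         # idle receiver
            for s, other in enumerate(loads):
                if other > threshold:         # overloaded sender found
                    loads[s] -= 1
                    loads[r] += 1
                    return (s, r)
    return None

loads = [3, 0, 1]
print(sender_initiated(loads), loads)  # -> (0, 1) [2, 1, 1]
```

Both policies end in the same transfer here; they differ only in which side takes the initiative and pays the cost of searching.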

2. Process Transfer Policy: This policy takes an all-or-nothing approach. Every node is assigned a threshold value of 1: a node becomes a receiver when it has no process, and a sender when it has more than one process. A drawback is that a node that has just become idle cannot accept a new process immediately, which wastes processing power. To overcome this, a process can be transferred to a node that is expected to become idle soon. To avoid wasting the processing power of the nodes, some load-sharing algorithms also raise the threshold value from 1 to 2.
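The all-or-nothing rule above amounts to a three-way classification of a node by its process count. A minimal sketch, with the threshold as a parameter so the 1-to-2 adjustment mentioned above is visible:

```python
def classify(load, threshold=1):
    """All-or-nothing transfer policy: a node with no process is a
    receiver; a node above the threshold is a sender."""
    if load == 0:
        return "receiver"
    if load > threshold:
        return "sender"
    return "neither"

print(classify(0), classify(3))          # -> receiver sender
# Raising the threshold to 2 keeps a node with 2 processes from
# becoming a sender, so it holds a small reserve of work.
print(classify(2), classify(2, threshold=2))  # -> sender neither
```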

3. State Information Exchange Policy: In a load-sharing algorithm, nodes are not required to exchange state information periodically; a node only needs to make its state known when it becomes underloaded or overloaded. Two sub-policies are therefore used: broadcast when the state changes, in which a node broadcasts a message as soon as it becomes underloaded or overloaded, and poll when the state changes, in which a node whose state has changed polls other nodes one by one to find a suitable partner.
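The broadcast sub-policy can be sketched as follows. This is an illustrative model, not a real protocol: a node announces itself to its peers only when its state actually changes, not on every load update, which is what keeps message traffic low.

```python
class Node:
    """Toy node that broadcasts only on a state change."""

    def __init__(self, name):
        self.name = name
        self.load = 1        # one process: neither overloaded nor underloaded
        self.inbox = []      # messages received from peers

    def state(self):
        if self.load == 0:
            return "underloaded"
        if self.load > 1:
            return "overloaded"
        return "normal"

    def set_load(self, load, peers):
        old_state = self.state()
        self.load = load
        # Broadcast only when the state changes, not on every load update.
        if self.state() != old_state:
            for p in peers:
                p.inbox.append((self.name, self.state()))

a, b = Node("a"), Node("b")
a.set_load(2, [b])   # normal -> overloaded: broadcast
a.set_load(3, [b])   # still overloaded: no message
print(b.inbox)       # -> [('a', 'overloaded')]
```

The polling sub-policy would replace the broadcast loop with one-by-one queries to peers until a suitable partner is found.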

4. Load Estimation Policy: Load-sharing algorithms aim only to keep nodes from being idle, so it is sufficient to know whether a node is busy or idle. Consequently, these algorithms typically use the simplest load estimation policy: counting the total number of processes on a node.
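Because only busy-versus-idle matters, the estimator reduces to a process count. A minimal sketch (the process names are hypothetical):

```python
def is_idle(processes):
    """Simplest load estimate: a node is idle when its process count is zero."""
    return len(processes) == 0

node_a = ["web_server", "db_worker"]  # hypothetical running processes
node_b = []
print(is_idle(node_a), is_idle(node_b))  # -> False True
```

More elaborate estimates (CPU utilization, queue lengths) are unnecessary here, which is exactly the simplification load sharing buys over load balancing.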

5. Priority Assignment Policy: This policy uses a rule to decide the relative priority of local and remote (migrated) processes on a node. The common rules are: selfish, in which local processes get higher priority than remote ones; altruistic, in which remote processes get higher priority; and intermediate, in which priority depends on the relative numbers of local and remote processes on the node.
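The three rules can be sketched as one decision function. This is an illustrative simplification: it only says which group of processes a node favours, not how a real scheduler would order them.

```python
def priority(rule, n_local, n_remote):
    """Decide which group of processes a node favours under each rule."""
    if rule == "selfish":
        return "local"       # local processes always run first
    if rule == "altruistic":
        return "remote"      # migrated processes always run first
    if rule == "intermediate":
        # The larger group gets priority (ties favour local processes).
        return "local" if n_local >= n_remote else "remote"
    raise ValueError(f"unknown rule: {rule}")

print(priority("selfish", 1, 5))        # -> local
print(priority("intermediate", 2, 3))   # -> remote
```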

6. Migration Limiting Policy: This policy decides the total number of times a process may migrate. One of two strategies may be used: uncontrolled, in which there is no limit on the number of migrations, or controlled, in which a migration-count limit (often 1) prevents a process from bouncing endlessly between nodes.
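Both strategies can be captured with a single optional limit. A minimal sketch, where `limit=None` models the uncontrolled strategy and a numeric limit models the controlled one:

```python
class Process:
    """Toy process that tracks how many times it has migrated."""

    def __init__(self, limit=None):
        self.migrations = 0
        self.limit = limit  # None -> uncontrolled, int -> controlled

    def can_migrate(self):
        return self.limit is None or self.migrations < self.limit

    def migrate(self):
        if not self.can_migrate():
            raise RuntimeError("migration limit reached")
        self.migrations += 1

p = Process(limit=1)   # controlled: may move only once
p.migrate()
print(p.can_migrate())  # -> False
```

A limit of 1 is a common controlled choice, since it still lets an overloaded node shed work while ruling out processes ping-ponging between nodes.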
