
Load-Sharing Approach in Distributed System

Load sharing denotes the ability of a router to distribute the forwarding of traffic across multiple paths when more than one path is available in the routing table. When the available paths are of equal cost, forwarding follows the load-sharing algorithm. In a load-sharing system, all nodes share the overall workload, so the failure of some nodes increases the pressure on the remaining nodes. The load-sharing approach ensures that no node is kept idle, so that every node shares part of the load.

For example, suppose there are two server connections with different bandwidths, one of 500 Mbps and another of 250 Mbps, and two packets to forward. Instead of sending both packets over the same 500 Mbps connection, one packet is forwarded over the 500 Mbps connection and the other over the 250 Mbps connection. The goal here is not to push the same amount of traffic through both connections but to share the load so that each connection carries an amount it can handle without congestion.
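
To make the idea concrete, here is a minimal sketch of bandwidth-proportional load sharing in Python. The 500 Mbps and 250 Mbps capacities come from the example above; the pick_link helper and the packet sizes are illustrative assumptions, not a real router's forwarding logic.

```python
links = [
    {"name": "link-A", "capacity_mbps": 500, "queued_mbps": 0},
    {"name": "link-B", "capacity_mbps": 250, "queued_mbps": 0},
]

def pick_link(links):
    """Choose the least-utilised link (queued traffic relative to capacity),
    so traffic is shared in proportion to each link's bandwidth."""
    return min(links, key=lambda l: l["queued_mbps"] / l["capacity_mbps"])

# Two packets, as in the example: the first goes to link-A, and the
# second to link-B, because link-A is now the more utilised path.
for pkt_mbps in (100, 100):
    link = pick_link(links)
    link["queued_mbps"] += pkt_mbps
    print(f"packet ({pkt_mbps} Mbps) -> {link['name']}")
```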

Why use Load Sharing?

There are several issues in designing load-balancing algorithms, and the load-sharing algorithm is used to overcome them. The issues are:

  • Load estimation: It decides how to measure the workload of a node in a distributed system.
  • Process transfer: It decides whether a process should be executed locally or on a remote node.
  • State information exchange: It decides how load information can be exchanged among the nodes of the system.
  • Location policy: It decides how a destination node is selected during process migration.
  • Priority assignment: It decides the priority of execution of a set of local and remote processes on a particular node.
  • Migration limiting policy: It decides the total number of times a process can migrate from one node to another.

The load-sharing algorithm includes policies such as the location policy, process transfer policy, state information exchange policy, load estimation policy, priority assignment policy, and migration limiting policy.

1. Location Policy: The location policy determines the sender node or the receiver node of a process that is to be moved within the system for load sharing. Depending on which type of node takes the initiative and searches globally for a suitable node for the process, location policies are of the following kinds:

  • Sender-initiated policy: Here the sender node of the process takes the initiative to decide where the process is to be sent. Heavily loaded nodes search for lightly loaded nodes to which part of the workload can be transferred. Whenever a node's load rises above the threshold value, it either broadcasts a message or randomly probes other nodes one by one to find a lightly loaded node that can accept one or more of its processes. If no suitable receiver node is found, the node on which the process originated must execute the process itself (see the sketch after this list).
  • Receiver-initiated policy: Here the receiver node of the process takes the initiative to decide from where to accept a process. Lightly loaded nodes search for heavily loaded nodes from which the execution of a process can be taken over. Whenever a node's load falls below the threshold value, it broadcasts a message to all nodes, or probes nodes one by one, to search for heavily loaded nodes. A heavily loaded node may then transfer one of its processes, provided the transfer does not reduce its own load below the normal threshold.
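
The following is a minimal sketch of the sender-initiated policy under assumed names: the Node class, THRESHOLD, and PROBE_LIMIT are illustrative, and the receiver-initiated variant simply inverts the comparison (an underloaded node probes for overloaded ones).

```python
import random

THRESHOLD = 3      # load above this makes a node a sender
PROBE_LIMIT = 4    # how many random nodes to probe before giving up

class Node:
    def __init__(self, name, load):
        self.name, self.load = name, load

def sender_initiated_transfer(sender, nodes):
    """Probe random nodes for a lightly loaded receiver; if none is found
    within PROBE_LIMIT probes, the process is executed locally."""
    candidates = [n for n in nodes if n is not sender]
    for target in random.sample(candidates, min(PROBE_LIMIT, len(candidates))):
        if target.load < THRESHOLD:      # found a lightly loaded receiver
            sender.load -= 1
            target.load += 1
            return target
    return sender                        # no receiver found: run locally

nodes = [Node("n1", 5), Node("n2", 1), Node("n3", 4)]
print("process runs on", sender_initiated_transfer(nodes[0], nodes).name)
```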

2. Process Transfer Policy: An all-or-nothing approach is used in this policy, with the threshold value of every node set to 1. A node becomes a receiver node if it has no process, and a sender node if it has more than one process. A node that has just turned idle cannot accept a new process immediately, which wastes processing power. To overcome this problem, a process can be transferred in anticipation to a node that is expected to become idle in the near future. Sometimes, to avoid wasting the processing power of nodes, the load-sharing algorithm raises the threshold value from 1 to 2.
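
A minimal sketch of this classification under the all-or-nothing policy; the classify helper is an illustrative assumption, and the optional threshold of 2 shows the anticipatory variant mentioned above.

```python
def classify(process_count, threshold=1):
    """Label a node as a sender, receiver, or neutral under the
    all-or-nothing process transfer policy."""
    if process_count == 0:
        return "receiver"                # idle node: can accept work
    if process_count > threshold:
        return "sender"                  # overloaded: should shed work
    return "neutral"

for count in (0, 1, 2, 5):
    print(count, "->", classify(count), "| threshold=2:", classify(count, 2))
```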

3. State Information Exchange Policy: In a load-sharing algorithm, nodes are not required to exchange state information periodically; however, a node does need to know the state of other nodes when it becomes either underloaded or overloaded. Thus two sub-policies are used here:

  • Broadcast when the state changes: A node broadcasts a state information request only when its state changes. Under the sender-initiated location policy, the request is broadcast only when a node becomes overloaded; under the receiver-initiated location policy, only when a node becomes underloaded.
  • Poll when the state changes: In a large network, polling is performed instead. A node randomly asks other nodes for state information, one at a time, until it finds a suitable node or reaches the probe limit (both sub-policies are sketched below).
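
Here is a minimal sketch of the two sub-policies; broadcast_state_change, poll_for_receiver, and POLL_LIMIT are illustrative names rather than a standard API.

```python
import random

POLL_LIMIT = 3   # maximum number of nodes to poll before giving up

def broadcast_state_change(node_name, new_state, peers):
    """Broadcast sub-policy: announce only when a node becomes
    overloaded or underloaded, not on a periodic schedule."""
    if new_state in ("overloaded", "underloaded"):
        for peer in peers:
            print(f"{node_name} -> {peer}: I am {new_state}")

def poll_for_receiver(loads, threshold=3):
    """Polling sub-policy: ask random nodes one at a time until an
    underloaded node is found or the poll limit is reached."""
    for name in random.sample(list(loads), min(POLL_LIMIT, len(loads))):
        if loads[name] < threshold:
            return name
    return None

broadcast_state_change("n1", "overloaded", ["n2", "n3"])
print("receiver:", poll_for_receiver({"n2": 5, "n3": 1, "n4": 4}))
```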

4. Load Estimation Policy: Load-sharing algorithms aim to keep nodes from sitting idle, and for that it is sufficient to know whether a node is busy or idle. Consequently, these algorithms typically use the simplest load estimation policy: counting the total number of processes on a node.
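
As a minimal sketch, on Linux the number of processes can be estimated by counting the numeric entries in /proc; this platform-specific approach is an illustrative assumption, not the only way to implement the policy.

```python
import os

def process_count():
    """Estimate node load as the total number of running processes
    (numeric directories in /proc correspond to PIDs on Linux)."""
    return sum(1 for entry in os.listdir("/proc") if entry.isdigit())

print("estimated load:", process_count())
```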

5. Priority Assignment Policy: It uses one of the following rules to determine the priority of the processes on a particular node. The rules are:

  • Selfish: Local processes are given higher priority than remote processes. This rule yields the worst response-time performance for remote processes and the best response-time performance for local processes.
  • Altruistic: Remote processes are given higher priority than local processes. This rule yields the best overall response-time performance.
  • Intermediate: The priority depends on the number of local and remote processes on a node. When the number of local processes is greater than or equal to the number of remote processes, local processes are given higher priority; otherwise, remote processes are given higher priority (the three rules are sketched below).
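
A minimal sketch of the three rules; the priority function and its string labels are illustrative assumptions.

```python
def priority(rule, is_local, local_count=0, remote_count=0):
    """Return 'high' or 'low' for a process under the chosen rule."""
    if rule == "selfish":                     # local processes win
        return "high" if is_local else "low"
    if rule == "altruistic":                  # remote processes win
        return "high" if not is_local else "low"
    # intermediate: the majority class gets the higher priority,
    # with ties going to local processes
    local_wins = local_count >= remote_count
    return "high" if is_local == local_wins else "low"

print(priority("selfish", is_local=False))                                     # low
print(priority("intermediate", is_local=True, local_count=2, remote_count=3))  # low
```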

6. Migration Limiting Policy: This policy decides the total number of times a process can migrate. One of the following two strategies may be used:

  • Uncontrolled: A remote process arriving at a node is treated the same as a process originating at that node, so a process can migrate any number of times.
  • Controlled: A migration-count parameter fixes the limit on the number of times a process can migrate, so a process can migrate only a fixed number of times. This removes the instability of the uncontrolled strategy (a sketch follows).
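
Here is a minimal sketch of the controlled strategy; MIGRATION_LIMIT, the Process class, and try_migrate are illustrative assumptions, and the uncontrolled strategy is the same code with the limit check removed.

```python
MIGRATION_LIMIT = 3   # maximum number of times a process may migrate

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.migrations = 0

def try_migrate(proc):
    """Allow a migration only while the per-process count is below the
    limit, preventing a process from bouncing between nodes forever."""
    if proc.migrations < MIGRATION_LIMIT:
        proc.migrations += 1
        return True
    return False

p = Process(pid=42)
print([try_migrate(p) for _ in range(5)])   # [True, True, True, False, False]
```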
