A distributed system is a set of computers connected by a communication network. Each computer has its own database system, and users may access data from any point on the network, which requires data to be available at every site. Example: to withdraw money, you can go to any ATM (even one belonging to another bank) and swipe your card. The money is debited from your account, and the transaction is reflected in it. It doesn’t matter whether you withdraw cash at an ATM or transfer money to someone through net banking: internally, all of these systems are connected to each other and work as a single unit, even though in real life we see them as distributed.
Load balancing is a really important concept in distributed computing, and it means exactly what its name suggests. Let’s take the example of an OTT platform; call it ABC. On a weekend, many people send requests to the server to show the movies or web series of their choice. Behind any large application, there are a lot of servers that deal with client requests and deliver responses. Say our platform ABC has three servers: S1, S2, and S3. As a lot of requests are coming in, we need to make sure that they are balanced among these three servers. If all the requests go to S1 while S2 and S3 sit idle, the load on S1 increases, which may crash that server, and it is also bad for clients because they get delayed responses.
Thus we can clearly see that a load balancer improves the overall performance of a distributed system.
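The scheme described above, where requests are spread evenly across S1, S2, and S3, can be sketched as a simple round-robin dispatcher (a minimal illustration; the server names come from the example above, and the `RoundRobinBalancer` class is hypothetical):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Cycles through the servers so each receives an equal share of requests."""

    def __init__(self, servers):
        self._rotation = cycle(servers)

    def route(self, request):
        # Pick the next server in rotation and hand it the request.
        return next(self._rotation)

balancer = RoundRobinBalancer(["S1", "S2", "S3"])
assignments = [balancer.route(f"req-{i}") for i in range(6)]
# -> ["S1", "S2", "S3", "S1", "S2", "S3"]: no server sits idle while another is overloaded.
```

Real load balancers (NGINX, HAProxy, cloud offerings) layer health checks and weighting on top of this basic idea.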
Issues Related to Load Balancing
1. Performance Degradation:
Load balancing may itself degrade performance when balancers assign equal or predetermined weights to diverse resources, resulting in poor performance in terms of speed and cost. There is therefore a need for effective load balancers that balance load depending on the type and capacity of each resource.
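One way to balance load according to resource capacity, instead of treating all servers as equal, is weighted round-robin (a sketch; the weight values are hypothetical):

```python
from itertools import cycle, islice

def weighted_assignments(weights, num_requests):
    """Distribute requests in proportion to each server's capacity weight."""
    # Repeat each server name according to its weight, then cycle through the pool.
    pool = [name for name, w in weights.items() for _ in range(w)]
    return list(islice(cycle(pool), num_requests))

# Suppose S1 has twice the capacity of S2, so it should receive twice the requests.
counts = {}
for server in weighted_assignments({"S1": 2, "S2": 1}, 30):
    counts[server] = counts.get(server, 0) + 1
# counts -> {"S1": 20, "S2": 10}
```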
2. Job Selection:
This deals with the issue of job selection. Whenever jobs are assigned to resources through a load balancer, there should be an optimal algorithm to decide the order of the jobs and which jobs should go to which servers, so that the system works efficiently.
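A simple job-selection policy along these lines is "join the shortest queue": always hand the next job to the server with the fewest pending jobs. A minimal sketch using a heap (job and server names are illustrative):

```python
import heapq

def assign_jobs(jobs, servers):
    """Give each job, in order, to the server with the fewest pending jobs."""
    # Min-heap of (pending_job_count, server_name) pairs.
    heap = [(0, s) for s in servers]
    heapq.heapify(heap)
    placement = {}
    for job in jobs:
        pending, server = heapq.heappop(heap)   # least-loaded server
        placement[job] = server
        heapq.heappush(heap, (pending + 1, server))
    return placement

placement = assign_jobs([f"job-{i}" for i in range(6)], ["S1", "S2", "S3"])
# Six jobs over three servers: each server ends up with exactly two jobs.
```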
3. Load Level Comparison:
Load should be distributed on the basis of a comparison of the load levels of the different servers. Thus a whole subsystem needs to be set up for collecting and maintaining each server’s status data.
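Given such collected status data, the comparison itself can be as simple as picking the server that reports the lowest load (a sketch; the status table and load values are hypothetical):

```python
def least_loaded(status_table):
    """Compare reported load levels and return the least-loaded server."""
    return min(status_table, key=status_table.get)

# Hypothetical status data collected from each server (e.g. CPU utilization).
status = {"S1": 0.82, "S2": 0.35, "S3": 0.61}
least_loaded(status)  # -> "S2"
```

In practice the hard part is keeping `status` fresh: servers must report periodically, and stale entries must be aged out.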
4. Load Estimation:
There is no exact way to determine or predict the load, or the total number of processes, on a node, since the demand for process resources fluctuates quickly; the load can only be estimated.
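Since exact load cannot be known, practical systems estimate it, for example by smoothing recent observations with an exponentially weighted moving average so that rapid fluctuations do not dominate (a sketch, not from the text; the sample values are hypothetical):

```python
def ema_load(samples, alpha=0.3):
    """Exponentially weighted moving average of observed load samples.

    Recent samples count more than older ones, which smooths out the
    rapid fluctuations that make raw load readings unreliable.
    """
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

# Noisy observed loads; the estimate tracks the trend without jumping around.
ema_load([40, 80, 20, 60])
```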
5. Performance Indices:
Load balancers should provide stability: even during extreme events, such as a drastic spike in the number of incoming requests, the system’s performance indices should not degrade beyond a particular point.
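One well-known technique for keeping load stable under a spike in requests (not named in the text, but widely used) is the "power of two choices": sample two servers at random and route each request to the less loaded of the two. A sketch with hypothetical servers:

```python
import random

def two_choices(loads, rng=random):
    """Sample two servers at random; route to the less loaded one.

    This keeps the maximum load close to the average even under bursts,
    without requiring a global view of every server.
    """
    a, b = rng.sample(list(loads), 2)
    winner = a if loads[a] <= loads[b] else b
    loads[winner] += 1
    return winner

loads = {f"S{i}": 0 for i in range(1, 11)}
rng = random.Random(42)  # fixed seed so the sketch is reproducible
for _ in range(1000):
    two_choices(loads, rng)
# The 1000 requests end up spread far more evenly than purely random routing.
```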
6. Availability and Scalability:
A distributed system should be highly available and scalable. Nowadays the concept of distributed systems is used all over the globe, and it gives customers a lot of flexibility to consume services on demand. Therefore an effective load balancer must accommodate changes in expected processing power and scale.
7. Single Point of Failure:
In a normal load balancer, there is a central node that is in charge of load-balancing choices. As one node is given all the power, this leads to the condition of a single point of failure: if the central node fails, it badly impacts the application. Therefore there is a need for distributed algorithms to make sure that we don’t rely on a central node for all our tasks.
8. Security:
A load balancer also raises some security issues, since it is vulnerable to attacks. This issue can be minimized to a large extent by using cloud load balancing, which is less prone to attacks.
9. Amount of Information Exchanged among Nodes:
As we are aware, fixing network problems is quite difficult, and introducing a load balancer into the picture adds to the difficulty. It might be hard to tell whether the load balancer is merely discarding packets, altering packets, or increasing delay.
10. Algorithm Complexity:
The algorithm of the load balancer should be simple; it should not be sophisticated or of high time complexity. The more complicated the system gets, the more latency increases, which in turn increases the response time of the server. Therefore a sophisticated algorithm harms the distributed system’s overall productivity.
11. Homogeneous Nodes:
The requirements expected of a system change from time to time. Therefore we can’t go for homogeneous nodes, i.e., nodes that are made to do only a certain type of task. As a result, developing efficient load-balancing solutions for diverse environments and heterogeneous nodes is a difficult task.