What are Load Balancers?
When a system has multiple servers, each incoming request must be directed to one of them. We should ensure that requests are distributed uniformly so that every server handles a roughly equal share. The component responsible for distributing incoming requests uniformly across the servers is known as a Load Balancer. A Load Balancer acts as a layer between the incoming requests from users and the multiple servers present in the system.
We should avoid scenarios where a single server receives most of the requests while the rest sit idle. There are various Load Balancing Algorithms that ensure an even distribution of requests across the servers.
Hashing Approach to direct requests from the Load Balancer
We will discuss the Hashing Approach for directing requests uniformly across multiple servers.
Suppose we have server_count, the total number of servers present in the system, and a load_balancer to distribute the requests among those servers. A request with an id request_id enters the system. Before reaching its destination server, it is directed to the load_balancer, which forwards it to the appropriate server.
When the request reaches the load balancer, the hashing approach determines the destination server to which the request should be directed.
Discussing the Approach:
- request_id : Request ID coming to get served
- hash_func : Evenly distributed Hash Function
- hashed_id : Hashed Request ID
- server_count : Number of Servers
Computing the Destination Server address:
If the value of server_count is 10, i.e., we have ten servers with the following server ids: server_id_0, server_id_1, ………, server_id_9.
Suppose the value of request_id is 23.
When this request reaches the Load Balancer, the hash function hash_func hashes the value of the incoming request id.
- hash_func(request_id) = hash_func(23)
Suppose after hashing the request_id is mapped to a particular value.
- hashed_id = 112
To bring the hashed id into the range of the number of servers, we take the hashed id modulo the count of servers.
- dest_server = hashed_id % server_count
- dest_server = 112 % 10
- dest_server = 2
So we can route this request to server server_id_2.
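The steps above can be sketched in Python. The helper name `get_destination_server` and the choice of MD5 as the evenly distributed hash function are illustrative assumptions; any well-distributed hash would serve as hash_func.

```python
import hashlib

def get_destination_server(request_id: int, server_count: int) -> int:
    # hash_func: hash the incoming request id with an evenly
    # distributed hash function (MD5 used here for illustration)
    hashed_id = int(hashlib.md5(str(request_id).encode()).hexdigest(), 16)
    # bring the hashed id into the range [0, server_count)
    return hashed_id % server_count

# Route request 23 among 10 servers; the result is some id in 0..9
print(get_destination_server(23, 10))
```

Because the hash is deterministic, the same request_id is always routed to the same server as long as server_count does not change.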
In this way we can distribute all the requests arriving at our Load Balancer evenly across the servers. But is this an optimal approach? It does distribute requests evenly, but what happens if we need to increase the number of servers? Increasing the server count changes the destination server of nearly every incoming request. What if we were storing a cache for each request on its destination server? Since those requests are no longer routed to their earlier servers, almost the entire cache becomes useless. Think!
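We can see this remapping problem empirically. The sketch below (helper name `dest` and MD5 are illustrative assumptions) counts how many of 1000 request ids change destination when the server count grows from 10 to 11; with simple modulo hashing, roughly 10 out of every 11 requests move.

```python
import hashlib

def dest(request_id: int, server_count: int) -> int:
    # same hash-then-modulo routing as above
    hashed_id = int(hashlib.md5(str(request_id).encode()).hexdigest(), 16)
    return hashed_id % server_count

# Count requests whose destination changes when we scale 10 -> 11 servers
moved = sum(1 for r in range(1000) if dest(r, 10) != dest(r, 11))
print(f"{moved} of 1000 requests changed servers")
```

This is the motivation for techniques such as consistent hashing, which keep most keys on their original servers when nodes are added or removed.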