Routing requests through a Load Balancer
What are Load Balancers?
When multiple servers are present, each incoming request needs to be directed to one of them, and ideally every server should receive roughly the same share of requests. The component responsible for distributing incoming requests uniformly across the servers is known as a Load Balancer. A Load Balancer acts as a layer between the incoming requests coming from the user and the multiple servers present in the system, and it prevents the scenario where a single server handles most of the requests while the rest sit idle. There are various Load Balancing Algorithms that ensure an even distribution of requests across the servers; one simple example, round robin, is sketched below.
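One of the simplest load balancing algorithms is round robin, which cycles through the servers in order so that each one receives roughly the same number of requests. The sketch below is illustrative only; the RoundRobinBalancer class and pickServer method are names made up for this example, not part of the article's code.

java
import java.util.List;

// Minimal round-robin load balancer sketch (illustrative, not the article's code).
class RoundRobinBalancer {
    private final List<String> servers;
    private int next = 0;

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Returns the next server in circular order, so requests are spread evenly.
    String pickServer() {
        String server = servers.get(next);
        next = (next + 1) % servers.size();
        return server;
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("server_id_0", "server_id_1", "server_id_2"));
        for (int request_id = 0; request_id < 6; request_id++) {
            System.out.println("Request " + request_id + " -> " + lb.pickServer());
        }
    }
}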
Hashing Approach to direct requests from the Load Balancer
This section discusses the hashing approach for directing requests to multiple servers uniformly. Suppose server_count is the total number of servers present in the system and load_balancer distributes the requests among those servers. A request with id request_id enters the system; before reaching its destination server it is first directed to the load_balancer, which then forwards it onward. When the request reaches the load balancer, the hashing approach determines the destination server to which the request should be directed.
Discussing the Approach:
- request_id : Request ID to be served
- hash_func : Evenly distributed hash function
- hashed_id : Hashed request ID
- server_count : Number of servers
java
class GFG {

    // Incoming request ID
    public static int request_id = 23;

    // Total number of servers
    public static int server_count = 10;

    // Computing the hash of the request id
    public static int hash_func(int request_id) {
        int hashed_id = 112;
        return hashed_id;
    }

    public static void route_request_to_server(int dest_server) {
        System.out.println("Routing request to the Server ID : " + dest_server);
    }

    public static void main(String args[]) {
        // Hashing the incoming request id
        int hashed_id = hash_func(request_id);

        // Computing the destination server id
        int dest_server = hashed_id % server_count;

        route_request_to_server(dest_server);
    }
}
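Note that hash_func above is only a stub that always returns 112, so that the walkthrough in the next section has a concrete value to work with. As a rough idea of what an evenly distributed hash function could look like, here is a hypothetical sketch; the HashSketch class and the mixing constant are assumptions made for illustration, not the article's implementation.

java
// Hypothetical hash function sketch: mixes the bits of the request id so that
// consecutive ids do not land on consecutive servers. The multiplier is an
// arbitrary odd constant chosen only for this example.
class HashSketch {
    static int hash_func(int request_id) {
        int h = request_id * 0x9E3779B1; // multiplicative bit mixing
        h ^= (h >>> 16);                 // fold high bits into low bits
        return h & 0x7FFFFFFF;           // keep the result non-negative
    }

    public static void main(String[] args) {
        int server_count = 10;
        for (int request_id = 20; request_id < 25; request_id++) {
            int dest_server = hash_func(request_id) % server_count;
            System.out.println("request " + request_id + " -> server_id_" + dest_server);
        }
    }
}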
Computing the Destination Server Address:
If the value of server_count is 10, i.e. we have ten servers with ids server_id_0, server_id_1, ………, server_id_9, and the value of request_id is 23, then when this request reaches the Load Balancer the hash function hash_func hashes the value of the incoming request id.
- hash_func(request_id) = hash_func(23)
Suppose that, after hashing, the request_id maps to the following value.
- hashed_id = 112
To bring the hashed id into the range of the number of servers, we take the hashed id modulo the server count.
- dest_server = hashed_id % server_count
- dest_server = 112 % 10
- dest_server = 2
So we can route this request to server server_id_2. In this way we can distribute all the requests coming to our Load Balancer evenly across the servers. But is this an optimal approach? It does distribute the requests evenly, but what happens if we need to increase the number of servers? Changing the server count changes the destination server of almost every incoming request. If we were storing a cache for each request on its destination server, those requests are no longer routed to the same servers, so practically the entire cache becomes useless. Think about it!
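To make the problem concrete, the sketch below counts how many request ids would change their destination server if the server count grew from 10 to 11 under this modulo-based routing. The RehashDemo class is hypothetical and uses Integer.hashCode as a stand-in for the article's hash_func.

java
// Sketch: counts how many requests get remapped when the server count grows
// from 10 to 11 under simple modulo-based routing.
class RehashDemo {
    public static void main(String[] args) {
        int oldCount = 10, newCount = 11, total = 1000, remapped = 0;
        for (int request_id = 0; request_id < total; request_id++) {
            int hashed_id = Integer.hashCode(request_id); // stand-in hash
            if (hashed_id % oldCount != hashed_id % newCount) {
                remapped++;
            }
        }
        // With plain modulo routing, roughly 10 out of every 11 requests move
        // to a different server, so any per-server cache built for them is lost.
        System.out.println(remapped + " of " + total + " requests changed servers");
    }
}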
Load balancing is a technique used to distribute incoming requests evenly across multiple servers in a network, with the aim of improving the performance, capacity, and reliability of the system. A load balancer acts as a reverse proxy, routing incoming requests to different servers based on various algorithms and criteria.
Here’s how routing requests through a load balancer works (a minimal sketch of this flow follows the list):
1. Incoming requests are received by the load balancer, which acts as a single point of entry for all incoming traffic.
2. The load balancer uses an algorithm to determine which server should handle the request, based on factors such as the server’s current load, response time, and availability.
3. The load balancer forwards the request to the selected server.
4. The server processes the request and returns the response to the load balancer.
5. The load balancer returns the response to the client.
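A minimal sketch of this flow is shown below, assuming an in-memory Server class and a least-loaded selection rule; both are illustrative choices made for this example, not the only way a load balancer can pick a server.

java
import java.util.List;

// Hedged sketch of the five steps above: receive a request, pick a server,
// forward the request, collect the response, and return it to the client.
class LoadBalancerFlow {
    static class Server {
        final String name;
        int activeRequests = 0;
        Server(String name) { this.name = name; }
        String handle(String request) {
            return "response to '" + request + "' from " + name;
        }
    }

    private final List<Server> servers;
    LoadBalancerFlow(List<Server> servers) { this.servers = servers; }

    String route(String request) {
        // Step 2: pick the server with the fewest active requests.
        Server target = servers.get(0);
        for (Server s : servers) {
            if (s.activeRequests < target.activeRequests) target = s;
        }
        target.activeRequests++;
        // Steps 3-4: forward the request and collect the response.
        String response = target.handle(request);
        target.activeRequests--;
        // Step 5: return the response to the client.
        return response;
    }

    public static void main(String[] args) {
        LoadBalancerFlow lb = new LoadBalancerFlow(
                List.of(new Server("server_0"), new Server("server_1")));
        System.out.println(lb.route("GET /index.html"));
        System.out.println(lb.route("GET /about.html"));
    }
}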
By routing requests through a load balancer, the system can improve its performance and capacity, as the load balancer ensures that incoming requests are distributed evenly across all available servers. This helps to avoid overloading any individual server and ensures that the system can continue to handle incoming requests even if one or more servers fail.