Routing requests through Load Balancer

What are Load Balancers?

When a system has multiple servers, each incoming request must be directed to one of them, and ideally every server should receive a roughly equal share of the traffic. The component responsible for distributing incoming requests uniformly across the servers is known as a Load Balancer. A Load Balancer acts as a layer between the users' incoming requests and the multiple servers in the system. It helps us avoid scenarios where a single server receives most of the requests while the rest sit idle. There are various Load Balancing Algorithms that ensure an even distribution of requests across the servers.

Hashing Approach to direct requests from the Load Balancer

We will discuss the hashing approach for directing requests to multiple servers uniformly. Suppose we have server_count, the total number of servers in the system, and a load_balancer to distribute the requests among those servers. A request with an id request_id enters the system. Before reaching its destination server, it is directed to the load_balancer, which in turn forwards it to the destination server. When the request reaches the load balancer, the hashing approach determines the destination server to which the request should be directed.

Discussing the Approach:

  • request_id : the ID of the incoming request to be served
  • hash_func : an evenly distributed hash function
  • hashed_id : the hashed request ID
  • server_count : the total number of servers

 

Java




class GFG {
    // Placeholder hash function: a real implementation would compute an
    // evenly distributed hash of the request id. Here it returns a fixed
    // value (112) purely for illustration.
    public static int hash_func(int request_id)
    {
        int hashed_id = 112;
        return hashed_id;
    }

    public static void route_request_to_server(int dest_server)
    {
        System.out.println("Routing request to the Server ID : " + dest_server);
    }

    public static int request_id = 23; // Incoming Request ID
    public static int server_count = 10; // Total Number of Servers

    public static void main(String args[])
    {
        int hashed_id = hash_func(request_id); // Hashing the incoming request id
        int dest_server = hashed_id % server_count; // Mapping the hash into the range [0, server_count)

        route_request_to_server(dest_server);
    }
}


 

Computing the Destination Server Address:

Suppose the value of server_count is 10, i.e., we have ten servers with the ids server_id_0, server_id_1, ………, server_id_9, and the value of request_id is 23. When this request reaches the Load Balancer, the hash function hash_func hashes the value of the incoming request id:

  • hash_func(request_id) = hash_func(23)

Suppose the hash function maps the request_id to some particular value, say:

  • hashed_id = 112

To bring the hashed id into the range of the number of servers, we take the hashed id modulo the count of servers:

  • dest_server = hashed_id % server_count
  • dest_server = 112 % 10
  • dest_server = 2

So we can route this request to server server_id_2. In this way, we can distribute all the requests coming to our Load Balancer evenly across the servers. But is it an optimal approach? It does distribute the requests evenly, but what happens if we need to increase the number of servers? Increasing the server count changes the destination server of almost every incoming request. If we were caching data related to a request on its destination server, then once that request is no longer routed to the same server, our entire cache effectively becomes useless. The sketch below illustrates this remapping problem; techniques such as consistent hashing exist precisely to address it.
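
As a minimal sketch of this remapping problem (assuming the same modulo scheme as above, with a simple stand-in hash function and hypothetical request ids), the following counts how many requests change their destination server when server_count grows from 10 to 11:

Java

class RehashDemo {
    // Stand-in for an evenly distributed hash function (illustration only).
    static int hashFunc(int requestId) {
        return (requestId * 0x9E3779B9) >>> 16; // simple integer mixing
    }

    public static void main(String[] args) {
        int oldServerCount = 10; // servers before scaling
        int newServerCount = 11; // one server added
        int totalRequests = 1000; // hypothetical request ids 0..999
        int moved = 0;

        for (int requestId = 0; requestId < totalRequests; requestId++) {
            int hashedId = hashFunc(requestId);
            int oldDest = hashedId % oldServerCount;
            int newDest = hashedId % newServerCount;
            if (oldDest != newDest) {
                moved++; // any cache built on the old server is now wasted
            }
        }
        // With plain modulo hashing, roughly 9 in 10 requests change servers here.
        System.out.println(moved + " of " + totalRequests + " requests moved");
    }
}

Consistent hashing keeps this churn small, so that adding one server remaps only a small fraction of the requests rather than nearly all of them.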

Load balancing is a technique used to distribute incoming requests evenly across multiple servers in a network, with the aim of improving the performance, capacity, and reliability of the system. A load balancer acts as a reverse proxy, routing incoming requests to different servers based on various algorithms and criteria.

Here’s how routing requests through a load balancer works:

  1. Incoming requests are received by the load balancer, which acts as a single point of entry for all incoming traffic.
  2. The load balancer uses an algorithm to determine which server should handle the request, based on factors such as the server’s current load, response time, and availability.
  3. The load balancer forwards the request to the selected server.
  4. The server processes the request and returns the response to the load balancer.
  5. The load balancer returns the response to the client.

By routing requests through a load balancer, the system can improve its performance and capacity, as the load balancer ensures that incoming requests are distributed evenly across all available servers. This helps to avoid overloading any individual server and ensures that the system can continue to handle incoming requests even if one or more servers fail.
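
As a rough illustration of this request/response flow, here is a minimal, hypothetical Java sketch of a load balancer front end; the Server interface and the string-based requests are stand-ins for this example, not a real library API:

Java

import java.util.List;

class LoadBalancerFlow {
    // Hypothetical stand-in for a backend server.
    interface Server {
        String handle(String request); // processes a request and returns a response
    }

    private final List<Server> servers;
    private int next = 0; // index for a simple round-robin rotation

    LoadBalancerFlow(List<Server> servers) {
        this.servers = servers;
    }

    // Single entry point: pick a server, forward the request,
    // and relay the response back to the caller (the "client").
    synchronized String route(String request) {
        Server chosen = servers.get(next);
        next = (next + 1) % servers.size(); // rotate for the next request
        return chosen.handle(request);
    }

    public static void main(String[] args) {
        List<Server> backends = List.of(
            req -> "server-0 handled " + req,
            req -> "server-1 handled " + req);
        LoadBalancerFlow lb = new LoadBalancerFlow(backends);
        System.out.println(lb.route("GET /home"));
        System.out.println(lb.route("GET /about"));
    }
}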

In a distributed system architecture, routing requests through a load balancer is a common technique to improve performance, scalability, and reliability. A load balancer acts as a traffic cop, distributing incoming requests across multiple servers to balance the load and prevent any one server from becoming overloaded.

Here are some key points to understand about routing requests through a load balancer:

  1. How it works: When a client sends a request to the system, it is first received by the load balancer. The load balancer then distributes the request to one of several servers in the system based on a predefined algorithm, such as round-robin, least connections, or IP hash. The server processes the request and sends the response back to the client through the load balancer.
  2. Load balancing algorithms: Load balancing algorithms are used by the load balancer to distribute requests across servers, and the choice of algorithm can affect the performance and reliability of the system. For example, round-robin is a simple algorithm that evenly distributes requests across servers, while least connections directs each request to the server with the fewest active connections (see the sketch after this list).
  3. Scaling: Routing requests through a load balancer can help scale a system by allowing additional servers to be added to handle increased traffic. When a new server is added, the load balancer can automatically distribute requests to it.
  4. High availability: Routing requests through a load balancer can also improve system reliability by providing redundancy. If one server fails, the load balancer can automatically redirect requests to another server.
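
As a concrete illustration of the least connections algorithm mentioned in point 2, here is a minimal, hypothetical Java sketch; the Backend class and its connection counter are assumptions made for this example:

Java

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

class LeastConnections {
    // Hypothetical backend with a counter of in-flight requests.
    static class Backend {
        final String id;
        final AtomicInteger activeConnections = new AtomicInteger(0);
        Backend(String id) { this.id = id; }
    }

    // Pick the backend currently serving the fewest requests.
    static Backend pick(List<Backend> backends) {
        Backend best = backends.get(0);
        for (Backend b : backends) {
            if (b.activeConnections.get() < best.activeConnections.get()) {
                best = b;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Backend> backends = List.of(
            new Backend("server-0"), new Backend("server-1"), new Backend("server-2"));

        // Simulate dispatching a few requests: each dispatch increments the
        // chosen backend's counter; a completed request would decrement it.
        for (int i = 0; i < 5; i++) {
            Backend chosen = pick(backends);
            chosen.activeConnections.incrementAndGet();
            System.out.println("Request " + i + " -> " + chosen.id);
        }
    }
}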

The advantages of routing requests through a load balancer include:

  1. Improved performance: By balancing the load across multiple servers, a load balancer can improve the response time of a system and prevent any one server from becoming overloaded.
  2. Increased scalability: A load balancer can help a system scale by allowing additional servers to be added to handle increased traffic.
  3. Improved reliability: A load balancer can improve the reliability of a system by providing redundancy and automatically redirecting requests to healthy servers if one server fails.
  4. Simplified management: A load balancer can simplify the management of a distributed system by allowing administrators to configure and manage multiple servers through a single interface.

However, there are also some potential disadvantages to routing requests through a load balancer, including:

  1. Cost: A load balancer can be expensive to implement and maintain, particularly for smaller systems.
  2. Complexity: A load balancer can add additional complexity to a system, particularly when configuring and managing multiple servers.
  3. Single point of failure: A load balancer can be a single point of failure for a system. If the load balancer fails, the entire system may become unavailable.

Overall, routing requests through a load balancer is a powerful technique for improving the performance, scalability, and reliability of a distributed system. By understanding the key principles and the potential advantages and disadvantages, developers can make informed decisions about when and how to use load balancing in their systems.



