
What is Load Balancer & How Load Balancing works?

Last Updated : 07 Mar, 2024

A Load Balancer is defined as a networking device or software application that distributes and balances incoming traffic among servers to provide high availability, efficient utilization of servers, and high performance. A load balancer works as a “traffic cop” sitting in front of your servers and routing client requests across all of them. It distributes the requested operations (database write requests, cache queries, and so on) effectively across multiple servers and ensures that no single server bears too many requests.


What is a Load Balancer?

A load balancer is a networking device or software application that distributes and balances the incoming traffic among the servers to provide high availability, efficient utilization of servers, and high performance.

  • Load balancers are widely used in cloud computing domains, data centers, and large-scale web applications where traffic flow needs to be managed.
  • The primary goal of using a load balancer is to prevent any single server from being overburdened by heavy incoming traffic, which could lead to server crashes or high latency.

What will happen if there is NO Load Balancer?

Before understanding how a load balancer works, let’s understand what problem will occur without the load balancer through an example.

Consider a scenario where an application is running on a single server and the client connects to that server directly without load balancing.

[Figure: Architecture without a load balancer]

There are two main problems with this model:

  • Single Point of Failure: 
    • If the server goes down or something happens to it, the whole application will be interrupted and become unavailable to users for some period of time. This creates a bad experience for users, which is unacceptable for service providers.
  • Overloaded Servers: 
    • There is a limit to the number of requests a single web server can handle. If the business grows and the number of requests increases, the server will be overloaded.
    • To handle the increasing number of requests, we need to add more servers and distribute the requests across this cluster of servers.

Key characteristics of Load Balancers:

  1. Traffic Distribution: Load balancers evenly distribute incoming requests among multiple servers, preventing any single server from being overloaded.
  2. High Availability: By distributing traffic across multiple servers, load balancers enhance the availability and reliability of applications. If one server fails, the load balancer redirects traffic to healthy servers.
  3. Scalability: Load balancers facilitate horizontal scaling by easily accommodating new servers or resources to handle increasing traffic demands.
  4. Optimization: Load balancers optimize resource utilization, ensuring efficient use of server capacity and preventing bottlenecks.
  5. Health Monitoring: Load balancers often monitor the health of servers, directing traffic away from servers experiencing issues or downtime.
  6. SSL Termination: Some load balancers can handle SSL/TLS encryption and decryption, offloading this resource-intensive task from servers.

How Load Balancer Works?

Let’s understand how a load balancer works by continuing the example discussed above:

To solve the above issues and distribute the requests, we can add a load balancer in front of the web servers, allowing our service to handle any number of requests by adding any number of web servers to the network.

  • We can spread the requests across multiple servers.
  • If one of the servers goes offline for some reason, the service will continue on the remaining servers.
  • Also, the latency of each request will go down because no single server is bottlenecked on RAM/Disk/CPU anymore.

[Figure: How a load balancer works]

Load balancers minimize server response time and maximize throughput. They ensure high availability and reliability by sending requests only to servers that are online, and they perform continuous health checks to monitor each server’s ability to handle requests. Depending on the number of requests or the demand, load balancers (together with auto-scaling) can add or remove servers from the pool.
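To make this concrete, here is a minimal sketch in Python of the core decision a load balancer makes on every request: skip unhealthy servers and spread traffic across the healthy ones in turn. The backend addresses and the health-marking methods below are illustrative assumptions, not a real product’s API.

```python
import itertools

class SimpleLoadBalancer:
    """Minimal round-robin load balancer with a basic health filter."""

    def __init__(self, backends):
        # backends: list of "host:port" strings (hypothetical addresses)
        self.backends = backends
        self.healthy = set(backends)           # assume all healthy at start
        self._cycle = itertools.cycle(backends)

    def mark_down(self, backend):
        """Called when a health check fails; stop routing to this backend."""
        self.healthy.discard(backend)

    def mark_up(self, backend):
        """Called when a health check succeeds again."""
        self.healthy.add(backend)

    def pick_backend(self):
        """Return the next healthy backend in round-robin order."""
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends available")

# Usage: requests keep flowing even when one server is marked unhealthy.
lb = SimpleLoadBalancer(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
lb.mark_down("10.0.0.2:80")                   # a health check failed
print([lb.pick_backend() for _ in range(4)])  # only healthy servers are chosen
```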

Types of Load Balancers

[Figure: Types of load balancers]

There are mainly four types of load balancers:

1. Software Load Balancers in Clients

As the name suggests, all the load balancing logic resides in the client application (e.g., a mobile phone app). The client application is provided with a list of web servers/application servers to interact with.

  • The application chooses the first one in the list and requests data from the server.
  • If failures occur persistently (after a configurable number of retries) and the server becomes unavailable, the client discards that server and chooses another one from the list to continue the process.
  • This is one of the cheapest ways to implement load balancing. 
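A rough sketch of this client-side approach, assuming a hypothetical `fetch_from` callable that performs a single request against one server:

```python
def fetch_with_client_side_lb(servers, fetch_from, max_retries=3):
    """Client-side load balancing: try servers from a configured list,
    discarding any server that fails persistently.

    servers    -- list of server addresses shipped with the client app
    fetch_from -- caller-supplied function that performs one request
                  against a single server (hypothetical, not a real API)
    """
    remaining = list(servers)
    while remaining:
        server = remaining[0]               # could also pick at random
        for attempt in range(max_retries):
            try:
                return fetch_from(server)   # success: return the response
            except ConnectionError:
                continue                    # transient failure: retry
        # persistent failure: discard this server and move to the next one
        remaining.remove(server)
    raise RuntimeError("all configured servers are unavailable")
```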

2. Software Load Balancers in Services

These load balancers are pieces of software that receive a set of requests and redirect them according to a set of rules. This type of load balancer provides much more flexibility because it can be installed on any standard device (e.g., a Windows or Linux machine).

  • It is also less expensive because there is no need to purchase or maintain the physical device, unlike hardware load balancers.
  • You have the option to use an off-the-shelf software load balancer or to write your own custom software for load balancing (e.g., to load balance Active Directory queries of Microsoft Office 365).
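As a sketch of the “set of rules” idea (the path prefixes and pool names below are purely illustrative, not taken from any real product), a service-side software load balancer can be viewed as a process that matches each request against routing rules and then picks a backend from the corresponding pool:

```python
# A toy rule-based router: requests are matched against path-prefix rules
# and dispatched round-robin to the corresponding backend pool.
ROUTING_RULES = [
    ("/api/",    ["api-1:8080", "api-2:8080"]),
    ("/static/", ["cdn-1:8080"]),
    ("/",        ["web-1:8080", "web-2:8080", "web-3:8080"]),  # catch-all
]

counters = {}  # per-rule round-robin position

def route(path):
    """Return the backend chosen for a request path."""
    for prefix, pool in ROUTING_RULES:
        if path.startswith(prefix):
            i = counters.get(prefix, 0)
            counters[prefix] = i + 1
            return pool[i % len(pool)]
    raise ValueError("no rule matched")

print(route("/api/users"))   # api-1:8080
print(route("/api/orders"))  # api-2:8080
print(route("/index.html"))  # web-1:8080
```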

3. Hardware Load Balancers

As the name suggests, a physical appliance is used to distribute the traffic across the cluster of network servers. These load balancers are also known as Layer 4-7 Routers, and they are capable of handling all kinds of HTTP, HTTPS, TCP, and UDP traffic. Hardware load balancers provide a virtual server address to the outside world.

  • When a request comes from a client application, the load balancer forwards the connection to the most appropriate real server, performing bi-directional network address translation (NAT).
  • Hardware load balancers can handle a large volume of traffic, but they come with a hefty price tag and offer limited flexibility.
  • They keep running health checks on each server and ensure that each server is responding properly.
  • If any server does not produce the desired response, the load balancer immediately stops sending traffic to it.
  • These load balancers are expensive to acquire and configure, which is why many service providers use them only as the first entry point for user requests; internal software load balancers are then used to redirect the traffic behind the infrastructure wall.

Software vs. Hardware Load Balancers: Which one to choose?

The choice between software and hardware load balancers depends on various factors such as the scale of your application, budget constraints, and specific performance requirements. Small to medium-sized enterprises might find software load balancers more cost-effective and flexible, while larger enterprises with high traffic loads might opt for the dedicated power of hardware load balancers.

4. Virtual Load Balancers

A virtual load balancer is a load balancing solution implemented as a virtual machine (VM) or software instance within a virtualized environment, such as data centers utilizing virtualization technologies like VMware, Hyper-V, or KVM. It plays a crucial role in distributing incoming network traffic across multiple servers or resources to ensure efficient utilization of resources, improve response times, and prevent server overload.

Important read: What are Layer-4(L4), Layer-7(L7), and GSLB load balancers?

Load Balancing Algorithms


We need a load-balancing algorithm to decide which request should be redirected to which backend server. Different systems use different ways to select servers behind the load balancer, and companies use a variety of load-balancing techniques depending on their configuration. Load balancing algorithms can be broadly categorized into two types: dynamic load balancing and static load balancing.

1. Static Load Balancing Algorithms

Static load balancing involves predetermined assignment of tasks or resources without considering real-time variations in the system. This approach relies on a fixed allocation of workloads to servers or resources, and it doesn’t adapt to changes during runtime.

Types of Static Load Balancing Algorithms

  1. Round Robin
  2. Weighted Round-Robin
  3. Source IP hash
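The three static strategies listed above can each be sketched in a few lines of Python (server names and weights are placeholders):

```python
import hashlib
import itertools

SERVERS = ["s1", "s2", "s3"]

# 1. Round Robin: hand out servers in a fixed rotating order.
_rr = itertools.cycle(SERVERS)
def round_robin():
    return next(_rr)

# 2. Weighted Round Robin: servers with higher weights receive proportionally
#    more requests (here s1 is assumed to be twice as powerful as the others).
WEIGHTS = {"s1": 2, "s2": 1, "s3": 1}
_weighted = itertools.cycle([s for s in SERVERS for _ in range(WEIGHTS[s])])
def weighted_round_robin():
    return next(_weighted)

# 3. Source IP Hash: the same client IP always maps to the same server,
#    which also gives a simple form of session stickiness.
def source_ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print([round_robin() for _ in range(4)])           # s1, s2, s3, s1
print([weighted_round_robin() for _ in range(4)])  # s1, s1, s2, s3
print(source_ip_hash("203.0.113.7"))               # always the same server
```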

2. Dynamic Load Balancing Algorithms

Dynamic load balancing involves making real-time decisions about how to distribute incoming network traffic or computational workload across multiple servers or resources. This approach adapts to the changing conditions of the system, such as variations in server load, network traffic, or resource availability.

Types of Dynamic Load Balancing Algorithms

  1. Least Connection Method
  2. Least Response Time Method
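Both dynamic strategies rely on live state about each backend. A rough sketch, using in-memory metrics with illustrative placeholder values that a real load balancer would update continuously from health checks and request accounting:

```python
# Live per-server metrics (values are illustrative placeholders).
METRICS = {
    "s1": {"active_connections": 12, "avg_response_ms": 40},
    "s2": {"active_connections": 3,  "avg_response_ms": 95},
    "s3": {"active_connections": 7,  "avg_response_ms": 25},
}

def least_connections():
    """Pick the server currently handling the fewest active connections."""
    return min(METRICS, key=lambda s: METRICS[s]["active_connections"])

def least_response_time():
    """Pick the server with the lowest average response time,
    using active connections as a tie-breaker."""
    return min(
        METRICS,
        key=lambda s: (METRICS[s]["avg_response_ms"],
                       METRICS[s]["active_connections"]),
    )

print(least_connections())    # s2
print(least_response_time())  # s3
```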

The choice between dynamic and static load balancing depends on the characteristics of the system, the nature of the workload, and the desired level of adaptability. Dynamic load balancing is often favored in dynamic, high-traffic environments, while static load balancing may be suitable for more predictable scenarios.

Benefits of using a Load Balancer

  1. Increases performance:
    • Any single web server, when given huge traffic, may not perform well and can cause downtime for users, thereby degrading performance.
    • A load balancer ensures that users experience no downtime and get better performance.
  2. Increase Scalability:
    • A load balancer, along with auto-scaling, ensures that if your existing servers are receiving high traffic, more servers will be provisioned and automatically accommodated into the server cluster.
  3. Efficiently manages failure:
    • The load balancer ensures that any server that is experiencing issues or is not healthy enough to serve user requests is kept out of the distribution.
  4. Prevent Traffic Bottleneck:
    • A software load balancer can predict whether a huge traffic rush to the servers is coming and thus informs or warns us so that appropriate measures can be taken.
  5. Efficient Resource Utilization:
    • Load balancers distribute incoming requests or tasks across multiple servers, ensuring that each server handles an appropriate share of the workload.
  6. Maintaining User Sessions:
    • Load balancers can be configured for session persistence (sticky sessions), ensuring that a user’s session is maintained by directing all of that user’s requests to the same server (a short sketch follows this list).
    • This is essential for applications that require stateful communication.
  7. High Availability:
    • Load balancers enhance the availability of applications by distributing traffic across multiple servers. If one server fails, traffic is redirected to healthy servers, minimizing downtime.
  8. Fault Tolerance:
    • Load balancers provide fault tolerance by redirecting traffic away from failed or unhealthy servers, maintaining the continuity of services.
  9. SSL Termination:
    • Load balancers can handle SSL/TLS encryption and decryption, offloading this computationally intensive task from servers and improving overall efficiency.
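As a sketch of session persistence (benefit 6 above), a load balancer can pin a session to a backend the first time it sees that session’s identifier and reuse the mapping for subsequent requests. The class, server names, and session IDs below are illustrative assumptions:

```python
class StickySessionBalancer:
    """Session persistence: once a session is assigned to a server,
    all later requests carrying that session ID go to the same server."""

    def __init__(self, servers):
        self.servers = servers
        self.assignments = {}   # session_id -> server
        self._next = 0

    def pick(self, session_id):
        if session_id not in self.assignments:
            # First request of this session: assign a server round-robin.
            self.assignments[session_id] = self.servers[self._next % len(self.servers)]
            self._next += 1
        return self.assignments[session_id]

lb = StickySessionBalancer(["s1", "s2", "s3"])
print(lb.pick("session-abc"))  # s1
print(lb.pick("session-xyz"))  # s2
print(lb.pick("session-abc"))  # s1 again -- the session stays on one server
```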

Cons/Drawbacks of Load Balancers:

  1. Single Point of Failure:
    • While load balancers enhance fault tolerance, they can become a single point of failure. If the load balancer itself experiences issues, it may disrupt traffic distribution.
  2. Complexity and Cost:
    • Implementing and managing load balancers can introduce complexity, and high-quality load balancing solutions may come with a cost. This includes both hardware and software load balancers.
  3. Configuration Challenges:
    • Configuring load balancers correctly can be challenging, especially when dealing with complex application architectures or diverse server environments.
  4. Potential for Overhead:
    • Depending on the load balancing algorithm and configuration, there can be additional overhead in terms of latency and processing time, although modern load balancers are designed to minimize this impact.
  5. SSL Inspection Challenges:
    • When SSL termination is performed at the load balancer, it may introduce challenges related to SSL inspection and handling end-to-end encryption.
  6. Learning Curve:
    • Administrators and developers may need to invest time in understanding and configuring load balancers, especially for more advanced features and settings.

While the benefits of load balancers significantly outweigh the drawbacks, it’s important to carefully plan their implementation, considering the specific needs and characteristics of the application or service being load balanced.

Conclusion

In conclusion, a load balancer serves as a pivotal component in modern computing architectures, providing numerous benefits for the efficient and reliable operation of applications and services. By distributing incoming traffic across multiple servers, load balancers optimize resource utilization, enhance performance, and ensure high availability.


