
Using Nginx As HTTP Load Balancer

Last Updated : 05 Mar, 2024

Load balancing is a technique used in modern application deployments to build fault-tolerant systems. A load balancer is a networking component that distributes incoming traffic across multiple backend servers so that no single server is overloaded, keeping the overall system responsive and fault-tolerant. Nginx is an enterprise-grade web server that is widely used across the industry. In this article, we will see how to configure nginx for HTTP load balancing.

Primary Terminologies

  • Load Balancer: A load balancer is a device that distributes incoming network traffic across multiple servers. The goal is to ensure no single server bears too much demand, preventing performance degradation and improving overall availability and reliability.
  • Upstream Servers: These are the backend servers that receive traffic from the load balancer. Nginx distributes incoming requests among these servers based on the configured load-balancing algorithm.
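These two terms map directly onto nginx configuration. As a rough sketch of the shape we will build later in this article (the server addresses here are placeholders, not real machines):

```nginx
# Sketch: the upstream block names the group of backend servers;
# proxy_pass forwards matching requests to that group.
http {
    upstream backend {
        server 10.0.0.11;   # upstream server 1 (placeholder address)
        server 10.0.0.12;   # upstream server 2 (placeholder address)
    }

    server {
        listen 80;          # the load balancer listens here

        location / {
            proxy_pass http://backend;   # nginx picks a backend per request
        }
    }
}
```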

Using Nginx as an HTTP Load Balancer

NOTE: For this article, we will create Virtual Machines in Azure for use as servers. You can also use local machines for configuration.

Step 1: Create and set up servers.

  • Create virtual machines in Azure. We will create 3 machines.


  • One machine will act as the load-balancing server, while the other two will act as backend servers.


  • Allow inbound traffic on port 22 (SSH) and port 80 (HTTP).


  • Add a username and password, leave everything else at its defaults, then review and create.
  • Once all three machines are ready, proceed to the next step.

Step 2: Install and Configure Nginx.

  • SSH into each machine using its public IP address.


  • Update the package index on each machine.
sudo apt update


  • Once the update is done, install the Nginx server using the command below.
sudo apt install nginx


  • You can check the status of nginx using the command below.
systemctl status nginx


  • You can also verify the installation by hitting the public IP address in a browser. You should see the nginx default landing page.


Step 3: Configure Nginx Web pages.

  • Let’s add some identifying text to each machine’s nginx landing page so we can tell the servers apart.
  • Switch to the root user using the command below.
sudo su
  • Go to the /var/www/html directory and open the default index page. On Debian/Ubuntu it is named index.nginx-debian.html.
nano index.nginx-debian.html
  • Remove the extra lines from the body of the HTML and put in an informative message identifying each server.
<h1>Hello From Server 1</h1>


  • Configure the other backend machine the same way, changing the message to identify it (for example, "Hello From Server 2").
  • Now hit the public IP of each machine; you should see the message you configured.
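The page edit above can also be scripted. A minimal sketch follows; it writes to a local ./html directory so the snippet can run anywhere, but on the actual backends SITE_ROOT would be /var/www/html (run as root) and SERVER_ID would be set to 1 or 2 per machine:

```shell
# Generate an identifying landing page for one backend.
# SITE_ROOT and SERVER_ID defaults are illustrative; on a real
# backend use SITE_ROOT=/var/www/html and run as root.
SITE_ROOT="${SITE_ROOT:-./html}"
SERVER_ID="${SERVER_ID:-1}"

mkdir -p "$SITE_ROOT"
cat > "$SITE_ROOT/index.nginx-debian.html" <<EOF
<!DOCTYPE html>
<html>
<body><h1>Hello From Server $SERVER_ID</h1></body>
</html>
EOF
```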


Step 4: Configure Load balancer.

  • On the machine that will act as the load balancer, go to /etc/nginx and open nginx.conf in your favourite editor.
nano nginx.conf
  • In the http block, comment out the two lines below so the default site configuration does not conflict with our server block.
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;


  • Now add an upstream backend block, which specifies the load-balancing algorithm and the backend servers.
  • We will use the least_conn algorithm for load balancing, which sends each request to the server with the fewest active connections. The block should look like the one below.
upstream backend {
    least_conn;
    server <PUBLIC IP OF SERVER 1>;
    server <PUBLIC IP OF SERVER 2>;
}
  • Now add a server block that passes traffic from the load balancer to the backend servers.
server {
    listen 80;
    server_name <PUBLIC IP OF LOAD BALANCER>;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
  • Save and close the configuration file.
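Putting the pieces together, the http block of nginx.conf should now look roughly like the sketch below. The IP placeholders are as before, and default directives that ship in the stock file (gzip, logging, and so on) are omitted for brevity:

```nginx
http {
    # include /etc/nginx/conf.d/*.conf;       # commented out
    # include /etc/nginx/sites-enabled/*;     # commented out

    upstream backend {
        least_conn;
        server <PUBLIC IP OF SERVER 1>;
        server <PUBLIC IP OF SERVER 2>;
    }

    server {
        listen 80;
        server_name <PUBLIC IP OF LOAD BALANCER>;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
```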


  • Restart nginx after saving the configuration.
sudo systemctl restart nginx


Step 5: Test the load balancer.

  • Now hit the load balancer's IP in a browser and you should see the Server 1 message.
  • Refresh the page and you should see the Server 2 message.
  • The responses will keep alternating as the load balancer spreads traffic across servers 1 and 2.
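Refreshing by hand works, but a quick curl loop shows the distribution at a glance. LB_URL below is a placeholder for your load balancer's address; the grep pattern assumes the "Hello From Server N" pages configured earlier:

```shell
# Fetch the page several times and count which backend answered.
# LB_URL is a placeholder -- point it at your load balancer.
LB_URL="${LB_URL:-http://localhost/}"

for i in 1 2 3 4 5 6; do
  curl -s "$LB_URL" | grep -o 'Hello From Server [0-9]*'
done | sort | uniq -c
```

With both backends healthy you should see the request count split roughly evenly between the two messages.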


Conclusion

Thus, we have configured an HTTP load balancer with the help of the NGINX server. We split traffic across two backend servers; more backends can be added by extending the upstream block. Further configuration can be layered on to adapt the load balancer for other purposes.

Using Nginx as an HTTP load balancer – FAQs

Are there alternatives to Nginx for HTTP load balancing?

Yes, there are alternative load balancing solutions, including HAProxy, Apache HTTP Server with mod_proxy, and cloud-specific load balancers provided by cloud service providers (e.g., AWS Elastic Load Balancing, Azure Load Balancer).

Can I use Nginx as a load balancer in a microservices architecture?

Yes, Nginx is well-suited for load balancing in a microservices environment. It can be used to distribute traffic among multiple microservices, providing flexibility and scalability in handling diverse workloads.
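As a sketch, one nginx instance can front several services by pairing each upstream group with a location block. The service names, addresses, and ports below are hypothetical:

```nginx
# Route by URL prefix: each path goes to its own upstream group.
upstream users_service  { server 10.0.0.21:8080; server 10.0.0.22:8080; }
upstream orders_service { server 10.0.0.31:9090; server 10.0.0.32:9090; }

server {
    listen 80;

    location /users/  { proxy_pass http://users_service; }
    location /orders/ { proxy_pass http://orders_service; }
}
```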

How do I monitor and troubleshoot Nginx load balancing?

Nginx provides various log files for monitoring and diagnostics. Additionally, external monitoring tools, Nginx status modules, and access to error logs can be valuable for identifying and resolving issues. Regularly reviewing logs and metrics is essential for effective troubleshooting.

Is Nginx suitable for large-scale deployments and high traffic websites?

Yes, Nginx is renowned for its ability to handle a large number of concurrent connections and high traffic volumes. It is widely used by websites and applications with high traffic demands due to its efficiency and low resource usage.

How does Nginx handle load balancing?

Nginx uses load balancing algorithms (e.g., round-robin, least_conn, ip_hash) to distribute incoming requests among a group of backend servers defined in the configuration. The proxy_pass directive is commonly used to forward requests to the specified upstream group.
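Switching algorithms is a one-line change in the upstream block, and weights can bias the default round-robin toward a larger server. The addresses below are hypothetical:

```nginx
# Default algorithm is round-robin; weight biases the rotation.
upstream backend_weighted {
    server 10.0.0.11 weight=3;   # receives ~3x the requests
    server 10.0.0.12;
}

# ip_hash keeps requests from the same client IP on the same backend.
upstream backend_sticky {
    ip_hash;
    server 10.0.0.11;
    server 10.0.0.12;
}
```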


