
How to create load balancing servers using Node.js ?

Last Updated : 28 Feb, 2023

If your website or application doesn’t receive much traffic, you don’t need load balancing. But once it becomes popular and starts receiving a high volume of requests, a single underlying server may not be able to handle the load, because a single NodeJS process cannot scale indefinitely to serve a very large amount of traffic.

Adding more machines (or more processes) can solve this problem, but in order to share traffic across all of your application servers, a load balancer is required.

Load balancer: A load balancer acts as a traffic cop sitting in front of your application servers. It routes client requests across all servers capable of fulfilling them, in a way that maximizes speed and capacity utilization and ensures that no single server is overworked, which would degrade performance.

How to set up a load balancing server?

1. Using the Cluster Module: NodeJS has a built-in module called cluster that takes advantage of multi-core systems. Using this module, you can launch one NodeJS worker process per core. The master process listens on a port, accepts client requests, and distributes them across the workers (round-robin by default on most platforms). In this way, you can utilize the full processing capacity of your machine.

The following example shows the performance difference with and without the Cluster Module.

Without the Cluster Module: 

Make sure you have installed the express module using the following command (crypto is built into NodeJS and does not need to be installed separately):

npm install express

index.js

Javascript
const { generateKeyPair } = require('crypto');
const app = require('express')();
 
// API endpoint
// Send public key as a response
app.get('/key', (req, res) => {
  generateKeyPair('rsa', {
    modulusLength: 2048,
    publicKeyEncoding: {
      type: 'spki',
      format: 'pem'
    },
    privateKeyEncoding: {
      type: 'pkcs8',
      format: 'pem',
      cipher: 'aes-256-cbc',
      passphrase: 'top secret'
    }
  }, (err, publicKey, privateKey) => {
 
    // Handle errors, then send the
    // generated public key
    if (err) return res.status(500).send('Key generation failed');
    res.send(publicKey);
  })
})
 
app.listen(3000, err => {
  err ?
    console.log("Error in server setup") :
    console.log('Server listening on PORT 3000')
});


 
Run the index.js file using the following command:

node index.js

Output: We will see the following output on the terminal screen:

Server listening on PORT 3000

Now open your browser and go to http://localhost:3000/key, you will see the following output:

-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwAneYp5HlT93Y3ZlPAHjZAnPFvBskQKKfo4an8jskcgEuG85KnZ7/16kQw2Q8/7Ksdm0sIF7qmAUOu0B773X 1BXQ0liWh+ctHIq/C0e9eM1zOsX6vWwX5Y+WH610cpcb50ltmCeyRmD5Qvf+OE/C BqYrQxVRf4q9+029woF84Lk4tK6OXsdU+Gdqo2FSUzqhwwvYZJJXhW6Gt259m0wD YTZlactvfwhe2EHkHAdN8RdLqiJH9kZV47D6sLS9YG6Ai/HneBIjzTtdXQjqi5vF Y+H+ixZGeShypVHVS119Mi+hnHs7SMzY0GmRleOpna58O1RKPGQg49E7Hr0dz8eh 6QIDAQAB -----END PUBLIC KEY-----

The above code listens on port 3000 and sends the public key as a response. Generating an RSA key pair is CPU-intensive work, and here only one NodeJS instance is working on a single core. To measure performance, we used the autocannon tool to load-test the server, as shown below:

The benchmark shows that the server responded to about 2000 requests when running 500 concurrent connections for 10 seconds, averaging 190.1 requests/second. 
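For reference, an autocannon run matching the settings described above could look like the following. This is a hypothetical invocation: it assumes the index.js server above is already running locally on port 3000 and that the autocannon package is available via npx.

```shell
# Hypothetical benchmark: 500 concurrent connections (-c)
# for 10 seconds (-d) against the /key endpoint
npx autocannon -c 500 -d 10 http://localhost:3000/key
```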

With the Cluster Module:

Javascript
const express = require('express');
const cluster = require('cluster');
const { generateKeyPair } = require('crypto');
 
// Check the number of available CPU.
const numCPUs = require('os').cpus().length;
 
const app = express();
const PORT = 3000;
 
// For the Master process
// (since Node 16, cluster.isPrimary is the preferred alias)
if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);
 
  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
 
  // This event fires when a worker dies
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
}
 
// For Worker
else {
  // Workers can share any TCP connection
  // In this case it is an HTTP server
  app.listen(PORT, err => {
    err ?
      console.log("Error in server setup") :
      console.log(`Worker ${process.pid} started`);
  });
 
  // API endpoint
  // Send public key
  app.get('/key', (req, res) => {
    generateKeyPair('rsa', {
      modulusLength: 2048,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem'
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret'
      }
    }, (err, publicKey, privateKey) => {
 
      // Handle errors, then send the
      // generated public key
      if (err) return res.status(500).send('Key generation failed');
      res.send(publicKey);
    })
  })
}


 
 

Run the index.js file using the following command:
 

node index.js

Output: We will see the following output on the terminal screen:

Master 16916 is running
Worker 6504 started
Worker 14824 started
Worker 20868 started
Worker 12312 started
Worker 9968 started
Worker 16544 started
Worker 8676 started
Worker 11064 started

Now open your browser and go to http://localhost:3000/key, you will see the following output:

-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzxMQp9y9MblP9dXWuQhf sdlEVnrgmCIyP7CAveYEkI6ua5PJFLRStKHTe3O8rxu+h6I2exXn92F/4RE9Yo8EOnrUCSlqy9bl9qY8D7uBMWir0I65xMZu3rM9Yxi+6gP8H4CMDiJhLoIEap+d9Czr OastDPwI+HF+6nmLkHvuq9X5aORvdiOBwMooIoiRpHbgcHovSerJIfQipGs74IiR 107GbpznSUxMIuwV1fgc6mAULuGZl+Daj0SDxfAjk8KiHyXbfHe5stkPNOCWIsbAtCbGN0bCTR8ZJCLdZ4/VGr+eE0NOvOrElXdXLTDVVzO5dKadoEAtzZzzuQId2P/z JwIDAQAB -----END PUBLIC KEY-----

The above NodeJS application launches a worker on each core of the system. The master process accepts requests and distributes them across all workers. The performance in this case is shown below:

The benchmark shows that the server responded to about 5000 requests when running 500 concurrent connections for 10 seconds, averaging 162.06 requests/second.

So, using the cluster module, you can handle more requests. But sometimes that is not enough; if that is your case, your next option is horizontal scaling.

2. Using Nginx: If your system has more than one application server and you need to distribute client requests across all of them, you can use Nginx as a reverse proxy. Nginx sits in front of your server pool and distributes requests according to a configurable strategy. 

In the following example, we run 4 instances of the same NodeJS application on different ports; you could equally use separate servers.

The file name is index.js

Javascript
const app = require('express')();
 
// API endpoint
app.get('/', (req,res)=>{
    res.send("Welcome to GeeksforGeeks !");
})
 
// Launching application on several ports
app.listen(3000);
app.listen(3001);
app.listen(3002);
app.listen(3003);


Now install Nginx on your machine and create a new file in /etc/nginx/conf.d/ called your-domain.com.conf with the following code in it. 

upstream my_http_servers {
    # httpServer1 listens to port 3000
    server 127.0.0.1:3000;

    # httpServer2 listens to port 3001
    server 127.0.0.1:3001;

    # httpServer3 listens to port 3002
    server 127.0.0.1:3002;

    # httpServer4 listens to port 3003
    server 127.0.0.1:3003;
}
server {
    listen 80;
    server_name your-domain.com www.your-domain.com;
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_pass         http://my_http_servers;
    }
}
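By default, Nginx distributes requests to the servers in an upstream block in round-robin order. If that is not a good fit for your workload, the upstream block can be tuned with other strategies; the following is a sketch using the same local servers as above, with two of Nginx's built-in options:

```nginx
upstream my_http_servers {
    # Send each request to the server with the
    # fewest active connections
    least_conn;

    # A weighted server receives proportionally more requests
    server 127.0.0.1:3000 weight=2;
    server 127.0.0.1:3001;
}
```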

3. Using an Express Web Server: There are many advantages to an Express web server. If you are comfortable with NodeJS, you can implement your own Express-based load balancer as shown in the following example.

Step 1: Create an empty NodeJS application.

mkdir LoadBalancer
cd LoadBalancer
npm init -y

Step 2: Install required dependencies like ExpressJS, axios, and Concurrently using the following command.

npm i express axios
npm i concurrently -g

Step 3: Create two files: config.js for the load balancer server and index.js for the application server.

Here filename is config.js

Javascript
const express = require('express');
const path = require('path');
const app = express();
const axios = require('axios');
 
// Application servers
// (the two index.js instances below, on ports 3000 and 3001)
const servers = [
    'http://localhost:3000',
    'http://localhost:3001'
]
 
// Track the current application server to send request
let current = 0;
 
// Receive new request
// Forward to application server
const handler = async (req, res) =>{
 
    // Destructure following properties from request object
    const { method, url, headers, body } = req;
 
    // Select the current server to forward the request
    const server = servers[current];
 
    // Update track to select next server
    current === (servers.length-1)? current = 0 : current++
 
    try{
        // Requesting to underlying application server
        const response = await axios({
            url: `${server}${url}`,
            method: method,
            headers: headers,
            data: body
        });
        // Send back the response data
        // from application server to client
        res.send(response.data)
    }
    catch(err){
        // Send back the error message
        res.status(500).send("Server error!")   
    }
}
 
// Intercept favicon requests so they are
// not forwarded to an application server
app.get('/favicon.ico', (req, res) => res.sendStatus(204));
 
// When receive new request
// Pass it to handler method
app.use((req,res)=>{handler(req, res)});
 
// Listen on PORT 8080
app.listen(8080, err =>{
    err ?
    console.log("Failed to listen on PORT 8080"):
    console.log("Load Balancer Server "
          + "listening on PORT 8080");
});


 
Here, the filename is index.js

Javascript
const express = require('express');
const app1 = express();
const app2 = express();
 
// Handler method
const handler = num => (req,res)=>{
    const { method, url, headers, body } = req;
    res.send('Response from server ' + num);
}
 
// Only handle GET and POST requests
// Receive request and pass to handler method
app1.get('*', handler(1)).post('*', handler(1));
app2.get('*', handler(2)).post('*', handler(2));
 
// Start server on PORT 3000
app1.listen(3000, err =>{
    err ?
    console.log("Failed to listen on PORT 3000"):
    console.log("Application Server listening on PORT 3000");
});
 
// Start server on PORT 3001
app2.listen(3001, err =>{
    err ?
    console.log("Failed to listen on PORT 3001"):
    console.log("Application Server listening on PORT 3001");
});


Explanation: The above code starts two Express apps, one on port 3000 and another on port 3001. The separate load balancer process alternates between these two, sending one request to port 3000, the next request to port 3001, and the next one back to port 3000.
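The alternation comes from the wrap-around counter in config.js. The selection logic can be sketched in isolation like this, using the same two local URLs assumed above:

```javascript
// Round-robin selection over a fixed server list:
// the counter wraps back to 0 after the last server.
const servers = ['http://localhost:3000', 'http://localhost:3001'];
let current = 0;

const nextServer = () => {
  const server = servers[current];
  current = (current + 1) % servers.length;
  return server;
};

// Three consecutive picks alternate between the two servers
const picks = [nextServer(), nextServer(), nextServer()];
console.log(picks.join(', '));
// → http://localhost:3000, http://localhost:3001, http://localhost:3000
```

The modulo form `(current + 1) % servers.length` is equivalent to the ternary used in config.js; both reset the index once it passes the last server.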

Step 4: Open a command prompt on your project folder and run two scripts parallel using concurrently.

concurrently "node config.js" "node index.js" 

Output: 

We will see the following output on the console:

Now, open a browser and go to http://localhost:8080/ and make a few requests, we will see the following output:


