
Non Blocking Server in Java NIO


Java NIO (New Input/Output) is a high-performance networking and file-handling API that works as an alternative to the standard Java IO API. It was introduced in JDK 1.4 as a second I/O system alongside standard Java IO, with several advanced features added. It provides enhanced support for file-system features and file handling, and thanks to the capabilities of its file classes it is widely used for file handling. The java.nio package defines the buffer classes that are used throughout the NIO APIs. NIO was developed to allow Java programmers to implement high-speed I/O without writing custom native code.

The primary features of Java NIO are:

  1. Java NIO offers asynchronous, non-blocking IO. For instance, suppose a thread needs some data from a buffer. While the channel reads data into the buffer, the thread can do something else; once the data has been read into the buffer, the thread can continue processing it.
  2. Java NIO has a buffer-oriented approach: data is read into a buffer and cached there, and it is processed from the buffer whenever it is required. A minimal sketch of this read-into-a-buffer pattern follows this list.
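As a quick illustration of the buffer-oriented approach, the following minimal sketch reads a file through a FileChannel into a ByteBuffer and then processes the data out of the buffer. The file name data.txt and the 1 KB buffer size are illustrative assumptions only.

Java

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BufferReadExample {
    public static void main(String[] args) throws IOException {
        // Open a channel to a (hypothetical) file and read it through a buffer
        try (FileChannel channel = FileChannel.open(Path.of("data.txt"), StandardOpenOption.READ)) {
            ByteBuffer buffer = ByteBuffer.allocate(1024);   // the buffer caches the data
            while (channel.read(buffer) != -1) {             // read from the channel into the buffer
                buffer.flip();                               // switch the buffer from writing to reading
                while (buffer.hasRemaining()) {
                    System.out.print((char) buffer.get());   // process data from the buffer
                }
                buffer.clear();                              // make the buffer ready for the next read
            }
        }
    }
}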

Java NIO is built on three main components: Buffers, Channels, and Selectors. A minimal sketch showing how they fit together follows the list below.

  1. Buffers: A buffer is a block of memory used to temporarily store data while it is being moved from one place to another.
  2. Channels: A channel represents a connection to an entity capable of performing I/O operations, such as a file or a socket.
  3. Selectors: A selector is used to select the channels that are ready for an I/O operation.
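The sketch below shows the three components together, assuming port 8080 purely for illustration: a ServerSocketChannel (the channel) is put into non-blocking mode and registered with a Selector, while a ByteBuffer stands ready to hold data moving through the channel.

Java

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class NioComponents {
    public static void main(String[] args) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(256);             // Buffer: temporary in-memory data store

        ServerSocketChannel serverChannel = ServerSocketChannel.open();   // Channel: an I/O connection
        serverChannel.bind(new InetSocketAddress(8080));          // port chosen only for illustration
        serverChannel.configureBlocking(false);                   // required before registering with a selector

        Selector selector = Selector.open();                      // Selector: picks channels ready for I/O
        serverChannel.register(selector, SelectionKey.OP_ACCEPT); // interested in incoming connections

        System.out.println("Channel registered; buffer capacity = " + buffer.capacity());
    }
}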

Blocking vs Non-Blocking Servers

A non-blocking server is able to have multiple requests in progress at the same time within the same process or thread because it uses non-blocking I/O. Let's understand the difference with a simple analogy:

A blocking process is like a queue at a ticket counter: customers are served one after another, and no one advances until the person at the front of the queue has been served. A non-blocking process is more like a waiter at a restaurant, serving several tables at once by cycling through them and handling each meal as it becomes ready.

A. Blocking Servers

A server using a blocking socket works in a synchronous manner: it completes one request before serving the next, and this is where multi-threading comes into the picture. Suppose there are three requests in the queue; the server responds to the first, then the second, and so on, which makes clients wait longer for a response. As shown in the picture, a separate thread is created to serve each and every request, which is resource-intensive.

Picture depicting a server using a blocking socket
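For comparison, here is a minimal sketch of a blocking, thread-per-connection echo server built on the classic java.io/java.net API. The port number 9090 and the echo protocol are illustrative assumptions.

Java

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingEchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090)) {    // the port number is illustrative
            while (true) {
                Socket client = server.accept();                // blocks until a client connects
                new Thread(() -> handle(client)).start();       // one thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {            // blocks until the client sends a line
                out.println("echo: " + line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}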

Blocking system calls are those that suspend the calling process, or put it on a wait queue, until the event on which the call was blocked occurs; only then is the blocked process woken up and made ready for execution. For example, when a thread invokes such a call, execution transfers to kernel mode and the call blocks until a certain event completes: the calling thread, executing in kernel mode, is blocked and placed on a waiting queue. Once the event completes, the thread wakes up and is put back on the CPU's ready queue.

B. Non-Blocking Servers

In the non-blocking approach, one thread can handle multiple requests at a time. A server using a non-blocking socket works in an asynchronous manner: once a request is received, it lets the system perform the task and keeps accepting other requests, then responds to each request as its task completes. As shown in the picture, there is only one thread, and the whole process is driven by a concept called the "Event Loop". The system is never left idle waiting for one task to finish before taking other requests.

Picture depicting a server using a non-blocking socket

For contrast, consider a blocking read() system call that reads data from a file into a buffer allocated by the user program. The call returns to user space only after the user buffer has been filled with the desired data; the thread is essentially put to sleep and is awakened only once the disk controller has completed the I/O from the storage device and placed the data into the kernel-space buffer.

Tip: Non-blocking IO is implemented in the java.nio package, using which we can create non-blocking servers.
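Below is a minimal sketch of such a non-blocking server: a single thread runs an event loop around a Selector, accepting connections and echoing back whatever each client sends. The port number and the single shared read buffer are simplifying assumptions; a production server would also keep per-connection state.

Java

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NonBlockingEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(9090));        // the port number is illustrative
        serverChannel.configureBlocking(false);
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(256);           // one shared buffer keeps the sketch short

        while (true) {                                          // the "event loop"
            selector.select();                                  // wait until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();

                if (key.isAcceptable()) {                       // a new client connection is ready
                    SocketChannel client = serverChannel.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                  // a client has sent data
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close();                         // the client closed the connection
                    } else if (read > 0) {
                        buffer.flip();
                        client.write(buffer);                   // echo the bytes straight back
                    }
                }
            }
        }
    }
}

Note that on a busy socket client.write(buffer) may not drain the whole buffer in one call; the usual remedy is to register the channel for OP_WRITE and finish the write when the selector reports it writable.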

Hypothetical Server’s Read/Write Operations 

As discussed above, in the non-blocking model a single process is able to handle multiple concurrent requests by interleaving non-blocking IO calls across all the requests. A non-blocking read does not make the process wait for the other end to send data; it returns immediately and indicates when there is nothing to read yet. The process can therefore accept a new request, read from another client, or check whether the database has returned results.
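A minimal sketch of that behaviour is shown below, assuming channel is a SocketChannel already configured as non-blocking (the class and method names are hypothetical): the return value of read() tells the caller whether data arrived, nothing is available yet, or the peer closed the connection.

Java

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingReadCheck {
    // Returns true while the connection is still open; the channel must already be non-blocking.
    static boolean readIfAvailable(SocketChannel channel, ByteBuffer buffer) throws IOException {
        int bytesRead = channel.read(buffer);   // returns immediately, never waits for the peer
        if (bytesRead > 0) {
            buffer.flip();                      // a (possibly partial) chunk arrived; caller can process it
            return true;
        }
        if (bytesRead == 0) {
            return true;                        // nothing to read yet: go serve another connection
        }
        channel.close();                        // bytesRead == -1: the peer closed the connection
        return false;
    }
}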

Blocking Model

Let’s first take the example of the Blocking model in which a single thread/process inside the server handles two requests:

  1. request A connects
  2. thread accepts connection A
  3. read request A
  4. request B connects
  5. still reading from A
  6. parse request A
  7. fetching data from the database for A
  8. write results to A
  9. close connection A
  10. thread accepts connection B
  11. read request B
  12. parse request B
  13. fetching data from the database for B
  14. write results to B
  15. close connection B

Here request B actually connects in step 4, but its connection cannot be accepted until step 10. The connection is accepted only after request A has been fulfilled and its connection terminated; the attention given to request A is never interrupted to handle request B. The process can therefore only handle a single request at a time.

Non-Blocking Model

Now let’s take the example of the Non-Blocking model in which a single thread/process inside the server handles two requests:

  1. request A connects
  2. thread accepts connection A
  3. read request A
  4. request B connects
  5. thread accepts connection B
  6. still reading from A
  7. read request B
  8. parse request A
  9. still reading from B
  10. fetching data from the database for A
  11. write results to A
  12. parse request B
  13. fetching data from the database for B
  14. close connection A
  15. write results to B
  16. close connection B

Here request B connects in step 4 and is immediately accepted by the thread in step 5. The processing of request B is then interleaved with the processing of request A. The process can handle two requests at a time because it allocates space for two connections and two requests.

Java NIO allows multiple channels to be managed using only a single thread. A Java NIO channel connects a buffer to an entity at the other end; in other words, channels are used to read data into a buffer and to write data from the buffer. A Java NIO channel supports the asynchronous flow of data in both blocking and non-blocking mode.

SocketChannel and ServerSocketChannel are the two classes that implement the Java NIO channel for TCP; they can read and write data over TCP connections.
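As a small sketch of writing over such a channel (the class and method names below are hypothetical), note that a non-blocking write() may accept only part of the buffer in a single call:

Java

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class ChannelWriter {
    // Writes as much of the message as the socket currently accepts; the channel is assumed non-blocking.
    static void writeMessage(SocketChannel channel, String message) throws IOException {
        ByteBuffer out = ByteBuffer.wrap(message.getBytes(StandardCharsets.UTF_8));
        while (out.hasRemaining()) {
            int written = channel.write(out);   // may write fewer bytes than remain in the buffer
            if (written == 0) {
                // the socket's send buffer is full: a real server would register for OP_WRITE
                // and resume writing once the selector reports the channel as writable
                break;
            }
        }
    }
}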

Composition of Non-Blocking Servers 

Non-blocking servers internally contain a non-blocking IO pipeline: a chain of components that perform both reading and writing IO in a non-blocking fashion.

Picture depicting the design of a non-blocking IO pipeline

As shown in the above image, a component uses a Selector to check when a Channel has data to read. The component then reads the input data and generates output based on that input, and the output is written back to a Channel. A non-blocking IO pipeline may perform both read and write operations, or only one of the two. The component initiates the reading of data from the Channel via the Selector; because Java NIO performs IO operations in a non-blocking way, selectors, selection keys, and selectable channels together define this multiplexed IO.

Non-blocking IO pipelines read data from a socket or file and split that data into logically ordered, coherent messages, much like breaking a stream of data into tokens for parsing with Java's StreamTokenizer class. A blocking IO pipeline, by contrast, can use an InputStream-like interface where bytes are read from the Channel one at a time and each read blocks until data is ready.
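As an illustration of that splitting step, the following sketch (the class name and the newline-delimited, ASCII protocol are assumptions, not part of the NIO API) accumulates the bytes produced by successive non-blocking reads and hands back only complete messages:

Java

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class MessageSplitter {
    private final StringBuilder partial = new StringBuilder();   // holds an incomplete trailing message

    // Takes the bytes just read from a channel and returns any complete, newline-delimited messages.
    List<String> feed(ByteBuffer readBuffer) {
        readBuffer.flip();                                       // switch the buffer to reading mode
        while (readBuffer.hasRemaining()) {
            partial.append((char) readBuffer.get());             // assumes a simple ASCII, line-based protocol
        }
        readBuffer.clear();

        List<String> messages = new ArrayList<>();
        int newline;
        while ((newline = partial.indexOf("\n")) != -1) {
            messages.add(partial.substring(0, newline));         // one full message is ready
            partial.delete(0, newline + 1);                      // keep the remainder for the next read
        }
        return messages;
    }
}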

The non-blocking model uses the Java NIO Selector to check for and supply only those SelectableChannel instances that actually have data to read, which avoids polling streams that have 0 bytes available.

Non-Blocking Pipelines

In general, we can assume a non-blocking server winds up with three “pipelines” that are executed repeatedly in a loop:

  1. The read pipeline checks for new incoming data or any new full messages from the open connections.
  2. The processing pipeline processes any incoming full messages.
  3. The write pipeline checks if it can write any outgoing messages to any of the open connections.

If there are no outgoing messages lined up, the write pipeline can be skipped, and if there is no new incoming data and no new full messages, the processing pipeline can be skipped.
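A minimal sketch of that outer loop is shown below. The Message type and the three pipeline methods are hypothetical placeholders standing in for the stages described above; they are not part of the Java NIO API.

Java

import java.util.ArrayList;
import java.util.List;

public class NonBlockingServerLoop {
    record Message(String payload) { }                      // placeholder message type

    private final List<Message> outgoing = new ArrayList<>();

    void run() {
        while (true) {
            List<Message> incoming = readPipeline();        // 1. collect new full messages from open connections
            if (!incoming.isEmpty()) {
                outgoing.addAll(processPipeline(incoming)); // 2. process full messages into responses
            }
            if (!outgoing.isEmpty()) {
                writePipeline(outgoing);                    // 3. write whatever the sockets will currently accept
            }
        }
    }

    // The bodies below are stubs; a real server would drive them with a Selector as shown earlier.
    private List<Message> readPipeline() { return new ArrayList<>(); }
    private List<Message> processPipeline(List<Message> in) { return in; }
    private void writePipeline(List<Message> out) { out.clear(); }
}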


