
Lightweight Remote Procedure Call in Distributed System

Last Updated : 14 Mar, 2022

Lightweight Remote Procedure Call (LRPC) is a communication facility designed and optimized for cross-domain communication in microkernel operating systems. To achieve better performance than conventional RPC systems, LRPC uses four techniques: simple control transfer, simple data transfer, simple stubs, and design for concurrency.

In this article, we will go through the concept of Lightweight Remote Procedure Call (LRPC) in distributed systems in detail.

A lightweight RPC is a type of RPC in which the calling and called processes both run on the same machine. In a distributed system, there are two types of communication:

  • Message Passing: Processes communicate by passing messages over a network, possibly across heterogeneous platforms. This mechanism handles cross-machine traffic.

(Figure: Message Passing in LRPC)

  • Shared Memory: Memory is not physically shared; instead, a distributed shared memory abstraction lets end-user processes access shared data without explicit inter-process communication. This mechanism handles cross-domain traffic (between domains on the same machine).

(Figure: Shared Memory in LRPC)
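As a rough illustration of the two mechanisms on a single machine, the sketch below shows message passing via a socket pair next to direct access to a common buffer standing in for shared memory. It is purely illustrative, not an LRPC implementation:

```python
import socket

# --- Message passing: data is copied through the kernel's socket buffers ---
a, b = socket.socketpair()
a.sendall(b"ping")           # sender copies the message into the kernel
msg = b.recv(4)              # receiver copies it back out

# --- Shared memory: both parties reference the same buffer, nothing is sent ---
region = bytearray(4)
region[:] = b"pong"          # "client" writes the message in place
reply = bytes(region)        # "server" reads the very same bytes

print(msg, reply)            # -> b'ping' b'pong'
a.close()
b.close()
```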

If both processes reside on the same machine and can use a shared memory location, a full RPC system can be avoided. The optimization is to build the message in a buffer and then simply write it to the shared memory area.
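A minimal sketch of this optimization, using Python's `multiprocessing.shared_memory`; the 8-byte message layout and the helper names `write_call`/`read_call` are invented for illustration:

```python
import struct
from multiprocessing import shared_memory

def write_call(shm_name, proc_id, arg):
    """Marshal a (procedure id, argument) pair directly into the shared segment."""
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[:8] = struct.pack("ii", proc_id, arg)
    shm.close()

def read_call(shm_name):
    """Unmarshal the pair on the receiving side; no socket is involved."""
    shm = shared_memory.SharedMemory(name=shm_name)
    proc_id, arg = struct.unpack("ii", bytes(shm.buf[:8]))
    shm.close()
    return proc_id, arg

seg = shared_memory.SharedMemory(create=True, size=8)
write_call(seg.name, 7, 42)     # "client" builds the message in place
print(read_call(seg.name))      # "server" reads it: (7, 42)
seg.close()
seg.unlink()
```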

When the client and server both execute on the same machine and RPC calls are made between two components on that machine, the following considerations let this approach outperform the standard RPC approach:

  • There is no need for marshaling in this situation.
  • We can eliminate explicit message passing; shared memory is employed as the means of communication instead.
  • The stub can utilize the run-time flag to determine if TCP/IP or shared memory should be used.
  • It is not necessary to use eXternal Data Representation (XDR).
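These considerations can be sketched as a client stub that consults a run-time flag. All function names here are hypothetical, and the remote path is stubbed out rather than actually opening a TCP/IP connection:

```python
def remote_add(a, b):
    # Stand-in for the full RPC path: marshal, send over TCP/IP, unmarshal.
    return a + b

def local_add(a, b):
    # Fast path: a direct call, with no marshalling and no XDR encoding.
    return a + b

def add_stub(a, b, *, same_machine):
    # The stub consults a flag fixed at bind time and takes the cheap path
    # when client and server share a machine.
    return local_add(a, b) if same_machine else remote_add(a, b)

print(add_stub(2, 3, same_machine=True))    # -> 5
```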

Steps for LRPC Execution:

  • The arguments of the caller process are pushed onto its stack.
  • A trap is sent to the kernel.
  • The kernel either copies the arguments into an explicitly shared memory region, or takes the page holding the stack and turns it into a shared page.
  • The client’s thread executes the requested procedure in the server’s domain (an OS upcall); when the work is finished, the thread traps back into the kernel.
  • The kernel updates the address space again and transfers control back to the client.
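The steps above can be mimicked in a toy, single-process sketch; `kernel_trap`, `server_sum`, and `client_call` are invented names standing in for the kernel trap, the server procedure, and the client call:

```python
def kernel_trap(arg_stack, server_proc):
    # Step 3: the kernel makes the argument page visible to the server
    shared_page = arg_stack                  # shared with the server, not copied
    # Step 4: the server procedure runs on the client's own thread (upcall)
    result = server_proc(shared_page)
    # Step 5: the kernel switches the address space back and returns control
    return result

def server_sum(args):
    # The "server" procedure, executed in the server's domain
    return sum(args)

def client_call(*args):
    arg_stack = list(args)                       # step 1: push the arguments
    return kernel_trap(arg_stack, server_sum)    # step 2: trap to the kernel

print(client_call(1, 2, 3))    # -> 6
```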

LRPC is a secure and transparent communication facility for microkernel operating systems. It is employed in small-kernel operating systems to save the costs otherwise incurred on cross-domain RPC calls.

LRPC employs the following strategies to improve the performance of traditional RPC systems:

  • Simple Control Transfer
  • Simple Data Transfer
  • Simple Stubs
  • Design for Concurrency

 

  • Simple Control Transfer: In this strategy, the client’s thread runs the requested procedure in the server’s domain. A direct context switch from the client thread to the server, using a thread scheduling mechanism known as handoff scheduling, carries an LRPC call. It works as follows: before making its first call, the client binds to a server interface. When calling the server, the client provides an argument stack along with its own thread of execution, which traps into the kernel. The kernel verifies the caller, creates a call linkage, and dispatches the client’s thread directly into the server’s domain, where the server code executes. When the called procedure completes, control and results return to the client’s point of call.
  • Simple Data Transfer: Parameters are passed much as in a local procedure call, but a shared argument stack is used to avoid redundant data copying. With a conventional messaging mechanism for passing arguments and results, a cross-domain RPC copies the argument data four times:
    • from the client’s stack into the RPC message (by the client stub),
    • from the client’s message into the kernel,
    • from the kernel into the server’s domain,
    • from the message onto the server’s stack (by the server stub).
  • Because LRPC uses a shared argument stack, accessible to both the client and the server, the same arguments are copied only once: from the client’s stack onto the shared argument stack. Fewer copies during the transfer from one domain to another reduce the cost of data movement, and the pairwise allocation of argument stacks gives LRPC a secure, private channel between client and server.
  • Simple Stubs: Because the control and data transfer mechanisms are so simple, LRPC can generate highly optimized stubs. Every procedure has a call stub in the client’s domain and an entry stub in the server’s domain. Each procedure in an LRPC interface follows a three-layered communication protocol:
    • end-to-end, defined by the calling conventions,
    • stub-to-stub, implemented by the stubs,
    • domain-to-domain, implemented by the kernel.
  • LRPC stubs blur the boundaries between these protocol layers so that the cost of crossing them is reduced. A simple LRPC requires only one formal procedure call (into the client stub) and one return each from the server procedure and the client stub.
  • Design for Concurrency: LRPC exploits shared memory and multiple processors to achieve high call throughput and low call latency. Throughput is raised by minimizing the use of shared data structures and eliminating unnecessary lock contention, while latency is lowered by reducing context-switching overhead. Together, these techniques improve performance by roughly a factor of three over conventional RPC and lower the cost of cross-domain communication.
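The lock-avoidance part of this design can be illustrated with per-processor (here, per-thread) counters in place of one lock-protected shared counter. This sketches the design principle only, not LRPC’s actual data structures:

```python
import threading

N_THREADS, CALLS_PER_THREAD = 4, 1000

# One counter slot per "processor": no thread ever touches another thread's
# slot, so no lock is needed and there is no contention on shared state.
per_thread_counts = [0] * N_THREADS

def worker(tid):
    for _ in range(CALLS_PER_THREAD):
        per_thread_counts[tid] += 1    # private slot, contention-free

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Totals are combined only once, at the end.
print(sum(per_thread_counts))    # -> 4000
```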
