
Performance Optimization of Distributed System

Last Updated : 15 Mar, 2022

The term “distributed system” refers to a group of components, situated on various machines, that interact and coordinate their actions so as to appear to the end-user as a single system. This article covers in detail how the performance of distributed systems can be optimized.

Performance Optimization of Distributed Systems:

The following are the parameters that should be taken care of for optimizing performance in Distributed Systems:


  • Serving Multiple Requests Simultaneously: The main issue is the delay incurred while a server waits for a momentarily unavailable resource, or while it executes a remote procedure that requires heavy computation or has significant transmission latency. With multithreading, the server can accept and process other requests during such waits.
  • Reducing Per-Call Workload of Servers: A server’s performance degrades quickly when a large number of client requests arrive and each request requires significant processing. So keep requests brief and minimize the work the server must do per request. Use stateless servers, i.e. servers that do not retain client state between calls.
  • Reply Caching of Idempotent Remote Procedures: When requests arrive at a higher rate than the server can handle, a backlog of unhandled requests builds up, and clients whose timeouts expire retransmit requests that are still queued. For idempotent procedures, the server can keep a reply cache: when a duplicate of an already-served request arrives, the server sends the cached response instead of executing the procedure again.
  • Timeout Values Should Be Carefully Chosen: A timeout that is too small may expire too frequently, resulting in unnecessary retransmissions. A timeout that is too large causes an unnecessarily long delay when a message is genuinely lost.
  • Appropriate Design of RPC Protocol Specifications: The protocol specification should be designed to reduce both the amount of data transferred over the network and the rate (frequency) at which it is sent.
  • Using LRPC (Lightweight Remote Procedure Call) for Cross-Domain Messaging: The LRPC (Lightweight Remote Procedure Call) facility is used in microkernel operating systems to provide cross-domain communication, where the calling and called processes are on the same machine. It employs the following approaches to improve on the performance of conventional RPC:
  • Simple Control Transfer: This approach uses a control-transfer procedure in which the client’s thread executes the requested procedure in the server’s domain. It employs hand-off scheduling, in which a direct context switch takes place from the client thread to the server thread. Before the first call, the client binds to the server’s interface; on each call it provides the server with the argument stack and its own execution thread by trapping to the kernel. The kernel verifies the caller, creates a call linkage, and dispatches the client’s thread directly to the server, which then executes the procedure. On completion, control and results return through the kernel to the caller.
  • Simple Data Transfer: This approach employs a shared argument stack to avoid redundant data copying; “shared” means it is accessible to both the client and the server. In LRPC the arguments are copied only once, from the client’s stack to the shared argument stack. This reduces cost, because data transfer creates fewer copies when moving from one domain to another.
  • Simple Stub: The mechanisms above allow LRPC to generate highly optimized stubs. Every procedure has a call stub in the client’s domain and an entry stub in the server’s domain. The LRPC interface for every procedure follows a three-layered communication protocol:
    •  End-to-end: communication is carried out as defined by the calling conventions
    •  Stub-to-stub: requires the use of stubs
    •  Domain-to-domain: requires kernel implementation
  • The benefit of using LRPC stubs is that the cost of crossing layers is reduced, because the layer boundaries are blurred. The only requirement in a simple LRPC call is one formal procedure call to the client stub and one return from the server procedure through the client stub.
  • Design for Concurrency: To achieve high performance in terms of high call throughput and low call latency, multiple processors with shared memory are used. Throughput is further increased by eliminating unnecessary lock contention and reducing the use of shared data structures, while latency is lowered by decreasing context-switching overhead. LRPC achieves roughly a threefold performance improvement over conventional RPC, and the cost of cross-domain communication is also reduced.
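The first point above, serving multiple requests simultaneously, can be sketched with a thread pool: while one worker thread blocks on a slow resource, the others keep handling incoming requests. This is a minimal Python illustration, where a short sleep stands in for a real remote call or resource wait.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(request_id):
    # Stand-in for a slow remote call or a wait on an unavailable resource.
    time.sleep(0.1)
    return f"response-{request_id}"

# A pool of worker threads lets the server keep accepting requests while
# earlier requests are blocked waiting on slow resources.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, i) for i in range(8)]
    results = [f.result() for f in futures]
```

With four workers, the eight simulated requests overlap instead of being served strictly one after another.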
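Keeping servers stateless, as recommended above, means every request carries all the context the server needs. The file-read handler below is a hypothetical sketch of this idea: the request supplies the offset itself, so the server never tracks a per-client "current position".

```python
# Hypothetical sketch of a stateless handler: the request carries the data
# source, offset, and count, so the server keeps no per-client session state
# and any replica could serve the call.
def stateless_read(request):
    start = request["offset"]
    return request["file"][start:start + request["count"]]

file_contents = "hello distributed world"
req = {"file": file_contents, "offset": 6, "count": 11}
data = stateless_read(req)
```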
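Reply caching of idempotent procedures can be sketched as a table keyed by (client id, request id): a retransmitted request is answered from the cache rather than re-executed. The squaring procedure and the names below are illustrative assumptions, not part of any real RPC library.

```python
# Server-side reply cache keyed by (client_id, request_id).
reply_cache = {}
execution_count = 0  # counts real executions, to show duplicates are skipped

def execute_procedure(x):
    global execution_count
    execution_count += 1
    return x * x  # an idempotent computation

def serve(client_id, request_id, x):
    key = (client_id, request_id)
    if key in reply_cache:            # duplicate (retransmitted) request:
        return reply_cache[key]       # answer from the cache
    reply = execute_procedure(x)      # first arrival: execute and cache
    reply_cache[key] = reply
    return reply

first = serve("client-1", 1, 7)
duplicate = serve("client-1", 1, 7)   # retransmission of the same request
```

Both calls return the same reply, but the procedure body runs only once.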
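Choosing timeout values carefully usually means adapting them to observed round-trip times rather than fixing a constant. The sketch below follows the style of TCP's retransmission-timeout estimator (a smoothed RTT plus a multiple of its variation); the constants are conventional values borrowed from that setting, not taken from this article.

```python
# Adaptive retransmission timeout: track a smoothed round-trip time (srtt)
# and its variation (rttvar), and set the timeout to srtt + K * rttvar, so
# it is neither too small (spurious retransmissions) nor too large (long
# delays when a message is genuinely lost).
ALPHA, BETA, K = 0.125, 0.25, 4.0

srtt = None
rttvar = None

def update_timeout(sample_rtt):
    global srtt, rttvar
    if srtt is None:                  # first measurement initializes both
        srtt, rttvar = sample_rtt, sample_rtt / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample_rtt)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample_rtt
    return srtt + K * rttvar

for rtt in [0.10, 0.12, 0.11, 0.30, 0.12]:
    timeout = update_timeout(rtt)
```

A single slow round trip (the 0.30 s sample) widens the timeout temporarily, and it shrinks again as normal samples return.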
