
Virtualization in Distributed System

Last Updated : 05 Dec, 2022

Virtualization is the process of creating a virtual version of a resource such as a server, desktop, operating system, file, storage device, or network. Its main objective is to manage workloads by transforming conventional computing to make it more scalable. Threads and processes can be thought of as a way to do multiple tasks simultaneously: they let us construct (parts of) programs that appear to execute in parallel. On a single-processor machine, this simultaneous execution is, of course, an illusion. With only one CPU, only one instruction from a single thread or process is executed at any moment. The illusion of parallelism is achieved by rapidly switching back and forth between threads and processes.
This distinction, between having only a single CPU and being able to pretend there are several, can be extended to other resources as well; this is known as resource virtualization.
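The interleaving described above can be observed directly. The sketch below (a hypothetical demo; the names `worker` and `results` are made up for illustration) starts two threads on the same interpreter: each thread records its steps in a shared list, and the scheduler's rapid switching interleaves the two workers' entries.

```python
import threading

results = []

def worker(name, count):
    # Each append is one small step; the scheduler switches between
    # threads, so the two workers' steps interleave in `results`.
    for i in range(count):
        results.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start()
t2.start()
t1.join()
t2.join()

print(len(results))  # 6 entries total; their order depends on scheduling
```

The exact interleaving differs from run to run, which is precisely the point: each thread only ever has one instruction executing at a time, yet both appear to make progress simultaneously.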

The Role of Virtualization in Distributed Systems:

Every (distributed) computer system provides a programming interface to higher-level software, as shown in Fig (a). These interfaces come in many kinds, ranging from the basic instruction set offered by a CPU to the vast collection of application programming interfaces shipped with many modern middleware systems. In essence, virtualization extends or replaces an existing interface so as to mimic the behaviour of another system, as shown in Fig (b).

Fig: (a) General organization between a program, interface, and system. (b) General organization of virtualizing system A on top of system B.


Virtualization can help reduce the diversity of platforms and machines by letting each application run on its own virtual machine, possibly including its associated libraries and operating system, which in turn runs on a common platform. This offers a high degree of portability and flexibility.

Architectures of Virtual Machines:

Understanding the variations in virtualization requires an understanding of the four types of interfaces that computer systems offer, at four different levels:

1. An interface between the hardware and software, consisting of machine instructions that can be invoked by any program.
2. An interface between the hardware and software, consisting of machine instructions that can be invoked only by privileged programs, such as an operating system.
3. An interface consisting of system calls, as offered by an operating system.
4. An interface consisting of library calls, which generally forms what is known as an application programming interface (API).
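The difference between levels 3 and 4 can be made concrete. In the sketch below (a hypothetical demo; the file name `iface_demo.txt` is made up), Python's high-level file API acts as a library interface, while `os.write` is a thin wrapper around the operating system's `write()` system call:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "iface_demo.txt")

# Level 4 (library calls): the high-level file API buffers data in
# user space before eventually handing it to the kernel.
with open(path, "w") as f:
    f.write("hello\n")

# Level 3 (system calls): os.write passes the bytes directly to the
# operating system through the write() system call, unbuffered.
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd, b"world\n")
os.close(fd)

with open(path) as f:
    print(f.read())  # hello\nworld\n
```

Both paths end up at the same system-call interface; the library layer simply adds convenience (buffering, text encoding) on top of it.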

The following figure illustrates these four categories. The essence of virtualization is to mimic the behaviour of these interfaces.

Various interfaces offered by computer systems


Virtualization can take place in one of two ways. First, we can build a runtime system that essentially provides an abstract instruction set to be used for executing applications. Instructions can be emulated, as is done when running Windows applications on UNIX platforms, or they can be interpreted, as is the case for the Java runtime environment. Note that in the former case, the emulator must also mimic the behaviour of system calls, which has historically proven to be anything but easy. Because virtualization here is done only for a single process, this approach is referred to as a process virtual machine.
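A process virtual machine of the interpreting kind can be sketched in a few lines. The toy machine below (all opcode names are made up; real bytecode formats such as the JVM's are far richer) interprets an abstract stack-based instruction set, the way the Java runtime interprets bytecode on behalf of a single process:

```python
def run(program):
    """Interpret a program for a toy stack-based abstract machine."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)          # place a constant on the stack
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)        # replace top two values by their sum
        elif op == "PRINT":
            print(stack[-1])           # observe the top of the stack
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# A "program" for the abstract machine: compute 2 + 3 and print it.
prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]
run(prog)  # prints 5
```

The program never executes native instructions directly; every operation goes through the interpreter loop, which is exactly what makes the abstract instruction set portable across host platforms.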

An alternative approach is to provide a system that is essentially implemented as a layer completely shielding the original hardware, while offering the complete instruction set of that same (or other) hardware as an interface. Such a layer is commonly called a virtual machine monitor (VMM), or hypervisor, and it allows multiple, possibly different operating systems to run independently and concurrently on the same platform.

