Grid Computing
  • Difficulty Level : Medium
  • Last Updated : 10 Oct, 2019

Grid Computing can be defined as a network of computers working together to perform a task that would be difficult for a single machine. All machines on that network work under the same protocol to act as a virtual supercomputer. The tasks they work on may include analysing huge datasets or running simulations that require high computing power. Computers on the network contribute resources like processing power and storage capacity to the network.

Grid Computing is a subset of distributed computing, where a virtual supercomputer comprises machines on a network connected by some bus, mostly Ethernet or sometimes the Internet. It can also be seen as a form of Parallel Computing where, instead of many CPU cores on a single machine, the cores are spread across multiple locations. The concept of grid computing isn't new, but it is not yet perfected, as there are no standard rules and protocols that have been widely established and accepted.

A grid computing network mainly consists of these three types of machines:

  1. Control Node:
    A computer, usually a server or a group of servers, which administers the whole network and keeps account of the resources in the network pool.
  2. Provider:
    A computer that contributes its resources to the network resource pool.
  3. User:
    A computer that uses the resources on the network.

When a computer makes a request for resources to the control node, the control node gives the user access to the resources available on the network. When a computer is not in use, it should ideally contribute its resources to the network. Hence a normal computer on the network can alternate between being a user and a provider based on its needs. The nodes may consist of machines with similar platforms running the same OS (homogeneous networks) or machines with different platforms running various operating systems (heterogeneous networks). This ability to span heterogeneous machines is what distinguishes grid computing from other distributed computing architectures.
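The request flow above can be sketched as a control node that keeps a ledger of the cores each provider has offered and grants them on request. This is a minimal illustration, not real grid middleware; the class and method names (`ControlNode`, `register_provider`, `request_resources`) are made up for the example.

```python
class ControlNode:
    """Toy control node: tracks contributed resources and grants them."""

    def __init__(self):
        self.pool = {}  # node_id -> idle CPU cores offered to the grid

    def register_provider(self, node_id, free_cores):
        # A node contributes its unused cores to the shared pool.
        self.pool[node_id] = free_cores

    def request_resources(self, user_id, cores_needed):
        # Grant cores from providers that still have spare capacity.
        grant = {}
        for node_id, free in self.pool.items():
            if cores_needed == 0:
                break
            if node_id == user_id or free == 0:
                continue  # a node does not serve its own request here
            take = min(free, cores_needed)
            self.pool[node_id] -= take
            grant[node_id] = take
            cores_needed -= take
        return grant  # empty dict if nothing was available

grid = ControlNode()
grid.register_provider("node-a", 4)
grid.register_provider("node-b", 2)
print(grid.request_resources("node-c", 5))  # {'node-a': 4, 'node-b': 1}
```

Note that the same machine can appear in the pool as a provider and later call `request_resources` as a user, mirroring the role-switching described above.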

For controlling the network and its resources, a software/networking protocol generally known as middleware is used. The middleware is responsible for administering the network, and the control nodes are merely its executors. Since a grid computing system should use only the unused resources of a computer, it is the control node's job to ensure that no provider is overloaded with tasks.
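One simple way to honour the "no provider is overloaded" rule is to always dispatch the next task to the least-loaded provider. The sketch below assumes load is measured as a task count per provider; the names (`pick_provider`, `dispatch`) are illustrative, not from any real middleware.

```python
def pick_provider(load):
    """Return the id of the provider with the lowest current load."""
    return min(load, key=load.get)

def dispatch(tasks, load):
    """Assign each task to the least-loaded provider, updating loads."""
    assignment = {}
    for task in tasks:
        provider = pick_provider(load)
        assignment[task] = provider
        load[provider] += 1  # account for the newly placed task
    return assignment

load = {"p1": 0, "p2": 0, "p3": 2}  # p3 starts busier than the others
print(dispatch(["t1", "t2", "t3"], load))
# {'t1': 'p1', 't2': 'p2', 't3': 'p1'} -- p3 is skipped while busier
```

Real middleware would also weigh CPU speed, memory, and whether the owner is actively using the machine, but the balancing idea is the same.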

Another job of the middleware is to authorize any process that is executed on the network. In a grid computing system, a provider gives the user permission to run code on its computer, which is a significant security threat to the network. The middleware should therefore ensure that no unwanted task is executed on the network.
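A common way to express such an authorization check is an allow-list: the middleware refuses any task whose command is not explicitly permitted. The command names below are made up purely for illustration.

```python
# Hypothetical allow-list of task commands the middleware will accept.
ALLOWED_COMMANDS = {"render_frame", "fold_protein", "matrix_multiply"}

def authorize(task_command):
    """Reject any task not explicitly permitted by the middleware."""
    return task_command in ALLOWED_COMMANDS

assert authorize("fold_protein")        # known workload: permitted
assert not authorize("rm_rf_slash")     # unknown/unsafe task: refused
```

Production grids layer much more on top of this (user certificates, sandboxing, code signing), but the gatekeeping principle is the same.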

The meaning of the term Grid Computing has changed over the years. According to “The Grid: Blueprint for a New Computing Infrastructure” by Ian Foster and Carl Kesselman, published in 1999, the idea was to consume computing power the way electricity is consumed from a power grid. That idea is closer to the current concept of cloud computing, whereas grid computing is now viewed as a distributed collaborative network. Currently, grid computing is used in various institutions to solve many mathematical, analytical, and physics problems.

Advantages of Grid Computing:

  1. It is not centralized: no dedicated servers are required for processing, except the control node, which is used only for coordination.
  2. Multiple heterogeneous machines, i.e. machines with different operating systems, can use a single grid computing network.
  3. Tasks can be performed in parallel across various physical locations, and the users don’t have to pay for it (with money).