
Difference between Shared Nothing Architecture and Shared Disk Architecture

1. Shared Nothing Architecture : 
Shared nothing architecture is an architecture used in distributed computing in which each node is independent and the different nodes are interconnected by a network. Every node consists of a processor, main memory, and disk. The main motive of this architecture is to remove contention among nodes. The nodes do not share memory or storage; instead, each node has its own disk, which cannot be accessed by any other node. It works effectively in high-volume, read-write environments.
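
To make the idea concrete, here is a minimal sketch in Python (the Node and SharedNothingCluster classes are invented purely for illustration, not taken from any real system) of how a shared-nothing cluster hashes each key to exactly one node, so that no memory or disk is ever shared between nodes:

```python
import hashlib


class Node:
    """One shared-nothing node with its own private 'disk' (a plain dict)."""

    def __init__(self, name):
        self.name = name
        self.disk = {}  # private storage, never touched by any other node

    def put(self, key, value):
        self.disk[key] = value

    def get(self, key):
        return self.disk.get(key)


class SharedNothingCluster:
    """Routes every key to exactly one node by hashing, so nodes never contend."""

    def __init__(self, nodes):
        self.nodes = nodes

    def _owner(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, value):
        self._owner(key).put(key, value)  # only the owning node does any work

    def get(self, key):
        return self._owner(key).get(key)


cluster = SharedNothingCluster([Node("n1"), Node("n2"), Node("n3")])
cluster.put("user:42", {"name": "Alice"})
print(cluster.get("user:42"))  # served entirely by the single node that owns the key
```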

2. Shared Disk Architecture : 
Shared Disk Architecture is an architecture used in distributed computing in which the nodes share the same disk devices but each node has its own private memory. The disks are accessible from all of the cluster nodes, so if one node fails, the remaining active nodes can still reach the data. This architecture adapts quickly to changing workloads and uses robust optimization techniques.
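
Again as an illustrative sketch (the SharedDisk and Node classes below are hypothetical, not any particular product's API), the key difference is that every node holds a reference to the same storage object while keeping its own private cache, so a surviving node can serve data written by a failed one:

```python
class SharedDisk:
    """The single storage layer that every node in the cluster can reach."""

    def __init__(self):
        self.blocks = {}

    def write(self, key, value):
        self.blocks[key] = value

    def read(self, key):
        return self.blocks.get(key)


class Node:
    """A node with private memory (a cache) but no private persistent storage."""

    def __init__(self, name, shared_disk):
        self.name = name
        self.cache = {}          # private memory, not shared with other nodes
        self.disk = shared_disk  # the same disk object as every other node

    def put(self, key, value):
        self.cache[key] = value
        self.disk.write(key, value)

    def get(self, key):
        if key in self.cache:
            return self.cache[key]
        return self.disk.read(key)  # cache miss: fall back to the shared disk


disk = SharedDisk()
n1, n2 = Node("n1", disk), Node("n2", disk)

n1.put("order:7", "shipped")
# If n1 fails, n2 can still serve the request, because the data lives on the
# shared disk rather than on n1's private storage.
print(n2.get("order:7"))
```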

Difference between Shared Nothing Architecture and Shared Disk Architecture : 

S. No. Shared Nothing Architecture Shared Disk Architecture
1. In a shared-nothing architecture, the nodes do not share memory or storage. In shared disk architecture the nodes share the storage.
2. Here each node has its own disk, which cannot be shared. Here the disks are shared among the active nodes, so the data stays reachable in case of node failures.
3. It has cheaper hardware as compared to shared disk architecture. The hardware of shared disk architecture is comparatively expensive.
4. The data is strictly partitioned. The data is not partitioned.
5. It has fixed load balancing. It has dynamic load balancing.
6. Scaling out in terms of capacity is easier: for more space, a new node can be added to the cluster, though the strictly partitioned data then has to be redistributed across the nodes (a sketch of this rebalancing appears after the table). Its clustered architecture, which uses a single disk device with separate per-node memories, has its capacity increased mainly by upgrading the memory.
7. Its advantage is that it offers nearly unlimited scalability, since each new node brings its own memory and disk. Its advantage is that it offers high availability, because any surviving node can still reach the shared disks after a failure.
8. Pros of Shared Nothing Architecture:

  • Easy to scale out by adding nodes.
  • Reduces single points of failure, makes upgrades easier, and avoids downtime.

Pros of Shared Disk Architecture:

  • It can scale up to a fair number of CPUs.
  • Each processor has its own memory, so the memory bus is not a bottleneck.
  • Fault tolerance: the database is stored on disks accessible from all processors, so if one processor fails, the others can take over its tasks.
9. Cons of Shared Nothing Architecture:

  • Performance deteriorates when a query needs data from several nodes, because intermediate results must be shipped across the network.
  • Comparatively expensive to operate, since partitioned data has to be redistributed whenever the cluster changes.

Cons of Shared Disk Architecture:

  • Limited scalability, because the interconnection to the disk subsystem becomes the bottleneck.
  • Slower CPU-to-CPU communication, since messages have to pass through the communication network.
  • Adding more CPUs slows down the existing ones, because of increased contention for access to the shared disks and for network bandwidth.
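
As a rough sketch of row 6 above (the owner and add_node_and_rebalance helpers are hypothetical, introduced only to illustrate the idea), adding a node to a shared-nothing cluster means the strictly partitioned data has to be redistributed so that every key still lives on exactly one node:

```python
import hashlib


def owner(key, names):
    """Pick the single node name that owns a key (modulo hashing, for illustration)."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return names[h % len(names)]


def add_node_and_rebalance(partitions, new_name):
    """partitions maps node name -> that node's private key/value store."""
    partitions[new_name] = {}
    names = sorted(partitions)
    for name in list(partitions):
        for key in list(partitions[name]):
            target = owner(key, names)
            if target != name:  # key now hashes to a different owner, so move it
                partitions[target][key] = partitions[name].pop(key)


# Two-node cluster with strictly partitioned data.
cluster = {"n1": {"a": 1, "b": 2}, "n2": {"c": 3, "d": 4}}
add_node_and_rebalance(cluster, "n3")
print(cluster)  # some keys have moved, but each key still has exactly one owner
```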