
Introduction of Process Synchronization

Process Synchronization is the coordination of execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the problem of race conditions and other synchronization issues in a concurrent system.

The main objective of process synchronization is to ensure that multiple processes access shared resources without interfering with each other and to prevent the possibility of inconsistent data due to concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors, and critical sections are used.



In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid the risk of deadlocks and other synchronization problems. Process synchronization is an important aspect of modern operating systems, and it plays a crucial role in ensuring the correct and efficient functioning of multi-process systems.

On the basis of synchronization, processes are categorized as one of the following two types:

  • Independent Process: The execution of one process does not affect the execution of other processes.
  • Cooperative Process: The execution of one process can affect or be affected by other processes, because data or resources are shared among them.

The process synchronization problem arises in the case of cooperative processes precisely because they share resources.

Race Condition

A race condition occurs when more than one process executes the same code or accesses the same memory or shared variable, and the resulting value of that shared data depends on which process gets there first; the processes are effectively racing, each one’s result standing only if it finishes last. More formally, several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: the result of executing the critical section differs according to the order in which the threads run. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction; proper thread synchronization using locks or atomic variables also prevents them.

Example:

Suppose two processes P1 and P2 share a common variable (shared = 10), and both sit in the ready queue waiting for their turn to execute. P1 runs first: the CPU copies the shared variable into P1’s local variable (X = 10) and increments it (X = 11). When the CPU reaches the line sleep(1), P1 goes into a waiting state for one second and the CPU switches to process P2 from the ready queue.

The CPU now executes P2 line by line: it copies the shared variable (shared = 10) into P2’s local variable (Y = 10) and decrements it (Y = 9). On reaching sleep(1), P2 also enters a waiting state, and the CPU sits idle because the ready queue is empty. After one second P1 re-enters the ready queue, and the CPU executes its remaining line, storing the local variable back into the common variable (shared = X = 11). The CPU idles again until P2 returns from its one second of sleep, then executes P2’s remaining line, storing its local variable into the common variable (shared = Y = 9).

Initially shared = 10

| Process P1     | Process P2     |
|----------------|----------------|
| int X = shared | int Y = shared |
| X++            | Y--            |
| sleep(1)       | sleep(1)       |
| shared = X     | shared = Y     |

Note: The expected final value of the common variable (shared) after both processes finish is 10: P1 increments it by 1 and P2 decrements it by 1, so 10 + 1 − 1 = 10. Instead we end up with shared = 9, because each process read the old value before the other wrote its update back. This lost update is the undesired result caused by the lack of proper synchronization.
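Below is a minimal sketch in C that reproduces this scenario with two threads standing in for P1 and P2 (assuming a POSIX system; compile with gcc -pthread). The function and variable names are illustrative, and sleep(1) forces the unlucky interleaving from the table above.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int shared = 10; /* the common variable from the example */

void *p1(void *arg) {
    int x = shared;  /* X = 10 */
    x++;             /* X = 11 */
    sleep(1);        /* give up the CPU, as in the example */
    shared = x;      /* shared = 11 */
    return NULL;
}

void *p2(void *arg) {
    int y = shared;  /* Y = 10 (a stale read) */
    y--;             /* Y = 9 */
    sleep(1);
    shared = y;      /* shared = 9, overwriting P1's update (or vice versa) */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared); /* 9 or 11 depending on write order, never the expected 10 */
    return 0;
}
```

Wrapping each read-modify-write in a lock (for example, a pthread_mutex_t held from the read of shared through the write back) makes the sequence effectively atomic and restores the expected value of 10.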


Critical Section Problem

A critical section is a code segment that can be accessed by only one process at a time. It contains shared variables that need to be synchronized to maintain data consistency. The critical section problem, then, is to design a protocol that cooperative processes can use to access shared resources without creating data inconsistencies.

A typical solution structures each process into four parts: an entry section, in which the process requests permission to enter its critical section; the critical section itself; an exit section, in which the process signals that it has left; and a remainder section containing the rest of the code, as sketched below.
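A minimal C sketch of this four-part structure. Here entry_section() and exit_section() are placeholders, illustrated with a pthread mutex, but Peterson’s solution or a semaphore could implement the same two calls instead.

```c
#include <pthread.h>

/* Placeholder protocol: a mutex here, but any critical-section
   protocol could implement these two calls. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static void entry_section(void) { pthread_mutex_lock(&lock); }
static void exit_section(void)  { pthread_mutex_unlock(&lock); }

void process(void) {
    for (;;) {
        entry_section();  /* request permission to enter             */
        /* --- critical section: access shared variables here --- */
        exit_section();   /* announce that the process has left      */
        /* --- remainder section: code needing no shared data ---  */
    }
}
```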

Any solution to the critical section problem must satisfy three requirements:

  • Mutual Exclusion: If a process is executing in its critical section, no other process may execute in its critical section at the same time.
  • Progress: If no process is in its critical section and some processes wish to enter, only those not executing in their remainder sections may take part in deciding which enters next, and this decision cannot be postponed indefinitely.
  • Bounded Waiting: There must be a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter and before that request is granted.

Peterson’s Solution

Peterson’s Solution is a classic software-based solution to the critical section problem for two processes. It uses two shared variables:

  • flag[2]: a boolean array in which flag[i] = true indicates that process i wants to enter its critical section.
  • turn: an integer indicating which process’s turn it is to enter; each process offers the turn to the other before waiting.
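A minimal C sketch of Peterson’s algorithm for two processes with ids 0 and 1. This is the textbook form; on modern out-of-order hardware it additionally needs memory barriers or C11 atomics, which are omitted here for clarity.

```c
#include <stdbool.h>

/* Shared state for Peterson's solution (two processes, ids 0 and 1). */
volatile bool flag[2] = { false, false }; /* flag[i]: process i wants in */
volatile int  turn    = 0;                /* whose turn it is to yield   */

void enter_critical(int i) {
    int other = 1 - i;
    flag[i] = true;   /* announce intent to enter             */
    turn = other;     /* politely offer the turn to the other */
    /* wait while the other also wants in AND it is its turn  */
    while (flag[other] && turn == other)
        ;             /* busy wait */
}

void leave_critical(int i) {
    flag[i] = false;  /* no longer interested */
}
```

Each process i simply brackets its critical section with enter_critical(i) and leave_critical(i).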

Peterson’s Solution preserves all three conditions:

  • Mutual Exclusion: at most one process can be in the critical section at any time.
  • Progress: a process running outside its critical section never blocks another from entering.
  • Bounded Waiting: after a process announces its intent, it waits for at most one pass of the other process through the critical section.

Disadvantages of Peterson’s Solution

  • It relies on busy waiting: a process waiting to enter spins in a loop and wastes CPU cycles.
  • It works for only two processes.
  • It assumes that memory reads and writes are not reordered; on modern processors with out-of-order execution and relaxed memory models it is not guaranteed to work without memory barriers.

Semaphores

A semaphore is a signaling mechanism: a thread waiting on a semaphore can be signaled by another thread. This is different from a mutex, which can be released only by the same thread that acquired it (i.e., the thread that called the wait/lock function).

  • A semaphore uses two atomic operations, wait() and signal(), for process synchronization.
  • A semaphore is an integer variable that can be accessed only through these two operations.
  • There are two types of semaphores: binary semaphores and counting semaphores.
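A conceptual sketch of the two operations on a counting semaphore, and of a binary semaphore used as a lock. This is a didactic version: in a real implementation the decrement and increment must be atomic, and wait would block the caller rather than spin (POSIX exposes the real thing as sem_wait()/sem_post()).

```c
typedef struct {
    int value;  /* number of available "permits" */
} semaphore;

/* wait (P): take a permit, waiting until one is available.
   Real kernels perform this atomically and block the caller. */
void wait_sem(semaphore *s) {
    while (s->value <= 0)
        ;           /* busy wait until a permit frees up */
    s->value--;     /* must be atomic in a real implementation */
}

/* signal (V): return a permit, potentially waking a waiter. */
void signal_sem(semaphore *s) {
    s->value++;
}

/* A binary semaphore (initialized to 1) guards a critical section. */
semaphore mutex = { 1 };

void update_shared(void) {
    wait_sem(&mutex);    /* entry section */
    /* critical section: access shared data here */
    signal_sem(&mutex);  /* exit section  */
}
```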

Advantages of Process Synchronization

  • It ensures data consistency and integrity when multiple processes access shared resources.
  • It prevents race conditions and the unpredictable results they cause.
  • Applied correctly, it avoids deadlocks and other synchronization problems.

Disadvantages of Process Synchronization

  • It adds complexity to the design and implementation of a system.
  • It introduces performance overhead, since processes may have to block or busy-wait before accessing shared resources.
  • Misused synchronization primitives can themselves cause deadlock or starvation.

FAQs on Process Synchronization

1. What is the main objective of process synchronization in a multi-process system?

The main objective of process synchronization in a multi-process system is to make access to shared resources controlled and predictable. It prevents race conditions and other synchronization issues, thereby maintaining data integrity and avoiding deadlocks.

2. What are the key requirements that any solution to the critical section problem must satisfy?

Any solution to the critical section problem must meet three criteria:

  • Mutual Exclusion: Only one process can execute in its critical section at a time.
  • Progress: If no process is in its critical section and others are waiting, only processes not in their remainder sections may take part in deciding which enters next, and the decision cannot be postponed indefinitely.
  • Bounded Waiting: There must be a bound on the number of times other processes can enter their critical sections after a process has requested entry and before the request is granted.

Conclusion

Concurrent computing requires process synchronization: it coordinates the processes in a multi-process system so that access to shared resources is controlled and predictable. It addresses race conditions and data inconsistency, which is essential for data integrity. Techniques such as semaphores and Peterson’s solution are used to achieve it. Synchronization is necessary for data consistency, but it adds complexity and performance overhead, making correct implementation and management vital in multi-process systems.

