
Critical Section in Synchronization


In computer science, a critical section is a segment of code that accesses shared resources and may be executed by multiple concurrent threads or processes. These resources may include shared memory, files, or other system resources that can be safely accessed by only one thread or process at a time; unrestricted concurrent access leads to data inconsistency or race conditions.

  1. The critical section must be executed as an atomic operation, which means that once one thread or process has entered the critical section, all other threads or processes must wait until the executing thread or process exits the critical section. The purpose of synchronization mechanisms is to ensure that only one thread or process can execute the critical section at a time.
  2. The concept of a critical section is central to synchronization in computer systems, as it is necessary to ensure that multiple threads or processes can execute concurrently without interfering with each other. Various synchronization mechanisms such as semaphores, mutexes, monitors, and condition variables are used to implement critical sections and ensure that shared resources are accessed in a mutually exclusive manner.

The use of critical sections in synchronization can be advantageous in improving the performance of concurrent systems, as it allows multiple threads or processes to work together without interfering with each other. However, care must be taken in designing and implementing critical sections, as incorrect synchronization can lead to race conditions and deadlocks.

Critical Section

When more than one process tries to access the same code segment, that segment is known as the critical section. The critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data.

In simple terms, a critical section is a group of instructions/statements or a region of code that needs to be executed atomically (as a single, indivisible unit), such as code accessing a shared resource (a file, an input or output port, global data, etc.). In concurrent programming, if one thread tries to change the value of shared data at the same time as another thread tries to read it (i.e., a data race across threads), the result is unpredictable, so access to such shared variables (shared memory, shared files, shared ports, etc.) must be synchronized.
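
A minimal sketch of such a data race, assuming POSIX threads (the counter variable and worker function are illustrative): two threads increment a shared counter with no synchronization, and some increments are typically lost.

/* Data race sketch: build with gcc -pthread */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                  /* shared data, unprotected */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                 /* not atomic: load, add, store */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* usually less than 2000000 */
    return 0;
}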

Some programming languages have built-in support for synchronization. Understanding race conditions is especially important when writing kernel-mode code (a device driver, a kernel thread, etc.), since the programmer can directly access and modify kernel data structures.

Any code that implements a critical section should satisfy the following properties (a classic two-process solution that meets all three is sketched after this list):

  1. Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
  2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
  3. Bounded Waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
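
A classic two-process software solution that satisfies all three properties is Peterson's algorithm. The sketch below is illustrative only: it assumes sequentially consistent memory, which modern compilers and CPUs do not guarantee without explicit atomics or memory barriers.

/* Peterson's algorithm for two processes (i = 0 or 1). */
int flag[2] = {0, 0};   /* flag[i] == 1: process i wants to enter */
int turn;               /* which process yields if both want in */

void enter_critical(int i) {
    int j = 1 - i;      /* the other process */
    flag[i] = 1;        /* announce intent to enter */
    turn = j;           /* give the other process the turn */
    while (flag[j] && turn == j)
        ;               /* busy-wait while the other wants in and holds the turn */
}

void exit_critical(int i) {
    flag[i] = 0;        /* no longer interested */
}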

Two general approaches are used to handle critical sections:

  1. Preemptive kernels: A preemptive kernel allows a process to be preempted while it is running in kernel mode.
  2. Non-preemptive kernels: A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. A non-preemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.

Critical Section Problem

The use of critical sections in a program can cause a number of issues, including:

Deadlock: When two or more threads or processes wait for each other to release a critical section, it can result in a deadlock situation in which none of the threads or processes can move. Deadlocks can be difficult to detect and resolve, and they can have a significant impact on a program’s performance and reliability.
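
A minimal sketch of how a deadlock can arise, assuming POSIX threads (the lock and thread names are illustrative): two threads acquire the same two locks in opposite orders.

#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

/* If each thread grabs its first lock before the other grabs its
 * second, both wait forever: a deadlock. */
void *thread1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);   /* blocks if thread 2 holds B */
    /* ... critical section using both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *thread2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&B);
    pthread_mutex_lock(&A);   /* blocks if thread 1 holds A */
    /* ... */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}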

Starvation: When a thread or process is repeatedly prevented from entering a critical section, it can result in starvation, in which the thread or process is unable to progress. This can happen if the critical section is held for an unusually long period of time, or if a high-priority thread or process is always given priority when entering the critical section.

Overhead: When using critical sections, threads or processes must acquire and release locks or semaphores, which can take time and resources. This may reduce the program’s overall performance.

[Figure: Critical section]

The general structure can be visualized with the pseudo-code below, in which a shared lock variable guards the critical section:

do {
    while (lock == 1);  // entry section: busy-wait until the lock is free
    lock = 1;           // take the lock
        // critical section
    lock = 0;           // exit section: release the lock
        // remainder section
} while (true);
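
Note that, as written, the sketch above is itself racy: testing the lock and setting it are two separate steps, so two processes can both observe the lock as free and enter together. Hardware atomic instructions close this gap. A minimal sketch using C11 atomics (assuming a C11-capable compiler):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear = lock is free */

void acquire(void) {
    /* Atomically set the flag and return its previous value;
     * keep spinning until it was previously clear. */
    while (atomic_flag_test_and_set(&lock))
        ;                               /* entry section: busy-wait */
}

void release(void) {
    atomic_flag_clear(&lock);           /* exit section: free the lock */
}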

A simple solution to the critical section problem can be sketched as follows:

acquireLock();
Process Critical Section
releaseLock();

A thread must acquire a lock prior to executing a critical section, and the lock can be acquired by only one thread at a time. There are various ways to implement the locks in the above pseudo-code; one common realization, using a POSIX mutex, is sketched below.
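
A minimal sketch of acquireLock()/releaseLock() using a POSIX mutex (the deposit function and shared_balance variable are illustrative):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_balance = 0;                 /* the protected shared resource */

void deposit(int amount) {
    pthread_mutex_lock(&lock);          /* acquireLock(): blocks if held */
    shared_balance += amount;           /* critical section */
    pthread_mutex_unlock(&lock);        /* releaseLock() */
}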

Strategies for avoiding problems: Deadlocks, starvation, and overhead can be avoided or mitigated with specific strategies: using lock timeouts to prevent indefinite waiting and deadlocks, implementing priority inheritance to prevent priority inversion and starvation, and optimizing the lock implementation to reduce overhead. A sketch of the timeout strategy follows.
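
A minimal sketch of the timeout strategy, assuming a POSIX system that provides pthread_mutex_timedlock (the try_enter function and message are illustrative):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Try to enter the critical section, but give up after one second
 * instead of waiting indefinitely. */
int try_enter(void) {
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 1;                        /* wait at most 1 second */
    if (pthread_mutex_timedlock(&lock, &deadline) == ETIMEDOUT) {
        fprintf(stderr, "lock busy; backing off\n");
        return -1;                               /* caller can retry or recover */
    }
    /* ... critical section ... */
    pthread_mutex_unlock(&lock);
    return 0;
}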

Examples of critical sections in real-world applications: Critical sections appear throughout real systems. For example, a database management system serializes concurrent writes to a shared index or table page, and a web server synchronizes access to a shared connection pool or in-memory cache.

Impact on scalability: The use of critical sections can limit the scalability of a program, particularly in distributed systems where multiple nodes contend for shared resources.

In process synchronization, a critical section is a section of code that accesses shared resources such as variables or data structures, and which must be executed by only one process at a time to avoid race conditions and other synchronization-related issues.

A critical section can be any section of code where shared resources are accessed, and entry to and exit from it are typically managed by two surrounding parts: the entry section, where a process requests access to the critical section, and the exit section, where it releases the resources and leaves the critical section.

To ensure that only one process can execute the critical section at a time, process synchronization mechanisms such as semaphores and mutexes are used. A semaphore is a counter used to indicate whether a resource is available or not, while a mutex is a lock (conceptually, a binary semaphore with ownership) that provides mutual exclusion over shared resources.

When a process enters a critical section, it must first request access to the semaphore or mutex associated with the critical section. If the resource is available, the process can proceed to execute the critical section. If the resource is not available, the process must wait until it is released by the process currently executing the critical section.

Once the process has finished executing the critical section, it releases the semaphore or mutex, allowing another process to enter the critical section if necessary.
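
This request/wait/release cycle can be sketched with a POSIX semaphore (a minimal sketch assuming <semaphore.h> is available; the variable and function names are illustrative):

#include <pthread.h>
#include <semaphore.h>

sem_t mutex_sem;            /* binary semaphore guarding the critical section */
int shared_data = 0;

void *worker(void *arg) {
    (void)arg;
    sem_wait(&mutex_sem);   /* entry: blocks while the count is 0 */
    shared_data++;          /* critical section */
    sem_post(&mutex_sem);   /* exit: raise the count, waking one waiter */
    return NULL;
}

/* Before starting threads: sem_init(&mutex_sem, 0, 1);
 * an initial count of 1 means at most one process inside at a time. */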

Proper use of critical sections and process synchronization mechanisms is essential in concurrent programming to ensure proper synchronization of shared resources and avoid race conditions, deadlocks, and other synchronization-related issues.

Advantages of critical sections in process synchronization:

  1. Prevents race conditions: By ensuring that only one process can execute the critical section at a time, race conditions are prevented, ensuring consistency of shared data.
  2. Provides mutual exclusion: Critical sections provide mutual exclusion to shared resources, preventing multiple processes from accessing the same resource simultaneously and causing synchronization-related issues.
  3. Reduces CPU utilization: Blocking synchronization primitives let waiting processes sleep instead of busy-waiting, so critical sections can reduce CPU utilization and improve overall system efficiency.
  4. Simplifies synchronization: Critical sections simplify the synchronization of shared resources, as only one process can access the resource at a time, eliminating the need for more complex synchronization mechanisms.

Disadvantages of critical sections in process synchronization:

  1. Overhead: Implementing critical sections using synchronization mechanisms like semaphores and mutexes can introduce additional overhead, slowing down program execution.
  2. Deadlocks: Poorly implemented critical sections can lead to deadlocks, where multiple processes are waiting indefinitely for each other to release resources.
  3. Can limit parallelism: If critical sections are too large or are executed frequently, they can limit the degree of parallelism in a program, reducing its overall performance.
  4. Can cause contention: If multiple processes frequently access the same critical section, contention for the critical section can occur, reducing performance.

Overall, critical sections are a useful tool in process synchronization: they ensure proper synchronization of shared resources and prevent race conditions. However, they can also introduce additional overhead and can be prone to synchronization-related issues if not implemented correctly.

Important points related to critical sections in process synchronization are:

  1. Understanding the concept of critical section and why it’s important for synchronization.
  2. Familiarity with the different synchronization mechanisms used to implement critical sections, such as semaphores, mutexes, and monitors.
  3. Knowledge of common synchronization problems that can arise in critical sections, such as race conditions, deadlocks, and livelocks.
  4. Understanding how to design and implement critical sections to ensure proper synchronization of shared resources and prevent synchronization-related issues.
  5. Familiarity with best practices for using critical sections in concurrent programming.

Some reference books that cover critical sections and related topics are:

  1. “Operating System Concepts” by Silberschatz, Galvin, and Gagne.
  2. “Modern Operating Systems” by Andrew S. Tanenbaum.
  3. “Advanced Programming in the UNIX Environment” by W. Richard Stevens and Stephen A. Rago.
  4. “The Art of Multiprocessor Programming” by Maurice Herlihy and Nir Shavit.
  5. “Concurrent Programming in Java: Design Principles and Patterns” by Doug Lea.

These books cover the fundamentals of critical sections and process synchronization, including the synchronization mechanisms used to implement critical sections and common synchronization problems that can arise. They also provide practical examples and code snippets that illustrate the concepts and best practices for using critical sections in concurrent programming.


