
Stages of Multi-threaded Architecture in OS

Last Updated : 10 May, 2023

Prerequisite – Multi-threaded Architectures

In the multithreaded model, the execution of each thread is divided into several stages, each of which performs a distinct function. The execution stages of every thread, and the relationships between successive threads, are described below.

1. Continuation Stage:

  • (i) Once a thread is initiated by its predecessor thread, it starts executing its continuation stage. The main function of this stage is to compute the recurrence variables, for example the loop index variables needed to start the next thread. The values of these variables are forwarded to the next thread processing element just before that thread is activated.
  • (ii) In the case of a DO loop, the index updates, such as x = x + 1 or y = y->next, are computed and then forwarded to the next thread processing element. The continuation stage of a thread ends with a fork instruction, which is what actually initiates the next thread (see the sketch after this list).
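
As a rough software analogy of the continuation stage (the real mechanism is in hardware), the Python sketch below has each thread compute its recurrence variable first, forward it, and only then fork its successor, so the successor can begin as early as possible. All names here (run_thread, NUM_THREADS, and so on) are invented for illustration:

```python
# A software analogy of the continuation stage: compute the recurrence
# variable up front, forward it, then fork the successor thread.
import threading

NUM_THREADS = 4
results = [None] * NUM_THREADS

def run_thread(tid, index):
    # Continuation stage: compute the recurrence (x = x + 1) first,
    # so the successor can be activated as early as possible.
    next_index = index + 1
    successor = None
    if tid + 1 < NUM_THREADS:
        successor = threading.Thread(target=run_thread,
                                     args=(tid + 1, next_index))
        successor.start()              # the "fork": spawn the next thread
    # The rest of this thread now overlaps with its successor.
    results[tid] = index * index       # stand-in for the main computation
    if successor is not None:
        successor.join()

run_thread(0, 0)
print(results)                         # [0, 1, 4, 9]
```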

2. Target-Store-Address-Generation Stage:

  • (i) A thread can perform store operations on which later concurrent threads may be data-dependent. Such store operations are referred to as target stores (TS).
  • (ii) To reduce hardware complexity, most implementations of the multithreaded model do not allow speculation on data dependencies. To make run-time data-dependence checking easier, the addresses of the target stores need to be computed as soon as possible (ASAP). The TSAG (target-store-address-generation) stage performs the address computation for these target stores. These addresses are stored in the memory buffer of the thread's processing element and are then forwarded to the memory buffers of all succeeding concurrent threads.
  • (iii) Once a thread completes the TSAG stage and all of its target-store addresses have been forwarded, it sends a tsag_done flag to its successor. This flag informs the next thread that it may start computation that depends on its predecessors. Before receiving the tsag_done flag, a thread can only perform computation that does not depend on any target store of its active predecessor threads. To increase the overlap between threads, the TSAG stage can be further divided into two parts. The first part generates the target-store addresses that have no data dependencies on earlier threads; these are computed quickly and forwarded to the next thread. The second part computes unsafe target-store addresses that may be data-dependent on an earlier thread; these computations must wait for the tsag_done flag from the predecessor thread before beginning (see the sketch after this list).
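
The tsag_done handshake can be sketched in Python with a threading.Event standing in for the hardware flag. The addresses and the list standing in for the successor's memory buffer are invented for this example, not part of any real design:

```python
# Sketch of the tsag_done handshake between two successive threads.
import threading

tsag_done = threading.Event()
forwarded_ts_addresses = []            # successor's view of the TS addresses

def predecessor():
    # TSAG stage: compute and forward target-store addresses ASAP.
    for addr in (0x100, 0x104):
        forwarded_ts_addresses.append(addr)
    tsag_done.set()                    # all TS addresses have been forwarded

def successor():
    independent = sum(range(10))       # work independent of any target store
    tsag_done.wait()                   # dependent work must wait for the flag
    print("TS addresses:", [hex(a) for a in forwarded_ts_addresses],
          "| independent result:", independent)

t2 = threading.Thread(target=successor)
t2.start()                             # successor starts first, but waits
predecessor()
t2.join()
```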

3. Computation Stage:

  • (i) This stage performs the main computation of a thread. If the address of a load operation matches a target-store entry in the thread's memory buffer during this stage, the thread either reads the data from that entry, if it is available, or waits until the data is forwarded from an earlier concurrent thread. Conversely, if the value of a target store is computed during this stage, the thread forwards the address and the data to the memory buffers of all of its concurrent successor threads. A sketch of this buffer lookup follows the list.
  • (ii) The computation stage of a thread ends with a stop instruction.
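
A minimal sketch of the buffer lookup described above, assuming a simplified per-thread memory buffer with one slot per target-store address (the MemoryBuffer class, the addresses, and the values are hypothetical):

```python
# Sketch of a load checking the thread's memory buffer: if the address
# matches a target-store entry, the load returns the forwarded data or
# blocks until it arrives; otherwise it reads memory directly.
import threading

class MemoryBuffer:
    """Per-thread buffer: one (event, value) slot per target-store address."""
    def __init__(self, ts_addresses):
        self.entries = {a: [threading.Event(), None] for a in ts_addresses}

    def forward(self, addr, value):
        # Called when an earlier concurrent thread computes the target store.
        slot = self.entries[addr]
        slot[1] = value
        slot[0].set()

    def load(self, addr, memory):
        slot = self.entries.get(addr)
        if slot is None:
            return memory[addr]        # matches no target store: plain load
        slot[0].wait()                 # block until the data is forwarded
        return slot[1]

memory = {0x200: 3}
buf = MemoryBuffer(ts_addresses=[0x100])

def later_thread():
    print("load 0x200 ->", buf.load(0x200, memory))  # regular load: 3
    print("load 0x100 ->", buf.load(0x100, memory))  # waits, then reads 42

t = threading.Thread(target=later_thread)
t.start()
buf.forward(0x100, 42)   # the earlier thread eventually forwards the value
t.join()
```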

4. Write-Back Stage:

  • (i) Once its control dependencies are completely resolved after the computation stage, a thread concludes its execution by writing all the data from the store operations held in its memory buffer out to memory. This includes data from both the target stores and the regular stores.
  • (ii) Data from store operations must remain in the memory buffer until this write-back stage, to protect the memory state from being changed by a speculative thread that is later terminated by an earlier concurrent thread because of an incorrect control speculation.
  • (iii) To maintain a correct memory state, concurrent threads must perform their write-back stages in their original thread order. That is, a thread must wait for a wb_done flag from its predecessor before it can perform its own write-back stage, and it must forward a wb_done flag to its successor once its write-back completes. Because all stored data are committed thread by thread, write-after-read and write-after-write hazards cannot occur at run time (see the sketch after this list).
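
The in-order commit enforced by the wb_done flags can be sketched as follows, with threading.Event objects modeling the flags and a dict standing in for memory (the buffered data and timing are invented for illustration):

```python
# Sketch of in-order write-back: each thread waits for its predecessor's
# wb_done flag before committing its buffered stores, then raises its own.
import random
import threading
import time

N = 4
wb_done = [threading.Event() for _ in range(N)]
memory = {}
commit_order = []

def thread_body(tid):
    buffered_stores = {0x100 + 4 * tid: tid * 10}   # invented buffered data
    time.sleep(random.random() * 0.01)              # stages finish out of order
    if tid > 0:
        wb_done[tid - 1].wait()                     # wait for predecessor's flag
    memory.update(buffered_stores)                  # write-back stage
    commit_order.append(tid)
    wb_done[tid].set()                              # let the successor commit

threads = [threading.Thread(target=thread_body, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(commit_order)                                 # always [0, 1, 2, 3]
```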

Advantages of using Multi-threaded Architecture in Operating Systems:

  • Improved Responsiveness: Multiple threads allow for concurrent execution, enabling the system to respond to user interactions and handle multiple tasks simultaneously.
  • Enhanced Performance: Multi-threading enables parallel processing, utilizing multiple CPU cores to execute tasks concurrently, leading to improved performance and faster execution times.
  • Resource Efficiency: Threads within the same process can share memory and resources, reducing resource duplication and improving resource utilization.
  • Simplified Programming: Multi-threaded architectures provide abstractions that simplify the development of concurrent applications, allowing developers to focus on the logic of individual threads rather than managing complex inter-process communication.

Disadvantages of using Multi-threaded Architecture in Operating Systems:

  • Complexity: Multi-threaded programming introduces complexity in terms of synchronization, coordination, and potential race conditions. It requires careful handling of shared resources and can be challenging to debug and troubleshoot (see the race-condition sketch after this list).
  • Scalability Issues: As the number of threads increases, the overhead associated with thread creation, context switching, and synchronization may impact performance and scalability. Poorly designed or excessive threads can lead to diminishing returns or even performance degradation.
  • Concurrency Issues: Concurrent execution of threads can introduce unpredictable behavior, such as deadlocks, livelocks, and priority inversion. Ensuring thread safety and managing synchronization can be difficult, requiring careful design and testing.
  • Increased Memory Usage: Each thread requires its own stack and resources, which can result in increased memory usage compared to single-threaded applications. This can be a concern in memory-constrained environments.
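
For instance, the classic lost-update race below shows why shared state needs synchronization: the unsafe version may lose increments, while the lock makes the result deterministic. This is a minimal, generic sketch, not tied to any particular operating system:

```python
# Lost-update race: two threads increment a shared counter. Without the
# lock some increments may be lost; with it the result is always 200000.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1                   # read-modify-write, not atomic

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1               # the lock serializes the update

for work in (unsafe_increment, safe_increment):
    counter = 0
    threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(work.__name__, "->", counter)   # unsafe may print less than 200000
```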
