Threads and their types in Operating System

A thread is a single sequential stream of execution within a process. Because threads share most of the properties of the process they belong to, they are often called lightweight processes. On a single CPU, threads execute one after another but give the illusion of running in parallel. Each thread can be in a different state, and each thread has its own:

  1. A program counter  
  2. A register set  
  3. A stack space  

Threads are not independent of one another: they share the process's code section, data section, and OS resources such as open files.
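
To make this concrete, here is a minimal sketch using the POSIX threads (pthreads) API: two threads created inside one process update the same global counter, while each runs on its own stack with its own registers. The worker function, counter, and mutex names are illustrative, not part of any standard API.

#include <pthread.h>
#include <stdio.h>

/* Shared data: visible to every thread in the process. */
static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread executes this function on its own stack with its own registers. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* serialize updates to the shared counter */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* prints 200000 */
    return 0;
}

Built with something like gcc threads.c -pthread, both threads see and update the same counter because they share one address space; the mutex only serializes the updates.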

Similarities between Threads and Processes –  

  • On a single processor, only one thread or process executes at a time
  • Within a process, both execute sequentially
  • Both can create children 
  • Both can be scheduled by the operating system: Both threads and processes can be scheduled by the operating system to execute on the CPU. The operating system is responsible for assigning CPU time to the threads and processes based on various scheduling algorithms.
  • Both have their own execution context: Each thread and process has its own execution context, which includes its own register set, program counter, and stack. This allows each thread or process to execute independently and make progress without interfering with other threads or processes.
  • Both can communicate with each other: Threads and processes can communicate with each other using various inter-process communication (IPC) mechanisms such as shared memory, message queues, and pipes. This allows threads and processes to share data and coordinate their activities.
  • Both can be preempted: Threads and processes can be preempted by the operating system, which means that their execution can be interrupted at any time. This allows the operating system to switch to another thread or process that needs to execute.
  • Both can be terminated: Threads and processes can be terminated by the operating system or by other threads or processes. When a thread or process is terminated, all of its resources, including its execution context, are freed up and made available to other threads or processes.

Differences between Threads and Processes –  

  • Resources: Processes have their own address space and resources, such as memory and file handles, whereas threads share memory and resources with the program that created them.
  • Scheduling: Processes are scheduled to use the processor by the operating system, whereas threads are scheduled to use the processor by the operating system or the program itself.
  • Creation: The operating system creates and manages processes, whereas the program or the operating system creates and manages threads.
  • Communication: Because processes are isolated from one another, they must rely on inter-process communication mechanisms and generally find it harder to communicate than threads do. Threads within the same program, on the other hand, can interact directly through shared memory.

Threads, in general, are lighter weight than processes and are better suited for concurrent execution within a single program. Processes are commonly used to run separate programs or to isolate resources from one another.
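
To illustrate the address-space difference (a sketch assuming a POSIX system; the variable and function names are made up for the example), the snippet below first lets a thread modify a global variable, which the whole process sees, and then lets a forked child modify the same variable, which the parent never sees because fork() gave the child its own copy of the address space.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0;   /* shared by threads, but copied into a forked child */

static void *thread_fn(void *arg) {
    (void)arg;
    value = 42;          /* same address space: the whole process sees this */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread: value = %d\n", value);   /* prints 42 */

    pid_t pid = fork();
    if (pid == 0) {      /* child process: operates on its own copy of value */
        value = 99;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:  value = %d\n", value);    /* still 42 in the parent */
    return 0;
}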

Types of Threads: 

  1. User Level Thread (ULT) – Implemented entirely in a user-level thread library; these threads are not created using system calls. Thread switching does not require a call into the OS or an interrupt to the kernel. The kernel does not know about user-level threads and manages the process as if it were single-threaded. 
    • Advantages of ULT –
      • Can be implemented on an OS that doesn’t support multithreading.
      • Simple representation, since a thread holds only a program counter, a register set, and stack space.
      • Simple to create, since no kernel intervention is required.
      • Thread switching is fast since no OS calls need to be made. 
    • Limitations of ULT –
      • Little or no coordination between the threads and the kernel.
      • If one thread causes a page fault, the entire process blocks.
  2. Kernel Level Thread (KLT) – The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel maintains a single master thread table that keeps track of all threads in the system, in addition to the traditional process table that keeps track of processes. The OS kernel provides system calls to create and manage threads.
    • Advantages of KLT –
      • Since the kernel has full knowledge of all threads in the system, the scheduler may decide to give more CPU time to processes with a large number of threads.
      • Good for applications that frequently block.
    • Limitations of KLT –
      • Thread operations (creation, switching, synchronization) are slower than for ULTs because each requires a system call into the kernel.
      • Every thread needs a kernel thread control block, which adds memory and management overhead.

Advantages of Threading:

  • Responsiveness: A multithreaded application increases responsiveness to the user.
  • Resource Sharing: Resources like code and data are shared between threads, thus allowing a multithreaded application to have several threads of activity within the same address space.
  • Increased concurrency: Threads may run in parallel on different processors, increasing concurrency on a multiprocessor machine.
  • Lower cost: It costs less to create and context-switch threads than processes.
  • Lower context-switch time: Switching between threads takes less time than switching between processes.

Threading Issues:

  1. The fork() and exec() System Calls: The semantics of the fork() and exec() system calls change in a multithreaded program. If one thread in a program calls fork(), does the new process duplicate all threads, or is the new process single-threaded? Some UNIX systems have chosen to have two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked fork(). The exec() system call, by contrast, works the same way in either case: if a thread invokes exec(), the program specified in its parameter replaces the entire process, including all threads.
  2. Signal Handling: A signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals, whether synchronous or asynchronous, follow the same pattern: (1) a signal is generated by the occurrence of a particular event, (2) the signal is delivered to a process, and (3) once delivered, the signal must be handled. A signal may be handled by one of two possible handlers: a default signal handler or a user-defined signal handler. Every signal has a default handler that the kernel runs when handling that signal; this default action can be overridden by a user-defined handler (see the sigaction() sketch after this list).
  3. Thread Cancellation: Thread cancellation involves terminating a thread before it has completed. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be canceled. Another situation occurs when a user presses a button on a web browser that stops a web page from loading any further: a page often loads using several threads, with each image loaded in a separate thread, and pressing the stop button cancels all of them. A thread that is to be canceled is referred to as the target thread. Cancellation of a target thread may occur in two scenarios: (1) asynchronous cancellation, where one thread immediately terminates the target thread, and (2) deferred cancellation, where the target thread periodically checks whether it should terminate, giving it an opportunity to terminate itself in an orderly fashion (see the cancellation sketch after this list).
  4. Thread-Local Storage: Threads belonging to a process share the data of the process; indeed, this data sharing provides one of the benefits of multithreaded programming. In some circumstances, however, each thread needs its own copy of certain data. We call such data thread-local storage (or TLS). For example, in a transaction-processing system we might service each transaction in a separate thread, and each transaction might be assigned a unique identifier; thread-local storage lets us associate each thread with its own identifier (see the TLS sketch after this list).
  5. Scheduler Activations: One scheme for communication between the user-thread library and the kernel is known as scheduler activation. It works as follows: the kernel provides the application with a set of virtual processors (LWPs), and the application can schedule user threads onto any available virtual processor.
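
The signal-handling pattern above can be sketched on a POSIX system as follows: a user-defined handler is installed for SIGINT with sigaction(), overriding the default action of terminating the process. The handler name and message are illustrative; in a multithreaded process the signal is delivered to one thread that has not blocked it.

#include <signal.h>
#include <unistd.h>

/* User-defined handler: overrides the default action for SIGINT.
   Only async-signal-safe functions (such as write()) may be called here. */
static void on_sigint(int signo) {
    (void)signo;
    const char msg[] = "caught SIGINT\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_sigint;        /* how the signal is to be handled       */
    sigemptyset(&sa.sa_mask);         /* block no extra signals in the handler */
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);     /* install the user-defined handler      */

    pause();                          /* wait until a signal is delivered      */
    return 0;
}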
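
Deferred cancellation can be sketched with pthreads: the target thread repeatedly reaches a cancellation point, pthread_testcancel(), where a pending request made by pthread_cancel() takes effect. The loop body here stands in for real work; this is a minimal illustration, not a complete pattern (real code would also install cleanup handlers).

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Target thread: runs until a deferred cancellation request is honoured. */
static void *target(void *arg) {
    (void)arg;
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();   /* explicit cancellation point */
    }
    return NULL;                /* not reached */
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, target, NULL);
    sleep(1);                    /* let the target run for a moment   */
    pthread_cancel(t);           /* request (deferred) cancellation   */
    pthread_join(t, NULL);       /* wait for the target to terminate  */
    puts("target thread cancelled");
    return 0;
}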
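
Thread-local storage can be sketched with the C11 _Thread_local storage-class specifier (the pthread_key_create()/pthread_setspecific() API is an older alternative): each thread gets its own copy of the variable. The transaction-id variable and function names are purely illustrative.

#include <pthread.h>
#include <stdio.h>

/* Each thread gets its own copy of this variable (thread-local storage). */
static _Thread_local int transaction_id = 0;

/* Illustrative worker: stores a per-thread unique identifier in TLS. */
static void *service_transaction(void *arg) {
    transaction_id = (int)(long)arg;
    printf("thread servicing transaction %d\n", transaction_id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, service_transaction, (void *)1L);
    pthread_create(&t2, NULL, service_transaction, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}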

Summary: 

  1. For user-level threads (ULTs), the thread library inside each process keeps track of that process's threads using a thread table.
  2. For kernel-level threads (KLTs), the kernel maintains a thread table of thread control blocks (TCBs) in addition to the process table of process control blocks (PCBs).

