Threading Issues

Last Updated : 27 Feb, 2024

Multithreading is one of the key techniques for improving the performance of a system, but a multithreaded environment introduces its own set of problems. In this article we discuss the threading problems associated with the fork() and exec() system calls, thread cancellation, signal handling, thread pools, and thread-specific data.

Alongside each problem, we also discuss the ways in which it can be handled, so that the benefits of a multithreaded programming environment can still be enjoyed.

Thread in an Operating System

A thread is the smallest unit of execution within a process: it comprises a program counter, a register set, and a stack. Threads make it possible to divide a program into multiple tasks that run concurrently.

A thread is a single sequential stream of control within a process, executing independently of the process's other threads; for this reason it is often called a light-weight process. Threads achieve parallelism by splitting a process's work into independent paths of execution.

For instance, a browser runs each tab in its own thread, and a text editor uses separate threads for spell checking and formatting while other threads handle typing and saving, so that all of these activities appear to happen simultaneously.

Light-Weight Process (LWP)

What is Multithreading in OS?

Multithreading allows a program or an operating-system process to run two or more threads concurrently. When several users invoke the same program or system service, each request can be handled by a separate thread within a single process.

Multithreading is also described in terms of light-weight processes, and it is used to improve application performance: threads are cheaper to create and switch between than full processes, so they place less load on the system. Every thread belongs to exactly one process; a thread cannot exist outside a process.

Threads are widely used in network servers and web servers. On multiprocessor machines, threads sharing a common address space provide a good basis for executing parts of a program in parallel. Threading is used extensively today, from transaction processing to web browsing.

Multithreading divides the work into small, light-weight units so that the CPU load is distributed evenly and tasks complete quickly. Each thread has its own stack and register state, so it can make progress on its own, and more threads can be added to carry out more work in parallel.

What is Race Condition?

A race condition is a situation in which two or more processes or threads execute concurrently and the outcome depends on the order in which they happen to run. Because the precise moments at which events occur cannot be predicted, the result of execution can vary from run to run, which can lead to undesirable or erroneous behaviour of the system.
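As an illustration (not part of the original article), the following Python sketch uses the standard threading module to show a lost update. The sleep between the read and the write is artificial; it simply widens the window in which the second thread can interleave:

```python
import threading
import time

balance = 0

def deposit():
    """Read-modify-write without any locking: a classic race."""
    global balance
    current = balance        # read the shared value
    time.sleep(0.05)         # the other thread runs in this window
    balance = current + 1    # write back a now-stale result

t1 = threading.Thread(target=deposit)
t2 = threading.Thread(target=deposit)
t1.start(); t2.start()
t1.join(); t2.join()
# Both threads read 0, so both write 1: one deposit is lost.
```

In real code the interleaving window is microscopic, so the bug appears only intermittently, which is exactly what makes race conditions hard to debug.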

What is Process Synchronization?

Process synchronization is the coordination of concurrent processes or threads so that they interact with shared resources in a controlled order. For example, if the next stage of processing requires printing, the process sends a request to the operating system and then waits for a response indicating whether the print succeeded or failed.

As soon as printing completes, the operating system detects it and communicates the reply back to the waiting process. This exchange only works correctly if the two parties are synchronized.

What is Mutex (Mutual Exclusion)?

The term “mutual exclusion” (or mutex) stems from the notion of a “critical section”: a region of code that accesses a shared resource and must not be executed by more than one thread or process at a time. If a shared resource is left unprotected and two or more threads or processes use it at once, race conditions, data inconsistencies, and similar problems result; a mutex serialises access to the resource to prevent this.
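A minimal sketch of mutual exclusion in Python (an illustration, not from the article): threading.Lock acts as the mutex, and the with-block marks the critical section, so the four threads' increments never interleave mid-update:

```python
import threading

counter = 0
counter_lock = threading.Lock()        # the mutex guarding the shared counter

def increment(n):
    global counter
    for _ in range(n):
        with counter_lock:             # only one thread inside at a time
            counter += 1               # the critical section

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 4 * 10_000; without the lock, updates could be lost.
```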

What is Deadlock?

A deadlock occurs when two (or more) processes each hold a resource and wait for a resource held by the other, so that neither can ever proceed. For example, if process A holds resource R1 and waits for R2, while process B holds R2 and waits for R1, both are blocked forever.
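The circular-wait can be sketched in Python (an illustration, not from the article). Two threads take two locks in opposite orders; acquiring with a timeout turns the would-be permanent deadlock into a detectable failure, since at least one thread cannot get its second lock while the other still holds it:

```python
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()
outcome = {}

def worker1():
    with lock_a:                       # holds A, then wants B
        time.sleep(0.1)                # let worker2 grab B first
        outcome["w1_got_b"] = lock_b.acquire(timeout=0.2)
        if outcome["w1_got_b"]:
            lock_b.release()

def worker2():
    with lock_b:                       # holds B, then wants A
        time.sleep(0.1)
        outcome["w2_got_a"] = lock_a.acquire(timeout=0.2)
        if outcome["w2_got_a"]:
            lock_a.release()

t1 = threading.Thread(target=worker1)
t2 = threading.Thread(target=worker2)
t1.start(); t2.start()
t1.join(); t2.join()
# With plain acquire() (no timeout) both threads would block forever.
```

The standard prevention is to impose a global lock ordering: if every thread always acquires lock_a before lock_b, the circular wait can never form.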

Threading Issues in OS

  • System Call
  • Thread Cancellation
  • Signal Handling
  • Thread Pool
  • Thread Specific Data

Threading Issues

fork() and exec() System Calls

The fork() system call creates an identical copy of the process that invoked it. The duplicate is called the child process, and the invoking process is the parent. After the fork, both the parent and the child continue executing at the instruction following the call.

Now consider fork() in a multithreaded program. Suppose one thread of a multithreaded process calls fork(). The new process is a duplicate, but the question is: should the child contain all the threads of the parent, or only the single thread that made the fork() call?

Some UNIX systems therefore provide two versions of fork(): one that duplicates all the threads of the parent in the child, and one that duplicates only the thread that invoked fork(). Which version to use depends on the application.

The exec() system call replaces the entire process, including all of its threads, with the program specified in its parameters. Ordinarily, exec() is invoked shortly after fork().

If exec() is going to be called immediately after fork(), duplicating all of the parent's threads into the child is superfluous work, since exec() will overwrite the whole process with the program given in its arguments anyway.

In such cases, the version of fork() that replicates only the invoking thread is the right choice. If the child does not call exec(), however, it should be a full duplicate, threads and all.
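A small Unix-only Python sketch of the fork-then-exec pattern (an illustration, not from the article; note that POSIX fork(), and hence Python's os.fork(), duplicates only the calling thread into the child):

```python
import os
import sys

pid = os.fork()                        # duplicate the calling process (Unix only)
if pid == 0:
    # Child: exec() replaces the entire process image, threads and all,
    # with a fresh interpreter, so duplicating every parent thread
    # beforehand would have been wasted work.
    os.execvp(sys.executable, [sys.executable, "-c", "print('child ran')"])
else:
    # Parent: wait for the child and collect its exit status.
    _, status = os.waitpid(pid, 0)
    exitcode = os.waitstatus_to_exitcode(status)   # Python 3.9+
```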

Thread Cancellation

Thread cancellation is the act of terminating a thread before it has completed its work. An example makes this concrete: suppose several threads of a multithreaded program are concurrently searching a database for some piece of information. Once one thread returns with the desired result, the remaining threads can be cancelled.

The target thread is now the thread that we want to cancel. Thread cancellation can be done in two ways:

  • Asynchronous Cancellation: one thread immediately terminates the target thread, regardless of what the target is doing at that moment.
  • Deferred Cancellation: the target thread periodically checks a flag to see whether it should terminate, giving it the chance to cancel itself in an orderly fashion (or to decline).

The difficulty with cancelling the target thread arises in two situations:

What happens to the resources that have been allocated to a cancelled target thread?

What happens if the target thread is cancelled while it is in the middle of updating data it shares with other threads?

Asynchronous cancellation is problematic in exactly these cases: the cancelling thread terminates the target regardless of whether it holds resources or is mid-update, so resources may never be released and shared data may be left in an inconsistent state.

With deferred cancellation, by contrast, the target thread first receives the cancellation request and then checks its flag to decide whether it should cancel itself now or later. The points at which a thread can check the flag and terminate safely are called cancellation points.
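Deferred cancellation can be sketched in Python (an illustration, not from the article): threading.Event serves as the cancellation flag, and the top of the loop is the cancellation point where the worker may exit with its state still consistent:

```python
import threading
import time

cancel_requested = threading.Event()   # the shared cancellation flag
progress = []

def worker():
    for i in range(1000):
        if cancel_requested.is_set():  # cancellation point: safe to stop here
            return                     # exit voluntarily, state left consistent
        progress.append(i)
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)                       # let the worker make some progress
cancel_requested.set()                 # request cancellation; do not kill the thread
t.join()                               # the worker notices the flag and exits
```

Python deliberately offers no asynchronous kill for threads; the flag-checking pattern above is the idiomatic equivalent of deferred cancellation.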

Signal Handling

In a single-threaded application, a signal is simply delivered to the process. In a multithreaded program, however, the question arises: to which thread of the process should the signal be delivered?

A signal may be delivered to:

  • every thread of the process,
  • certain threads of the process, or
  • only the specific thread to which the signal applies.

Alternatively, one thread can be assigned the job of receiving all signals for the process.

How the signal is delivered to a thread depends on how the signal was generated. Signals fall into two classes: synchronous and asynchronous.

Synchronous signals are caused by the operations of the running process itself (for example, an illegal memory access or division by zero) and are delivered to that same process. Asynchronous signals are triggered by events external to the running process and are therefore received by the process asynchronously.

A synchronous signal should accordingly be delivered to the thread that caused it. For an asynchronous signal it is not always clear which thread of a multithreaded program should receive it; an asynchronous signal that tells a process to stop is typically delivered to all of the process's threads.

Most UNIX versions address the asynchronous-signal problem by letting each thread specify which signals it will accept and which it will block. Windows, on the other hand, has no direct notion of signals; it uses asynchronous procedure calls (APCs) as a rough equivalent of UNIX asynchronous signals.

In contrast with UNIX, where a thread specifies which signals it can or cannot receive, an APC is always delivered to one particular thread.
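CPython takes the "one designated thread" approach described above: Python-level signal handlers always run on the main thread, no matter which thread raised the signal. A small Unix-only sketch (an illustration, not from the article):

```python
import os
import signal
import threading
import time

received = []

def handler(signum, frame):
    # Record which signal arrived and whether we are on the main thread.
    received.append((signum, threading.current_thread() is threading.main_thread()))

signal.signal(signal.SIGUSR1, handler)    # handlers may only be set on the main thread

def worker():
    os.kill(os.getpid(), signal.SIGUSR1)  # raise the signal from a worker thread

t = threading.Thread(target=worker)
t.start()
t.join()
time.sleep(0.2)                           # give the interpreter time to deliver it
# The handler ran on the main thread, even though a worker raised the signal.
```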

Thread Pool

Consider a web server that creates a separate thread every time a request for a page arrives. This approach has two problems. First, if there is no limit on the number of active threads, a burst of requests will exhaust the available system resources, because a new thread is created for every single request.

Second, creating a fresh thread costs time. If the time spent creating and destroying a thread exceeds the time the thread actually spends servicing the request, CPU resources are being wasted.

A thread pool is the remedy for both problems. The idea is to create a fixed number of threads when the process starts; this collection of threads is the thread pool. The threads sit in the pool waiting for a request to service.


Thread Pool

Whenever an incoming request reaches the server, a thread is taken from the pool to handle it. Having performed its duty, the thread returns to the pool and awaits its next assignment.

If a request arrives and no thread in the pool is free, the server simply waits until one becomes available. This is better than spawning a new thread per request, and it also bounds the number of concurrent threads, which matters on machines that cannot support a large number of threads at once.
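In Python, the standard library provides this pattern directly via concurrent.futures.ThreadPoolExecutor (an illustration, not from the article). Here four pre-created threads service twenty "requests"; no thread is created per request, and requests queue when all workers are busy:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def handle_request(request_id):
    # Simulate servicing one request; report which pool thread handled it.
    return f"request {request_id} handled by {threading.current_thread().name}"

# A fixed pool of 4 worker threads services 20 requests.
with ThreadPoolExecutor(max_workers=4, thread_name_prefix="pool") as pool:
    results = list(pool.map(handle_request, range(20)))
```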

Thread Specific Data

The threads of a process ordinarily share the data of that process. The challenge arises when each thread in the process needs its own copy of certain data. Data that belongs uniquely to one particular thread is called thread-specific data.

For example, a transaction-processing system may service each transaction in its own thread. Each transaction is assigned a unique identifier, which lets the system distinguish that transaction from every other.

Because each transaction is handled by a separate thread, thread-specific data lets us associate each thread with its particular transaction and transaction ID. Thread libraries such as Win32, Pthreads, and Java all provide support for thread-specific data (TSD).
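Python exposes TSD through threading.local (an illustration, not from the article): each thread sees its own copy of the object's attributes. The barrier below forces all five threads to have written their "transaction ID" before any thread reads its own back, demonstrating that the copies are isolated:

```python
import threading

tsd = threading.local()                 # per-thread storage
barrier = threading.Barrier(5)
results = []
results_lock = threading.Lock()

def process_transaction(txn_id):
    tsd.txn_id = txn_id                 # write this thread's private copy
    barrier.wait()                      # every thread has written by now
    with results_lock:
        # Our copy is untouched by the other threads' writes.
        results.append(tsd.txn_id == txn_id)

threads = [threading.Thread(target=process_transaction, args=(i,))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```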

These, then, are the threading problems that arise in multithreaded programming environments, together with the ways in which they can be addressed.

In summary, multithreading is an integral part of modern programming, improving task concurrency and overall system performance. It comes, however, with its own set of issues: handling of the fork() and exec() system calls, thread cancellation, signal handling, thread pools, and thread-specific data.

FAQs on Threading Issues

1. What is a race condition in multithreaded programming?

A race condition is a situation in which two or more processes or threads execute concurrently and the result depends on the order in which they happen to run.

2. What is the use of a thread pool in multithreading?

A thread pool limits the number of threads in the system and reuses existing threads instead of creating a new one per request, preventing resource exhaustion and avoiding the overhead of constant thread creation.

3. What is thread-specific data, and why is it important in multithreading?

Thread-specific data gives each thread its own copy of certain data, keeping per-thread state isolated and allowing threads to operate independently of one another.


