
Thread Pool in C++

A thread pool in C++ is used to manage and efficiently reuse a group (or pool) of threads. Instead of creating and destroying a thread for every task, a thread pool maintains a set of pre-created threads that can be reused to execute many tasks concurrently. This approach minimizes the overhead of repeated thread creation and destruction and makes the application more efficient.

The C++ standard library does not provide a built-in thread pool, so we need to implement one ourselves according to our needs.



What is a Thread Pool?

A thread pool is a group of worker threads that are created when the program starts and kept in a pool for later use. The pool maintains these existing threads and assigns tasks to them as needed, which saves the cost of starting a new thread for every task while still executing several tasks concurrently.

Need of Thread Pool in C++

In C++, a thread pool is needed in the following cases:

  1. When an application has to execute a large number of short-lived tasks, so that creating a separate thread for each one would cost more than the work itself.
  2. When the number of threads must be kept under control so the system is not flooded with threads (for example, a server handling many client requests).
  3. When tasks arrive continuously and the same worker threads can be reused instead of being created and destroyed over and over.
  4. When responsiveness matters and a task should start on an already running thread rather than wait for a new thread to be created.

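For contrast, the following is a minimal sketch (not part of the implementation shown later in this article) of the one-thread-per-task pattern that a thread pool is meant to replace; every task pays the full cost of creating and destroying a std::thread.

// Sketch: one thread per task (the pattern a thread pool avoids)
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    std::vector<std::thread> workers;

    for (int i = 0; i < 5; ++i) {
        // A brand new OS thread is created for every task...
        workers.emplace_back([i] {
            std::cout << "Task " << i << " handled by a fresh thread\n";
        });
    }

    // ...and each thread is destroyed after running a single task
    for (auto& t : workers)
        t.join();

    return 0;
}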

Example of Thread Pool in C++

Below is a straightforward C++ implementation of a thread pool. It manages a pool of worker threads and a queue of tasks using std::thread, std::queue, std::mutex, and std::condition_variable. Each worker thread continually waits for tasks to be enqueued and executes them as they arrive.




// C++ Program to demonstrate thread pooling
  
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>
using namespace std;
  
// Class that represents a simple thread pool
class ThreadPool {
public:
    // Constructor to create a thread pool with the given
    // number of threads
    ThreadPool(size_t num_threads
               = thread::hardware_concurrency())
    {
  
        // Creating worker threads
        for (size_t i = 0; i < num_threads; ++i) {
            threads_.emplace_back([this] {
                while (true) {
                    function<void()> task;
                    // The reason for putting the below code
                    // here is to unlock the queue before
                    // executing the task so that other
                    // threads can perform enqueue tasks
                    {
                        // Locking the queue so that data
                        // can be shared safely
                        unique_lock<mutex> lock(
                            queue_mutex_);
  
                        // Waiting until there is a task to
                        // execute or the pool is stopped
                        cv_.wait(lock, [this] {
                            return !tasks_.empty() || stop_;
                        });
  
                        // exit the thread in case the pool
                        // is stopped and there are no tasks
                        if (stop_ && tasks_.empty()) {
                            return;
                        }
  
                        // Get the next task from the queue
                        task = move(tasks_.front());
                        tasks_.pop();
                    }
  
                    task();
                }
            });
        }
    }
  
    // Destructor to stop the thread pool
    ~ThreadPool()
    {
        {
            // Lock the queue to update the stop flag safely
            unique_lock<mutex> lock(queue_mutex_);
            stop_ = true;
        }
  
        // Notify all threads
        cv_.notify_all();
  
        // Joining all worker threads to ensure they have
        // completed their tasks
        for (auto& thread : threads_) {
            thread.join();
        }
    }
  
    // Enqueue task for execution by the thread pool
    void enqueue(function<void()> task)
    {
        {
            unique_lock<mutex> lock(queue_mutex_);
            tasks_.emplace(move(task));
        }
        cv_.notify_one();
    }
  
private:
    // Vector to store worker threads
    vector<thread> threads_;
  
    // Queue of tasks
    queue<function<void()> > tasks_;
  
    // Mutex to synchronize access to shared data
    mutex queue_mutex_;
  
    // Condition variable to signal changes in the state of
    // the tasks queue
    condition_variable cv_;
  
    // Flag to indicate whether the thread pool should stop
    // or not
    bool stop_ = false;
};
  
int main()
{
    // Create a thread pool with 4 threads
    ThreadPool pool(4);
  
    // Enqueue tasks for execution
    for (int i = 0; i < 5; ++i) {
        pool.enqueue([i] {
            cout << "Task " << i << " is running on thread "
                 << this_thread::get_id() << endl;
            // Simulate some work
            this_thread::sleep_for(
                chrono::milliseconds(100));
        });
    }
  
    return 0;
}

Output

Task 0 is running on thread 140178994148928
Task 1 is running on thread 140178985756224
Task 2 is running on thread 140179010934336
Task 3 is running on thread 140179002541632
Task 4 is running on thread 140178994148928

Explanation

In the above code, the thread pool is implemented as follows:

  1. The ThreadPool class manages a vector of worker threads, a task queue, a mutex for synchronization, a condition variable for signaling, and a boolean flag that indicates whether the pool should stop.
  2. The constructor creates the worker threads and puts each of them into an endless loop in which they wait for tasks to be enqueued. Each task is stored in a std::function<void()> wrapper.
  3. The enqueue method adds a task to the queue and notifies one of the worker threads to begin executing it.
  4. To guarantee a clean shutdown, the destructor sets the stop flag, notifies all threads, and then joins the worker threads.
  5. A ThreadPool with four threads is created in the main function. Five tasks are enqueued, each of which prints a message containing the task number and the ID of the thread that executes it.
  6. Note that main itself never waits for individual tasks; completion is guaranteed only because the pool's destructor joins the worker threads when the pool goes out of scope at the end of main.

Note: If you need the results of individual tasks, or want to continue only after specific tasks have finished, it is good practice to add explicit synchronization rather than relying on the pool's destructor, for example by waiting on a std::future as in the sketch below.
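
The following is a minimal sketch (not part of the original example) of one way to do that, assuming the ThreadPool class defined above is in scope. The task is wrapped in a std::packaged_task and its std::future is used to block until the result is ready.

// Sketch: waiting for a pooled task's result via std::future
// Assumes the ThreadPool class from the example above is available
// in this translation unit (along with its using namespace std;).
#include <future>
#include <iostream>
#include <memory>

int main()
{
    ThreadPool pool(2);

    // Wrap the work in a packaged_task so we can obtain a future for it
    auto task = make_shared<packaged_task<int()> >([] {
        return 6 * 7; // some work that produces a result
    });
    future<int> result = task->get_future();

    // The pool stores function<void()>, so enqueue a small wrapper
    // that simply invokes the packaged_task
    pool.enqueue([task] { (*task)(); });

    // Blocks until a worker thread has executed the task
    cout << "Result: " << result.get() << endl;

    return 0;
}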

Advantages of Thread Pooling in C++

The following are some main advantages of thread pooling in C++:

  1. Reduced overhead: Threads are created once and reused, so individual tasks do not pay the cost of thread creation and destruction.
  2. Better resource management: The number of threads is bounded by the pool size, which keeps the system from being overwhelmed.
  3. Improved responsiveness: Tasks begin executing on an already running worker thread instead of waiting for a new thread to start.
  4. Scalability: The pool size can be tuned to the hardware, for example by defaulting to thread::hardware_concurrency() as in the code above.

Disadvantages of Thread Pooling in C++

Thread pooling also has some limitations, which are mentioned below:

  1. No standard implementation: C++ does not ship a thread pool, so you have to write and maintain one yourself or rely on a third-party library.
  2. Synchronization overhead: Access to the shared task queue is serialized by a mutex, which can become a point of contention when tasks are enqueued frequently.
  3. Poor fit for long-blocking tasks: A task that blocks for a long time ties up a worker thread and can starve the other queued tasks.
  4. Harder debugging: Because tasks may run on any worker thread, reasoning about execution order and diagnosing issues is more difficult than with dedicated threads.

Conclusion

C++ thread pools offer an effective way to manage many tasks at once, with benefits in resource management, performance, and scalability. By reusing a fixed set of threads, thread pooling minimizes the overhead of thread creation and destruction and lets developers build high-performing, responsive applications in a methodical and controlled manner.

