This article shows how to lock critical sections in a program so that threads avoid race conditions, using the Lock object from the threading library to make mutable objects safe for use by multiple threads.
Code #1 :
import threading

class counter_share:
    def __init__(self, initial_key=0):
        self._key = initial_key
        self._key_lock = threading.Lock()

    def incr(self, delta=1):
        # The lock is held only for the duration of the update
        with self._key_lock:
            self._key += delta

    def decr(self, delta=1):
        with self._key_lock:
            self._key -= delta
Using a with statement along with the lock ensures mutual exclusion: at any one time, only one thread is allowed to execute the block of statements under the with statement.
The lock is acquired for the duration of the indented statements and is released when control flow exits the indented block. Thread scheduling is inherently nondeterministic, so failing to use locks can result in randomly corrupted data and race conditions. Locks should therefore always be used whenever shared mutable state is accessed by multiple threads.
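To see the lock in action, here is a minimal usage sketch of the counter class above; the thread and iteration counts are arbitrary choices for illustration:

```python
import threading

class counter_share:
    def __init__(self, initial_key=0):
        self._key = initial_key
        self._key_lock = threading.Lock()

    def incr(self, delta=1):
        # Only one thread at a time may execute this update
        with self._key_lock:
            self._key += delta

# Four threads each perform 1000 increments on the same counter.
counter = counter_share()
threads = [threading.Thread(target=lambda: [counter.incr() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter._key)  # 4000: no increments are lost
```

Without the lock, `self._key += delta` is a read-modify-write sequence that two threads can interleave, silently losing updates.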
In older Python code, it is common to see locks explicitly acquired and released.
Code #2 : Variant of code 1
import threading

class counter_share:
    def __init__(self, initial_key=0):
        self._key = initial_key
        self._key_lock = threading.Lock()

    def incr(self, delta=1):
        # Explicitly acquire and release the lock
        self._key_lock.acquire()
        self._key += delta
        self._key_lock.release()

    def decr(self, delta=1):
        self._key_lock.acquire()
        self._key -= delta
        self._key_lock.release()
- The with statement is much less error-prone than explicit acquire() and release() calls, which can go wrong when release() is never reached or when an exception is raised while the lock is held.
- To avoid deadlock, each thread in a program should acquire only one lock at a time. If that is not possible, introduce more advanced deadlock avoidance into the program, such as always acquiring multiple locks in a fixed global order.
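The fixed-order idea can be sketched as follows; the `Account` class and the choice of ordering locks by `id()` are illustrative assumptions, not part of the original code:

```python
import threading

class Account:
    """Hypothetical resource guarded by its own per-instance lock."""
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # Acquire both locks in a fixed global order (by object id here),
    # so two threads can never hold them in opposite orders and deadlock.
    first, second = sorted((src, dst), key=id)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount

a, b = Account(100), Account(100)
t1 = threading.Thread(target=lambda: [transfer(a, b, 1) for _ in range(1000)])
t2 = threading.Thread(target=lambda: [transfer(b, a, 1) for _ in range(1000)])
t1.start(); t2.start()
t1.join(); t2.join()
print(a.balance + b.balance)  # 200: the total is conserved, and no deadlock
```

If each thread instead acquired `src.lock` then `dst.lock` directly, the two threads could each grab one lock and wait forever for the other.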
- Synchronization primitives such as RLock and Semaphore objects are also found in the threading library.
Beyond simple mutual-exclusion locking, these primitives serve more specialized purposes:
- An RLock or re-entrant lock object is a lock that can be acquired multiple times by the same thread.
- It is primarily used to implement a synchronization construct known as a “monitor.” With this style of locking, only one thread at a time is allowed to use an entire function or the methods of a class while the lock is held.
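A small sketch of the re-entrancy property: the same thread may enter a second with block on the same RLock without blocking (with a plain Lock, the inner acquisition would deadlock). The function names here are made up for illustration:

```python
import threading

rlock = threading.RLock()
events = []

def outer():
    with rlock:          # first acquisition by this thread
        inner()

def inner():
    with rlock:          # re-acquired by the same thread: does not block
        events.append("inner reached")

outer()
print(events)  # ['inner reached']
```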
Code #3 : Implementing the SharedCounter class.
import threading

class SharedCounter:
    # A single class-level lock shared by all instances
    _lock = threading.RLock()

    def __init__(self, initial_key=0):
        self._key = initial_key

    def incr(self, delta=1):
        with SharedCounter._lock:
            self._key += delta

    def decr(self, delta=1):
        with SharedCounter._lock:
            # Safe: incr() re-acquires the RLock this thread already holds
            self.incr(-delta)
- Here the lock synchronizes the methods of the class, instead of being tied to per-instance mutable state.
- There is just a single class-level lock shared by all instances of the class in this variant.
- This ensures that only one thread at a time is using the methods of the class.
- Because the lock is re-entrant, it is safe for methods to call other methods that also acquire the lock while it is already held (for example, the decr() method, which calls incr()).
- If there are a large number of counters, this variant is much more memory-efficient. However, it may cause more lock contention in programs that use a large number of threads and make frequent counter updates.
A Semaphore object is a synchronization primitive based on a shared counter. If the counter is zero, progress is blocked until the counter is incremented by another thread. If the counter is nonzero, the with statement decrements the count and the thread is allowed to proceed; the counter is incremented again upon exit from the with block.
Rather than simple locking, Semaphore objects are more useful for applications involving signaling between threads or throttling. Although a semaphore can be used in the same way as a standard Lock, the added complexity of its implementation negatively impacts performance.
Code #4 : To limit the amount of concurrency in a part of code, use a semaphore.
from threading import Semaphore
import urllib.request

# At most five threads may be downloading at once
_fetch_url_sema = Semaphore(5)

def fetch_url(url):
    with _fetch_url_sema:
        return urllib.request.urlopen(url)
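The throttling behavior can be verified without any network access. This sketch, in the same spirit as Code #4, tracks how many worker threads are ever inside the semaphore-guarded section at once; the limit of 3 and the sleep duration are arbitrary assumptions for the demonstration:

```python
import threading
import time

_sema = threading.Semaphore(3)   # at most 3 workers in the guarded section
_state_lock = threading.Lock()   # protects the two counters below
active = 0
peak = 0

def worker():
    global active, peak
    with _sema:
        with _state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)         # simulate work inside the throttled section
        with _state_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 3)  # True: concurrency never exceeded the semaphore's count
```

Even with ten threads contending, the semaphore guarantees that no more than three are ever inside the with block simultaneously.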