
Concurrency Control in DBMS


Concurrency control is a fundamental concept in DBMS that allows several users or processes to access or manipulate data simultaneously without causing data inconsistency. Concurrency control deals with the interleaved execution of more than one transaction.

What is a Transaction?

A transaction is a collection of operations that performs a single logical function in a database application. Each transaction is a unit of both atomicity and consistency. Thus, we require that transactions do not violate any database consistency constraints: if the database was consistent when a transaction started, it must be consistent when the transaction successfully terminates. During the execution of a transaction, however, it may be necessary to allow a temporary inconsistency; in a fund transfer, for example, either the debit of account A or the credit of account B must be done before the other. This temporary inconsistency, although necessary, may lead to difficulty if a failure occurs.

It is the programmer’s responsibility to define properly the various transactions, so that each preserves the consistency of the database. For example, the transaction to transfer funds from the account of department A to the account of department B could be defined to be composed of two separate programs: one that debits account A, and another that credits account B. The execution of these two programs one after the other will indeed preserve consistency. However, each program by itself does not transform the database from a consistent state to a new consistent state. Thus, those programs are not transactions.

The concept of a transaction has been applied broadly in database systems and applications. While the initial use of transactions was in financial applications, the concept is now used in real-time applications in telecommunication, as well as in the management of long-duration activities such as product design or administrative workflows.

A set of logically related operations is known as a transaction. The main operations of a transaction are:

  • Read(A): The read operation Read(A), or R(A), reads the value of A from the database and stores it in a buffer in main memory.
  • Write(A): The write operation Write(A), or W(A), writes the value back to the database from the buffer.

(Note: a write does not always reach the database immediately; the change may remain only in the buffer. This is why the dirty read problem can arise.)

Let us take a debit transaction from an account that consists of the following operations:

  1. R(A);
  2. A=A-1000;
  3. W(A);

Assume A’s value before starting the transaction is 5000.

  • The first operation reads the value of A from the database and stores it in a buffer.
  • The second operation decreases its value by 1000, so the buffer will contain 4000.
  • The third operation writes the value from the buffer to the database, so A’s final value will be 4000.
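The three steps above can be sketched in a few lines of Python. The in-memory `database` and `buffer` dictionaries are purely illustrative, standing in for the DBMS's disk storage and main-memory buffer:

```python
# Hypothetical in-memory "database" and main-memory "buffer".
database = {"A": 5000}
buffer = {}

def read(item):
    # R(A): copy the value from the database into the buffer.
    buffer[item] = database[item]

def write(item):
    # W(A): copy the (possibly modified) value from the buffer back to the database.
    database[item] = buffer[item]

read("A")            # step 1: buffer now holds A = 5000
buffer["A"] -= 1000  # step 2: A = A - 1000 happens only in the buffer -> 4000
write("A")           # step 3: the database now stores A = 4000
print(database["A"]) # 4000
```

Note that between steps 2 and 3 the database still holds 5000 while the buffer holds 4000, which is exactly the window in which a failure causes trouble.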

But it is also possible that the transaction fails after executing some of its operations. The failure can be caused by hardware, software, or power failure, etc. For example, if the debit transaction discussed above fails after executing operation 2, the value of A will remain 5000 in the database, which is not acceptable to the bank. To avoid this, databases provide two important operations:

  • Commit: After all instructions of a transaction are successfully executed, the changes made by a transaction are made permanent in the database.
  • Rollback: If a transaction is not able to execute all operations successfully, all the changes made by a transaction are undone.
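Commit and rollback can be demonstrated with Python's built-in sqlite3 module. The account table and the simulated mid-transaction failure below are illustrative assumptions, not part of the original example:

```python
import sqlite3

# Hypothetical account table; sqlite3 is used only to illustrate commit/rollback.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES ('A', 5000)")
con.commit()

try:
    con.execute("UPDATE account SET balance = balance - 1000 WHERE name = 'A'")
    raise RuntimeError("power failure")   # simulate a crash mid-transaction
    con.commit()                          # never reached
except RuntimeError:
    con.rollback()                        # undo the partial debit

balance = con.execute("SELECT balance FROM account WHERE name = 'A'").fetchone()[0]
print(balance)  # 5000 -- the failed transaction left no trace
```

Because the failure happened before commit, the rollback restores the balance to 5000, exactly as the bank requires.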

For more details, please refer to the Transaction Control in DBMS article.

Properties of a Transaction

Atomicity: As a transaction is a set of logically related operations, either all of them are executed or none. The debit transaction discussed above should either execute all three operations or none. If the debit transaction fails after executing operations 1 and 2, its new value of 4000 will not be written to the database, and allowing such a partial execution to stand would leave the database inconsistent.

Consistency: If operations of debit and credit transactions on the same account are executed concurrently, it may leave the database in an inconsistent state.

  • For Example, with T1 (debit of Rs. 1000 from A) and T2 (credit of 500 to A) executing concurrently, the database reaches an inconsistent state.
  • Let us assume the Account balance of A is Rs. 5000. T1 reads A(5000) and stores the value in its local buffer space. Then T2 reads A(5000) and also stores the value in its local buffer space.
  • T1 performs A=A-1000 (5000-1000=4000) and 4000 is stored in T1 buffer space. Then T2 performs A=A+500 (5000+500=5500) and 5500 is stored in the T2 buffer space. T1 writes the value from its buffer back to the database.
  • A’s value is updated to 4000 in the database and then T2 writes the value from its buffer back to the database. A’s value is updated to 5500 which shows that the effect of the debit transaction is lost and the database has become inconsistent.
  • To maintain consistency of the database, we need concurrency control protocols which will be discussed in the next article.  The operations of T1 and T2 with their buffers and database have been shown in Table 1.
T1          T1's Buffer    T2          T2's Buffer    Database
                                                      A=5000
R(A);       A=5000                                    A=5000
            A=5000         R(A);       A=5000         A=5000
A=A-1000;   A=4000                     A=5000         A=5000
            A=4000         A=A+500;    A=5500         A=5000
W(A);       A=4000                     A=5500         A=4000
            A=4000         W(A);       A=5500         A=5500
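The interleaving in Table 1 can be replayed as straight-line Python. The single-item database is hypothetical, and each transaction's private buffer is modeled as a local variable:

```python
# Replaying Table 1: T1 debits 1000, T2 credits 500, with interleaved operations.
database = {"A": 5000}

t1_buf = database["A"]   # T1: R(A)  -> 5000
t2_buf = database["A"]   # T2: R(A)  -> 5000 (reads before T1 writes back)
t1_buf -= 1000           # T1: A = A - 1000 -> 4000 in T1's buffer
t2_buf += 500            # T2: A = A + 500  -> 5500 in T2's buffer
database["A"] = t1_buf   # T1: W(A) -> database holds 4000
database["A"] = t2_buf   # T2: W(A) -> database holds 5500; the debit is lost

print(database["A"])     # 5500, not the consistent 4500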

Isolation: The result of a transaction should not be visible to others before the transaction is committed. For example, let us assume that A’s balance is Rs. 5000 and T1 debits Rs. 1000 from A. A’s new balance will be 4000. If T2 credits Rs. 500 to A’s new balance, A will become 4500, and after this T1 fails. Then we have to roll back T2 as well, because it used the value produced by T1. So the results of a transaction are not made visible to other transactions before it commits.

Durability: Once the database has committed a transaction, the changes made by the transaction should be permanent, even in the event of a system failure. e.g.; if a person has credited $500000 to his account, the bank can’t say that the update has been lost. To avoid this problem, multiple copies of the database are stored at different locations.

What is a Schedule? 

A schedule is a sequence of operations from one or more transactions. A schedule can be of two types:

Serial Schedule: When one transaction completely executes before another transaction starts, the schedule is called a serial schedule. A serial schedule is always consistent. e.g.; if a schedule S has debit transaction T1 and credit transaction T2, the possible serial schedules are T1 followed by T2 (T1->T2) or T2 followed by T1 (T2->T1). A serial schedule has low throughput and poor resource utilization.

Concurrent Schedule: When operations of a transaction are interleaved with operations of other transactions, the schedule is called a concurrent schedule. e.g.; the schedule of the debit and credit transactions shown in Table 1 is concurrent. But concurrency can lead to inconsistency in the database, as the example in Table 1 shows.

Difference between Serial Schedule and Serializable Schedule

                           Serial Schedule                             Serializable Schedule
In a serial schedule, transactions are executed one after the other. In a serializable schedule, transactions are executed concurrently.
Serial schedules are less efficient. Serializable schedules are more efficient.
In a serial schedule, only one transaction executes at a time. In a serializable schedule, multiple transactions can execute at a time.
A serial schedule takes more time for execution. In a serializable schedule, execution is faster.
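The standard test for whether a concurrent schedule is serializable is the precedence (conflict) graph: the schedule is conflict-serializable iff the graph has no cycle. A minimal sketch, with the hypothetical encoding of a schedule as a list of (transaction, operation, item) tuples:

```python
# A conflict is a pair of operations from different transactions on the same
# item where at least one is a write; the edge points from the earlier one.
def conflict_edges(schedule):
    edges = set()
    for i, (ti, oi, xi) in enumerate(schedule):
        for tj, oj, xj in schedule[i + 1:]:
            if ti != tj and xi == xj and "W" in (oi, oj):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    # Tiny DFS cycle check over the precedence graph.
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    def visit(node, stack):
        if node in stack:
            return True
        return any(visit(n, stack | {node}) for n in graph.get(node, ()))
    return any(visit(n, set()) for n in graph)

# Table 1's interleaving: both transactions read A, then both write A.
s = [("T1", "R", "A"), ("T2", "R", "A"), ("T1", "W", "A"), ("T2", "W", "A")]
print(has_cycle(conflict_edges(s)))   # True -- not conflict-serializable
```

The interleaving of Table 1 produces edges in both directions between T1 and T2, so the graph has a cycle and the schedule is not conflict-serializable.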

Concurrency Control in DBMS

  • Executing a single transaction at a time increases the waiting time of the other transactions, which may delay the overall execution. Hence, to increase the overall throughput and efficiency of the system, several transactions are executed concurrently.
  • Concurrency control is a fundamental concept in DBMS that allows several users or processes to access or manipulate data simultaneously without causing data inconsistency.
  • Concurrency control provides a procedure that is able to control concurrent execution of the operations in the database. 
  • The fundamental goal of database concurrency control is to ensure that concurrent execution of transactions does not result in a loss of database consistency. The concept of serializability can be used to achieve this goal, since all serializable schedules preserve consistency of the database. However, not all schedules that preserve consistency of the database are serializable.
  • In general it is not possible to perform an automatic analysis of low-level operations by transactions and check their effect on database consistency constraints. However, there are simpler techniques. One is to use the database consistency constraints as the basis for a split of the database into subdatabases on which concurrency can be managed separately.
  • Another is to treat some operations besides read and write as fundamental low-level operations and to extend concurrency control to deal with them.

Concurrency Control Problems

Several problems arise when numerous transactions are executed simultaneously in a random manner. A database transaction consists of two major operations, “Read” and “Write”. It is very important to manage these operations in the concurrent execution of transactions in order to maintain the consistency of the data.

Dirty Read Problem (Write-Read Conflict)

The dirty read problem occurs when one transaction updates an item and then fails due to some unexpected event, but before that transaction performs a rollback, some other transaction reads the updated value. This creates an inconsistency in the database. The dirty read problem falls under the scenario of a Write-Read conflict between transactions in the database.

  1. The dirty read problem can be illustrated with the below scenario between two transactions T1 and T2.
  2. Transaction T1 modifies a database record without committing the changes.
  3. T2 reads the uncommitted data changed by T1
  4. T1 performs rollback
  5. T2 has already read the uncommitted data of T1 which is no longer valid, thus creating inconsistency in the database.
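The five steps above can be traced deterministically in Python. The single-item database is hypothetical, and T1's rollback is modeled by restoring the old value:

```python
# Step-by-step dirty read: T2 reads T1's uncommitted write, then T1 rolls back.
database = {"A": 5000}

t1_old = database["A"]
database["A"] = t1_old - 1000   # step 2: T1 writes 4000 but has not committed
t2_read = database["A"]         # step 3: T2 reads the uncommitted 4000 (dirty read)
database["A"] = t1_old          # step 4: T1 fails and rolls back to 5000

print(t2_read, database["A"])   # 4000 5000
```

After the rollback, T2 is holding a value (4000) that never officially existed in the database, so any decision T2 makes from it is inconsistent.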

Lost Update Problem

The lost update problem occurs when two or more transactions modify the same data, resulting in one update being overwritten or lost by another transaction. The lost update problem can be illustrated with the below scenario between two transactions T1 and T2.

  1. T1 reads the value of an item from the database.
  2. T2 starts and reads the same database item.
  3. T1 updates the  value of that data and performs a commit.
  4. T2  updates the same data item based on its initial read and performs commit.
  5. This results in the modification made by T1 being lost to T2’s write, which causes the lost update problem in the database.

Concurrency Control Protocols

Concurrency control protocols are the sets of rules maintained in order to solve the concurrency control problems in the database. They ensure that concurrent transactions can execute properly while maintaining database consistency. Via the concurrency control protocols, the concurrent execution of transactions is provided with atomicity, consistency, isolation, durability, and serializability.

  • Lock-based concurrency control protocol
  • Timestamp-based concurrency control protocol

Lock-based Protocol

In a lock-based protocol, each transaction must acquire locks before it starts accessing or modifying data items. There are two types of locks used in databases.

  • Shared Lock: A shared lock, also known as a read lock, allows multiple transactions to read the data simultaneously. A transaction holding a shared lock can only read the data item; it cannot modify it.
  • Exclusive Lock: An exclusive lock, also known as a write lock, allows a transaction to update a data item. Only one transaction can hold the exclusive lock on a data item at a time. While a transaction holds an exclusive lock on a data item, no other transaction is allowed to acquire a shared or exclusive lock on the same data item.
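The shared/exclusive compatibility rule can be sketched as a tiny lock table. The API below is hypothetical; a real lock manager would also queue and block waiters rather than simply refusing the request:

```python
# Minimal lock table enforcing shared/exclusive compatibility.
class LockTable:
    def __init__(self):
        self.locks = {}   # item -> (mode, set of holders), mode in {"S", "X"}

    def acquire(self, txn, item, mode):
        held = self.locks.get(item)
        if held is None:
            self.locks[item] = (mode, {txn})   # free item: grant any mode
            return True
        held_mode, holders = held
        if mode == "S" and held_mode == "S":   # S is compatible with S
            holders.add(txn)
            return True
        return False                           # any X involvement conflicts

lt = LockTable()
print(lt.acquire("T1", "A", "S"))   # True  -- first shared lock
print(lt.acquire("T2", "A", "S"))   # True  -- shared locks coexist
print(lt.acquire("T3", "A", "X"))   # False -- exclusive conflicts with readers
```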

There are two kinds of lock-based protocols mostly used in databases:

  • Two-Phase Locking Protocol: Two-phase locking (2PL) is a widely used technique that ensures a strict ordering of lock acquisition and release. It works in two phases.
    • Growing Phase: In this phase, the transaction acquires the locks it needs before performing any modification on the data items. During this phase it may acquire locks but may not release any.
    • Shrinking Phase: In this phase, the transaction releases its acquired locks once it has performed its modifications on the data items. Once the transaction starts releasing locks, it cannot acquire any further locks.
  • Strict Two-Phase Locking Protocol: It is almost the same as the two-phase locking protocol; the only difference is that in two-phase locking a transaction can release its locks before it commits, but in strict two-phase locking a transaction is allowed to release its exclusive locks only when it commits.
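The two-phase discipline is easy to check mechanically: every lock acquisition must precede every release. A minimal sketch, with a hypothetical encoding of a transaction's lock/unlock history as a list of (operation, item) pairs:

```python
# Check that a transaction's lock/unlock sequence obeys two-phase locking:
# all acquisitions (growing phase) must come before all releases (shrinking).
def obeys_two_phase_locking(ops):
    shrinking = False
    for op, _item in ops:
        if op == "unlock":
            shrinking = True
        elif shrinking:          # acquiring a lock after any unlock violates 2PL
            return False
    return True

print(obeys_two_phase_locking([("lock", "A"), ("lock", "B"),
                               ("unlock", "A"), ("unlock", "B")]))   # True
print(obeys_two_phase_locking([("lock", "A"), ("unlock", "A"),
                               ("lock", "B")]))                      # False
```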

Timestamp-based Protocol

  • In this protocol, each transaction has a timestamp attached to it. The timestamp is simply the time at which the transaction enters the system.
  • Conflicting pairs of operations are resolved by the timestamp ordering protocol using the timestamp values of the transactions, thereby guaranteeing that conflicting operations take place in timestamp order.
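A minimal sketch of one timestamp-ordering rule, the read check: a transaction may read an item only if no younger transaction has already written it. The function name and parameters below are illustrative, not a standard API:

```python
# Basic timestamp-ordering test for a read operation.
def can_read(txn_ts, write_ts):
    # txn_ts: timestamp of the transaction wanting to read.
    # write_ts: timestamp of the last transaction that wrote the item.
    # Reject the read (and roll back the reader) if the item was written
    # by a transaction with a larger, i.e. younger, timestamp.
    return txn_ts >= write_ts

print(can_read(txn_ts=10, write_ts=5))   # True  -- last writer is older
print(can_read(txn_ts=3, write_ts=5))    # False -- the reader must be rolled back
```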

Advantages of Concurrency

In general, concurrency means that more than one transaction can work on a system simultaneously. The advantages of a concurrent system are:

  • Waiting Time: The time a transaction spends in the ready state before the system executes it is called waiting time. Concurrency leads to less waiting time.
  • Response Time: The time taken to get the first response from the CPU is called response time. Concurrency leads to less response time.
  • Resource Utilization: The extent to which system resources are kept busy is called resource utilization. Multiple transactions can run in parallel in a system, so concurrency leads to higher resource utilization.
  • Efficiency: The amount of output produced in comparison to the given input is called efficiency. Concurrency leads to more efficiency.

Disadvantages of Concurrency 

  • Overhead: Implementing concurrency control requires additional overhead, such as acquiring and releasing locks on database objects. This overhead can lead to slower performance and increased resource consumption, particularly in systems with high levels of concurrency.
  • Deadlocks: Deadlocks can occur when two or more transactions are waiting for each other to release resources, causing a circular dependency that can prevent any of the transactions from completing. Deadlocks can be difficult to detect and resolve, and can result in reduced throughput and increased latency.
  • Reduced concurrency: Concurrency control can limit the number of users or applications that can access the database simultaneously. This can lead to reduced concurrency and slower performance in systems with high levels of concurrency.
  • Complexity: Implementing concurrency control can be complex, particularly in distributed systems or in systems with complex transactional logic. This complexity can lead to increased development and maintenance costs.
  • Inconsistency: In some cases, concurrency control can lead to inconsistencies in the database. For example, a transaction that is rolled back may leave the database in an inconsistent state, or a long-running transaction may cause other transactions to wait for extended periods, leading to data staleness and reduced accuracy.

Conclusion

Concurrency control ensures transaction atomicity, isolation, consistency, and serializability. Concurrency control issues occur when many transactions execute in a random, interleaved manner. A dirty read happens when a transaction reads data changed by an uncommitted transaction. The lost update problem occurs when two transactions update the same data simultaneously and one update is overwritten. Lock-based protocols prevent incorrect read/write operations, while timestamp-based protocols order transactions by their timestamps.

Important GATE Question

Question 1: Consider the following transaction involving two bank accounts x and y:

read(x);
x := x - 50;
write(x);
read(y);
y := y + 50;
write(y);

The constraint that the sum of the accounts x and y should remain constant is that of:

(A) Atomicity
(B) Consistency
(C) Isolation
(D) Durability

[GATE 2015]

Solution: 

As discussed in the properties of transactions, the consistency property says that the sum of accounts x and y should remain constant before starting and after the completion of a transaction. So, the correct answer is (B).

Article contributed by Sonal Tuteja.  



Last Updated : 12 Mar, 2024