
Eventual Consistency in Distributed Systems

Consistency in a distributed system refers to the property that the data stored on different nodes of the system always agrees: in other words, all nodes have the same view of the data at all times.



What is the Importance of Data Consistency?

Data consistency is crucial for ensuring that all users and systems have access to the same, up-to-date information. It helps prevent errors, confusion, and conflicts that can arise from inconsistent data. Consistent data also ensures that business processes run smoothly and that decisions are based on accurate and reliable information.



What is Eventual Consistency?

Eventual consistency is a consistency model used in distributed systems where, after some time with no updates, all data replicas will eventually converge to a consistent state. This model allows for replicas of data to be inconsistent for a short period, enabling high availability and partition tolerance.

Characteristics of Eventual Consistency

The characteristics of eventual consistency include:

• Eventual convergence: if no new updates are made to a data item, all replicas eventually hold the same value.
• Relaxed read guarantees: a read may return stale data while updates are still propagating.
• High availability: any replica can accept reads and writes without waiting for global coordination.
• Conflict handling: concurrent updates to the same data may need to be reconciled, for example with last-write-wins or merge logic.

Real-Life Example of Eventual Consistency

Imagine you add an item to your shopping cart and then quickly check the cart to see if the item is there. Because the cart data is replicated across several servers, there can be a delay before every server has the latest cart contents, so the item may briefly appear to be missing from the cart you read back. Once the update has propagated, every server shows the same cart.

Note: Eventual consistency is often used in systems that prioritize availability and partition tolerance over strict consistency.
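
A minimal sketch of this scenario is shown below, assuming just two simulated servers (plain C++ containers standing in for real cart services, not how any particular shopping-cart system is built): the item is written to server A first, an immediate read from server B can miss it, and after a short replication delay both servers agree.

#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

using namespace std;

// Two simulated servers, each holding its own copy of the cart
vector<string> cartA, cartB;
mutex mtxA, mtxB;

// Add an item on server A, then replicate it to server B after a delay
void addToCart(const string& item) {
    {
        lock_guard<mutex> lock(mtxA);
        cartA.push_back(item);  // the update is visible on server A immediately
    }
    this_thread::sleep_for(chrono::milliseconds(200));  // replication delay
    lock_guard<mutex> lock(mtxB);
    cartB.push_back(item);      // server B eventually converges to the same state
}

// Read the number of items currently visible on server B
size_t cartSizeOnB() {
    lock_guard<mutex> lock(mtxB);
    return cartB.size();
}

int main() {
    thread writer(addToCart, "book");

    // An immediate read from server B may miss the new item (stale read)
    this_thread::sleep_for(chrono::milliseconds(50));
    cout << "Items on server B right away: " << cartSizeOnB() << endl;

    // After replication completes, server B shows the item as well
    writer.join();
    cout << "Items on server B after replication: " << cartSizeOnB() << endl;
    return 0;
}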

How does Eventual Consistency Work?

  1. Write: A client sends a write request to a single replica (server node).
  2. Local Update: The replica immediately commits the update locally, making it accessible to local reads.
  3. Replication: The updated data is then sent asynchronously to other replicas through a chosen mechanism, like:
    • Message Queue: The update is pushed onto a queue, and different replicas pull and apply updates at their own pace.
    • Replication Protocol: A specific protocol dictates how updates are exchanged and applied, ensuring correctness and avoiding conflicts.
    • Gossip Protocol: Replicas periodically exchange information about their data, eventually converging to a consistent state.
  4. Inconsistency Window: During replication, different replicas might hold different versions of the data, creating an “inconsistency window.” This window varies based on factors like:
    • Network Latency: How long it takes messages to travel between replicas.
    • Replication Frequency: How often updates are sent and received.
    • Workload: The overall load on the system can impact replication speed.
  5. Convergence: Eventually, all replicas receive and apply the update, closing the inconsistency window and achieving consistency (a minimal simulation of this flow is sketched below).
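
The five steps above can be simulated in a few lines of C++. The Replica struct, the hard-coded key, and the sleep-based replication thread below are illustrative assumptions rather than a real replication protocol; the point is only to show the local commit, the inconsistency window, and the eventual convergence.

#include <array>
#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>

using namespace std;

// One replica: a key-value map guarded by its own mutex
struct Replica {
    unordered_map<string, string> data;
    mutex mtx;

    void apply(const string& key, const string& value) {
        lock_guard<mutex> lock(mtx);
        data[key] = value;
    }

    string read(const string& key) {
        lock_guard<mutex> lock(mtx);
        auto it = data.find(key);
        return it != data.end() ? it->second : "<missing>";
    }
};

int main() {
    array<Replica, 3> replicas;

    // Steps 1-2: the client writes to replica 0, which commits locally right away
    replicas[0].apply("user:42", "logged_in");

    // Step 3: a background thread propagates the update to the other replicas,
    // using sleeps as a stand-in for a message queue or gossip round
    thread replicator([&replicas]() {
        for (size_t i = 1; i < replicas.size(); ++i) {
            this_thread::sleep_for(chrono::milliseconds(100));
            replicas[i].apply("user:42", "logged_in");
        }
    });

    // Step 4: during the inconsistency window, replicas may disagree
    cout << "During replication:" << endl;
    for (size_t i = 0; i < replicas.size(); ++i)
        cout << "  replica " << i << ": " << replicas[i].read("user:42") << endl;

    // Step 5: once replication finishes, every replica holds the same value
    replicator.join();
    cout << "After convergence:" << endl;
    for (size_t i = 0; i < replicas.size(); ++i)
        cout << "  replica " << i << ": " << replicas[i].read("user:42") << endl;

    return 0;
}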

Use-Cases of Eventual Consistency

Below are the use cases of Eventual Consistency:

• Social media feeds and timelines, where a short delay before a new post appears everywhere is acceptable.
• Shopping carts and product catalogs in large e-commerce systems.
• DNS, where record changes propagate gradually across resolvers worldwide.
• NoSQL data stores such as Amazon DynamoDB and Apache Cassandra, which offer eventually consistent reads by default.

Impact of Eventual Consistency on System Performance, Scalability, and Availability

Eventual consistency can have both positive and negative impacts on system performance, scalability, and availability:

Positive Impact

• Lower write latency, because a write can complete after updating a single replica.
• Better scalability, since replicas can serve reads and writes independently and throughput grows with the number of nodes.
• Higher availability, as the system keeps accepting requests even when some replicas are slow, unreachable, or partitioned.

Negative Impact

• Stale reads: clients may observe outdated data during the inconsistency window.
• More complex application logic, because conflicts between concurrent updates must be detected and resolved.
• Harder reasoning, testing, and debugging, since the order in which replicas apply updates is not guaranteed.

Differences between Eventual Consistency and Strong Consistency

Below are the differences between Eventual and Strong Consistency:

What it means
• Eventual Consistency: Data becomes consistent eventually, but there may be a short delay before all copies are updated.
• Strong Consistency: Data is always consistent, with updates immediately visible everywhere.

When it is used
• Eventual Consistency: Good for things like social media feeds or shopping carts, where a slight delay in consistency is acceptable.
• Strong Consistency: Needed for critical operations like bank transactions, where all copies must agree at all times.

How it works
• Eventual Consistency: Updates are applied without waiting for all copies to be updated immediately.
• Strong Consistency: All copies must be updated at the same time, which takes longer and requires more coordination.

Dealing with conflicts
• Eventual Consistency: May need to resolve conflicts when different copies are updated at the same time.
• Strong Consistency: Conflicts are rare because all copies are updated together.

Use cases
• Eventual Consistency: Suitable where immediate consistency is not critical, such as social media feeds and shopping carts.
• Strong Consistency: Required where strict consistency matters, such as financial transactions.

Performance impact
• Eventual Consistency: Generally better performance and scalability due to relaxed consistency requirements.
• Strong Consistency: Higher latency and overhead due to coordination and synchronization requirements.
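
To make the latency trade-off concrete, the sketch below (an assumption for illustration, not a real database API) contrasts the two write paths: strongWrite() updates every copy before returning, while eventualWrite() returns after updating one copy and leaves the rest to a background thread.

#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>

using namespace std;

// Three copies of the data, each guarded by its own lock
const int NUM_COPIES = 3;
unordered_map<string, string> copies[NUM_COPIES];
mutex locks[NUM_COPIES];

// Strong consistency: the write does not return until every copy is updated
void strongWrite(const string& key, const string& value) {
    for (int i = 0; i < NUM_COPIES; ++i) {
        this_thread::sleep_for(chrono::milliseconds(100));  // per-copy coordination cost
        lock_guard<mutex> lock(locks[i]);
        copies[i][key] = value;
    }
}

// Eventual consistency: the write returns after updating one copy;
// the remaining copies are brought up to date by a background thread
thread eventualWrite(const string& key, const string& value) {
    {
        lock_guard<mutex> lock(locks[0]);
        copies[0][key] = value;
    }
    return thread([key, value]() {
        for (int i = 1; i < NUM_COPIES; ++i) {
            this_thread::sleep_for(chrono::milliseconds(100));
            lock_guard<mutex> lock(locks[i]);
            copies[i][key] = value;
        }
    });
}

int main() {
    auto t0 = chrono::steady_clock::now();
    strongWrite("status", "online");
    auto strongMs = chrono::duration_cast<chrono::milliseconds>(
                        chrono::steady_clock::now() - t0).count();

    auto t1 = chrono::steady_clock::now();
    thread replicator = eventualWrite("status", "away");
    auto eventualMs = chrono::duration_cast<chrono::milliseconds>(
                          chrono::steady_clock::now() - t1).count();

    cout << "Strong write latency:   ~" << strongMs << " ms" << endl;    // roughly 300 ms
    cout << "Eventual write latency: ~" << eventualMs << " ms" << endl;  // close to 0 ms

    replicator.join();  // afterwards all copies hold "away"
    return 0;
}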

Implementation of Eventual Consistency

Imagine you have a system where you want to store some information (like names and ages) but this system is split across many computers (nodes) to handle a lot of users. Each node has a copy of this information, and they need to stay in sync (consistent) with each other.

With eventual consistency, we’re okay with the information being slightly different on each node for a short time, as long as eventually (after some time), they all have the same correct information.

Below is a simplified C++ program that simulates eventual consistency with delayed, concurrent updates to a shared key-value store:

#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>
#include <thread>
#include <mutex>
#include <chrono>
 
using namespace std;
 
// Define a simple key-value store
unordered_map<string, string> kvStore;
 
// Mutex for thread safety
mutex mtx;
 
// Function to update a key-value pair; the new value becomes visible only
// after a delay, mimicking the time an update takes to reach a replica
void updateKV(const string& key, const string& value) {
    // Simulate replication/network delay before the write is applied
    this_thread::sleep_for(chrono::milliseconds(100));
     
    // Acquire lock to update the key-value store
    lock_guard<mutex> lock(mtx);
    kvStore[key] = value;
}
 
// Function to retrieve a value for a given key; a read that runs before a
// delayed write has been applied will observe the old value
string getKV(const string& key) {
    // Simulate a shorter delay on the read path
    this_thread::sleep_for(chrono::milliseconds(50));
     
    // Acquire lock to read from the key-value store
    lock_guard<mutex> lock(mtx);
    if (kvStore.find(key) != kvStore.end()) {
        return kvStore[key];
    }
    return "";
}
 
int main() {
    // Initialize the key-value store
    kvStore["name"] = "Alice";
    kvStore["age"] = "30";
     
    // Simulate concurrent updates
    vector<thread> threads;
    threads.emplace_back(updateKV, "name", "Bob");
    threads.emplace_back(updateKV, "age", "35");
 
    // Simulate concurrent reads
    threads.emplace_back([]() { cout << "Name: " << getKV("name") << endl; });
    threads.emplace_back([]() { cout << "Age: " << getKV("age") << endl; });
 
    // Wait for all threads to finish
    for (auto& t : threads) {
        t.join();
    }
 
    // Output the final state of the key-value store
    cout << "Final state:" << endl;
    for (auto it = kvStore.begin(); it != kvStore.end(); ++it) {
        cout << it->first << ": " << it->second << endl;
    }
 
    return 0;
}

Below is the explanation of the above code:

• kvStore is a shared unordered_map acting as the key-value store, and mtx guards every access so that concurrent threads do not corrupt it.
• updateKV() sleeps for 100 ms before committing the new value under the lock, simulating the delay with which an update propagates.
• getKV() sleeps for only 50 ms before reading under the lock, so the reader threads are likely to run before the delayed writes complete and may print the old values ("Alice", "30"): this is the inconsistency window.
• main() starts writer and reader threads concurrently, joins them, and finally prints the converged state ("Bob", "35"), mirroring how all replicas eventually agree once updates have propagated.

Benefits of Eventual Consistency

Below are the benefits of Eventual Consistency:

• High availability: any replica can accept reads and writes, even during network partitions.
• Low latency: writes complete locally without waiting for cross-replica coordination.
• Horizontal scalability: adding replicas increases capacity without a global synchronization bottleneck.
• Fault tolerance: the failure of a single replica does not block the rest of the system.

Challenges of Eventual Consistency

Below are the challenges of Eventual Consistency:

• Stale reads during the inconsistency window can confuse users or downstream services.
• Concurrent writes to the same data can conflict and require a resolution strategy such as last-write-wins, version vectors, or application-level merges (a last-write-wins sketch follows this list).
• Eventually consistent systems are harder to reason about, test, and debug than strongly consistent ones.
• They are unsuitable for data that must be immediately and globally correct, such as account balances or inventory counts.
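
As a simple illustration of conflict resolution, the sketch below shows a last-write-wins merge: each update carries a timestamp, and a replica keeps whichever value is newest regardless of arrival order. The VersionedValue struct and mergeUpdate() function are assumptions made for demonstration, not part of any standard library.

#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

using namespace std;

// A value tagged with the (logical) time at which it was written
struct VersionedValue {
    string value;
    uint64_t timestamp;
};

// A replica's local copy of the data
using ReplicaStore = unordered_map<string, VersionedValue>;

// Last-write-wins merge: keep the incoming value only if it is newer
void mergeUpdate(ReplicaStore& store, const string& key, const VersionedValue& incoming) {
    auto it = store.find(key);
    if (it == store.end() || incoming.timestamp > it->second.timestamp) {
        store[key] = incoming;
    }
}

int main() {
    ReplicaStore replica;

    // Two conflicting updates to the same key arrive out of order
    mergeUpdate(replica, "theme", {"dark", 200});   // written later, arrives first
    mergeUpdate(replica, "theme", {"light", 100});  // written earlier, arrives late

    // The replica keeps the value with the newest timestamp, regardless of arrival order
    cout << "theme = " << replica["theme"].value << endl;  // prints "dark"
    return 0;
}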

