
Complete Tutorial on LRU Cache with Implementations

Last Updated : 29 Mar, 2024

What is LRU Cache?

Cache replacement algorithms decide which entry to evict when the cache is full. Least Recently Used (LRU) is one such algorithm: as the name suggests, when the cache memory is full, LRU picks the data that was least recently used and removes it to make space for new data. The priority of the data in the cache changes as it is used: if some data is fetched or updated, it is promoted to the highest priority, and its priority decreases as it remains unused operation after operation.

The LRU algorithm is a standard problem, and it has variations to suit different needs. For example, in operating systems LRU plays a crucial role as a page replacement algorithm that minimizes page faults.

Operations on LRU Cache:

  • LRUCache (Capacity c): Initializes the LRU cache with a positive capacity c.
  • get (key): Returns the value of the key if it is present in the cache; otherwise returns -1. Also updates the priority of that key in the LRU cache.
  • put (key, value): Updates the value of the key if the key already exists; otherwise adds the key-value pair to the cache. If the number of keys exceeds the capacity of the LRU cache, evicts the least recently used key.

Working of LRU Cache:

Let’s suppose we have an LRU cache of capacity 3, and we would like to perform the following operations:

  1. Put (key=1, value=A) into the cache
  2. Put (key=2, value=B) into the cache
  3. Put (key=3, value=C) into the cache
  4. Get (key=2) from the cache
  5. Get (key=4) from the cache
  6. Put (key=4, value=D) into the cache
  7. Put (key=3, value=E) into the cache
  8. Get (key=4) from the cache
  9. Put (key=1, value=A) into the cache

The above operations are performed one after another as shown in the image below:

[Figure: Working of LRU Cache]

Detailed Explanation of each operation:

  1. Put (key 1, value A): The cache has 3 empty slots, so no replacement is needed; we put {1 : A} at the top, i.e. {1 : A} has the highest priority.
  2. Put (key 2, value B): The cache still has 2 empty slots, so again no replacement is needed, but now {2 : B} has the highest priority and the priority of {1 : A} decreases.
  3. Put (key 3, value C): There is still 1 empty slot in the cache, so {3 : C} is put in without any replacement. Notice that the cache is now full and the current order of priority from highest to lowest is {3:C}, {2:B}, {1:A}.
  4. Get (key 2): Return the value of key=2. Since key=2 was just used, the new priority order is {2:B}, {3:C}, {1:A}.
  5. Get (key 4): Key 4 is not present in the cache, so we return -1 for this operation.
  6. Put (key 4, value D): The cache is FULL, so the LRU policy determines which key is least recently used. Since {1:A} had the lowest priority, remove {1:A} from the cache and put {4:D} in. The new priority order is {4:D}, {2:B}, {3:C}.
  7. Put (key 3, value E): Key=3 is already present in the cache with value=C, so this operation won’t remove any key; it updates the value of key=3 to ‘E’. The new priority order becomes {3:E}, {4:D}, {2:B}.
  8. Get (key 4): Return the value of key=4. The new priority order becomes {4:D}, {3:E}, {2:B}.
  9. Put (key 1, value A): The cache is FULL again, so evict the least recently used key: {2:B} had the lowest priority, so remove {2:B} from the cache and put {1:A} in. The final priority order is {1:A}, {4:D}, {3:E}.
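
The walkthrough above can be verified with a short, self-contained sketch. It uses Python’s collections.OrderedDict, a standard-library ordered hash map whose move_to_end() and popitem() methods run in O(1); this is a convenience for tracing the example, not one of the implementation approaches covered below.

Python3

# A minimal LRU cache sketch built on collections.OrderedDict
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # rightmost key = most recently used

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # promote to highest priority
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

# Trace the nine operations from the walkthrough above
cache = LRUCache(3)
cache.put(1, 'A'); cache.put(2, 'B'); cache.put(3, 'C')
print(cache.get(2))                 # B
print(cache.get(4))                 # -1 (key 4 is absent)
cache.put(4, 'D')                   # evicts {1:A}
cache.put(3, 'E')                   # updates key 3 in place
print(cache.get(4))                 # D
cache.put(1, 'A')                   # evicts {2:B}
print(list(cache.cache.items()))    # [(3, 'E'), (4, 'D'), (1, 'A')],
                                    # least to most recently used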

Ways to Implement LRU Cache:

An LRU cache can be implemented in a variety of ways, and each programmer may choose a different approach. The commonly used approaches are:

  1. LRU using Queue and Hashing
  2. LRU using Doubly Linked List + Hashing
  3. LRU using Deque
  4. LRU using Stack
  5. LRU using Counter implementation
  6. LRU using Lazy Updates

LRU cache implementation using Queue and Hashing:

We use two data structures to implement an LRU Cache.  

  1. Queue is implemented using a doubly-linked list. The maximum size of the queue will be equal to the total number of frames available (cache size). The most recently used pages will be near the front end and the least recently used pages will be near the rear end.
  2. A Hash with the page number as key and the address of the corresponding queue node as value.

When a page is referenced, it may already be in memory. If it is, we detach its node from the list and move it to the front of the queue.
If the required page is not in memory, we bring it into memory. In simple words, we add a new node to the front of the queue and record the node’s address in the hash. If the queue is full, i.e. all the frames are occupied, we remove the node at the rear of the queue before adding the new node to the front.

Illustration:

Let’s consider the following sequence of key references into the LRU cache: { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3 }
Note: Initially no page is in the memory.

The image below shows the step-by-step execution of the above operations on the LRU cache.

[Figure: Step-by-step execution of the reference sequence on the LRU cache]

Algorithm:

  • Create a class LRUCache that declares a list of type int, an unordered map of type <int, list<int>::iterator>, and a variable to store the maximum size of the cache
  • In the refer function of LRUCache
    • If the value is not present in the queue, push it to the front of the queue; if the queue is full, remove the last value first
    • If the value is already present, remove it from the queue and push it to the front of the queue
  • In the display function, print the contents of the LRUCache starting from the front of the queue

Below is the implementation of the above approach:

C++




// We can use stl container list as a double
// ended queue to store the cache keys, with
// the descending time of reference from front
// to back and a set container to check presence
// of a key. But to fetch the address of the key
// in the list using find(), it takes O(N) time.
// This can be optimized by storing a reference
// (iterator) to each key in a hash map.
#include <bits/stdc++.h>
using namespace std;
  
class LRUCache {
    // store keys of cache
    list<int> dq;
  
    // store references of key in cache
    unordered_map<int, list<int>::iterator> ma;
    int csize; // maximum capacity of cache
  
public:
    LRUCache(int);
    void refer(int);
    void display();
};
  
// Initialize the cache size
LRUCache::LRUCache(int n) { csize = n; }
  
// Refers key x with in the LRU cache
void LRUCache::refer(int x)
{
    // not present in cache
    if (ma.find(x) == ma.end()) {
        // cache is full
        if (dq.size() == csize) {
            // delete least recently used element
            int last = dq.back();
  
            // Pops the last element
            dq.pop_back();
  
            // Erase the last
            ma.erase(last);
        }
    }
  
    // present in cache
    else
        dq.erase(ma[x]);
  
    // update reference
    dq.push_front(x);
    ma[x] = dq.begin();
}
  
// Function to display contents of cache
void LRUCache::display()
{
  
    // Iterate in the deque and print
    // all the elements in it
    for (auto it = dq.begin(); it != dq.end(); it++)
        cout << (*it) << " ";
  
    cout << endl;
}
  
// Driver Code
int main()
{
    LRUCache ca(4);
  
    ca.refer(1);
    ca.refer(2);
    ca.refer(3);
    ca.refer(1);
    ca.refer(4);
    ca.refer(5);
    ca.display();
  
    return 0;
}
// This code is contributed by Satish Srinivas


C




// A C program to show implementation of LRU cache
#include <stdio.h>
#include <stdlib.h>
  
// A Queue Node (Queue is implemented using Doubly Linked
// List)
typedef struct QNode {
    struct QNode *prev, *next;
    unsigned
        pageNumber; // the page number stored in this QNode
} QNode;
  
// A Queue (A FIFO collection of Queue Nodes)
typedef struct Queue {
    unsigned count; // Number of filled frames
    unsigned numberOfFrames; // total number of frames
    QNode *front, *rear;
} Queue;
  
// A hash (Collection of pointers to Queue Nodes)
typedef struct Hash {
    int capacity; // how many pages can be there
    QNode** array; // an array of queue nodes
} Hash;
  
// A utility function to create a new Queue Node. The queue
// Node will store the given 'pageNumber'
QNode* newQNode(unsigned pageNumber)
{
    // Allocate memory and assign 'pageNumber'
    QNode* temp = (QNode*)malloc(sizeof(QNode));
    temp->pageNumber = pageNumber;
  
    // Initialize prev and next as NULL
    temp->prev = temp->next = NULL;
  
    return temp;
}
  
// A utility function to create an empty Queue.
// The queue can have at most 'numberOfFrames' nodes
Queue* createQueue(int numberOfFrames)
{
    Queue* queue = (Queue*)malloc(sizeof(Queue));
  
    // The queue is empty
    queue->count = 0;
    queue->front = queue->rear = NULL;
  
    // Number of frames that can be stored in memory
    queue->numberOfFrames = numberOfFrames;
  
    return queue;
}
  
// A utility function to create an empty Hash of given
// capacity
Hash* createHash(int capacity)
{
    // Allocate memory for hash
    Hash* hash = (Hash*)malloc(sizeof(Hash));
    hash->capacity = capacity;
  
    // Create an array of pointers for referring queue nodes
    hash->array
        = (QNode**)malloc(hash->capacity * sizeof(QNode*));
  
    // Initialize all hash entries as empty
    int i;
    for (i = 0; i < hash->capacity; ++i)
        hash->array[i] = NULL;
  
    return hash;
}
  
// A function to check whether all frames in memory are full
int AreAllFramesFull(Queue* queue)
{
    return queue->count == queue->numberOfFrames;
}
  
// A utility function to check if queue is empty
int isQueueEmpty(Queue* queue)
{
    return queue->rear == NULL;
}
  
// A utility function to delete a frame from queue
void deQueue(Queue* queue)
{
    if (isQueueEmpty(queue))
        return;
  
    // If this is the only node in list, then change front
    if (queue->front == queue->rear)
        queue->front = NULL;
  
    // Change rear and remove the previous rear
    QNode* temp = queue->rear;
    queue->rear = queue->rear->prev;
  
    if (queue->rear)
        queue->rear->next = NULL;
  
    free(temp);
  
    // decrement the number of full frames by 1
    queue->count--;
}
  
// A function to add a page with given 'pageNumber' to both
// queue and hash
void Enqueue(Queue* queue, Hash* hash, unsigned pageNumber)
{
    // If all frames are full, remove the page at the rear
    if (AreAllFramesFull(queue)) {
        // remove page from hash
        hash->array[queue->rear->pageNumber] = NULL;
        deQueue(queue);
    }
  
    // Create a new node with given page number,
    // And add the new node to the front of queue
    QNode* temp = newQNode(pageNumber);
    temp->next = queue->front;
  
    // If queue is empty, change both front and rear
    // pointers
    if (isQueueEmpty(queue))
        queue->rear = queue->front = temp;
    else // Else change the front
    {
        queue->front->prev = temp;
        queue->front = temp;
    }
  
    // Add page entry to hash also
    hash->array[pageNumber] = temp;
  
    // increment number of full frames
    queue->count++;
}
  
// This function is called when a page with given
// 'pageNumber' is referenced from cache (or memory). There
// are two cases:
// 1. Frame is not there in memory, we bring it in memory
// and add to the front of queue
// 2. Frame is there in memory, we move the frame to front
// of queue
void ReferencePage(Queue* queue, Hash* hash,
                   unsigned pageNumber)
{
    QNode* reqPage = hash->array[pageNumber];
  
    // the page is not in cache, bring it
    if (reqPage == NULL)
        Enqueue(queue, hash, pageNumber);
  
    // page is there and not at front, change pointer
    else if (reqPage != queue->front) {
        // Unlink requested page from its current location
        // in queue.
        reqPage->prev->next = reqPage->next;
        if (reqPage->next)
            reqPage->next->prev = reqPage->prev;
  
        // If the requested page is rear, then change rear
        // as this node will be moved to front
        if (reqPage == queue->rear) {
            queue->rear = reqPage->prev;
            queue->rear->next = NULL;
        }
  
        // Put the requested page before current front
        reqPage->next = queue->front;
        reqPage->prev = NULL;
  
        // Change prev of current front
        reqPage->next->prev = reqPage;
  
        // Change front to the requested page
        queue->front = reqPage;
    }
}
  
// Driver code
int main()
{
    // Let the cache hold 4 pages
    Queue* q = createQueue(4);
  
    // Let 10 different pages be requested (pages to be
    // referenced are numbered from 0 to 9)
    Hash* hash = createHash(10);
  
    // Let us refer pages 1, 2, 3, 1, 4, 5
    ReferencePage(q, hash, 1);
    ReferencePage(q, hash, 2);
    ReferencePage(q, hash, 3);
    ReferencePage(q, hash, 1);
    ReferencePage(q, hash, 4);
    ReferencePage(q, hash, 5);
  
    // Let us print cache frames after the above referenced
    // pages
    printf("%d ", q->front->pageNumber);
    printf("%d ", q->front->next->pageNumber);
    printf("%d ", q->front->next->next->pageNumber);
    printf("%d ", q->front->next->next->next->pageNumber);
  
    return 0;
}


Java




/* We can use Java inbuilt Deque as a double
   ended queue to store the cache keys, with
   the descending time of reference from front
   to back and a set container to check presence
   of a key. But removing a key from the Deque using
   remove() takes O(N) time. This can be
   optimized by storing a reference (iterator) to
   each key in a hash map. */
import java.util.Deque;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
  
public class LRUCache {
  
    // store keys of cache
    private Deque<Integer> doublyQueue;
  
    // store references of key in cache
    private HashSet<Integer> hashSet;
  
    // maximum capacity of cache
    private final int CACHE_SIZE;
  
    LRUCache(int capacity)
    {
        doublyQueue = new LinkedList<>();
        hashSet = new HashSet<>();
        CACHE_SIZE = capacity;
    }
  
    /* Refer the page within the LRU cache */
    public void refer(int page)
    {
        if (!hashSet.contains(page)) {
            if (doublyQueue.size() == CACHE_SIZE) {
                int last = doublyQueue.removeLast();
                hashSet.remove(last);
            }
        }
        else { /* The found page may not always be the last
                element; even if it's an intermediate
                element, it needs to be removed and added
                to the start of the Queue */
            doublyQueue.remove(page);
        }
        doublyQueue.push(page);
        hashSet.add(page);
    }
  
    // display contents of cache
    public void display()
    {
        Iterator<Integer> itr = doublyQueue.iterator();
        while (itr.hasNext()) {
            System.out.print(itr.next() + " ");
        }
    }
  
      // Driver code
    public static void main(String[] args)
    {
        LRUCache cache = new LRUCache(4);
        cache.refer(1);
        cache.refer(2);
        cache.refer(3);
        cache.refer(1);
        cache.refer(4);
        cache.refer(5);
        cache.display();
    }
}
// This code is contributed by Niraj Kumar


Python3




# An LRU cache using a Python list as the queue of keys, with the
# most recently used key at the front, and a dict for O(1)
# membership checks. Removing a key from the middle of a list is
# O(N); storing direct node references (as in the doubly linked
# list approach) is how this is optimized to O(1).
class LRUCache:
    # store keys of cache
    def __init__(self, n):
        self.csize = n
        self.dq = []
        self.ma = {}

    # Refers key x within the LRU cache
    def refer(self, x):

        # not present in cache
        if x not in self.ma:
            # cache is full
            if len(self.dq) == self.csize:
                # delete the least recently used element,
                # which sits at the rear of the queue
                last = self.dq.pop()
                del self.ma[last]

        # present in cache: detach it so it can move to the front
        else:
            self.dq.remove(x)

        # update reference: x becomes the most recently used key
        self.dq.insert(0, x)
        self.ma[x] = True

    # Function to display contents of cache
    def display(self):

        # Print the queue, most recently used key first
        print(self.dq)

# Driver Code
ca = LRUCache(4)

ca.refer(1)
ca.refer(2)
ca.refer(3)
ca.refer(1)
ca.refer(4)
ca.refer(5)
ca.display()
# This code is contributed by Satish Srinivas


C#




// C# program to implement the approach
  
using System;
using System.Collections.Generic;
  
class LRUCache {
    // store keys of cache
    private List<int> doublyQueue;
  
    // store references of key in cache
    private HashSet<int> hashSet;
  
    // maximum capacity of cache
    private int CACHE_SIZE;
  
    public LRUCache(int capacity)
    {
        doublyQueue = new List<int>();
        hashSet = new HashSet<int>();
        CACHE_SIZE = capacity;
    }
  
    /* Refer the page within the LRU cache */
    public void Refer(int page)
    {
        if (!hashSet.Contains(page)) {
            if (doublyQueue.Count == CACHE_SIZE) {
                int last
                    = doublyQueue[doublyQueue.Count - 1];
                doublyQueue.RemoveAt(doublyQueue.Count - 1);
                hashSet.Remove(last);
            }
        }
        else {
            /* The found page may not always be the last
               element; even if it's an intermediate
               element, it needs to be removed and added
               to the start of the Queue */
            doublyQueue.Remove(page);
        }
        doublyQueue.Insert(0, page);
        hashSet.Add(page);
    }
  
    // display contents of cache
    public void Display()
    {
        foreach(int page in doublyQueue)
        {
            Console.Write(page + " ");
        }
    }
  
    // Driver code
    static void Main(string[] args)
    {
        LRUCache cache = new LRUCache(4);
        cache.Refer(1);
        cache.Refer(2);
        cache.Refer(3);
        cache.Refer(1);
        cache.Refer(4);
        cache.Refer(5);
        cache.Display();
    }
}
  
// This code is contributed by phasing17


Javascript




// JS code to implement the approach
class LRUCache {
  constructor(n) {
    this.csize = n;
    this.dq = [];
    this.ma = new Map();
  }
  
  refer(x) {
    if (!this.ma.has(x)) {
      if (this.dq.length === this.csize) {
        const last = this.dq[this.dq.length - 1];
        this.dq.pop();
        this.ma.delete(last);
      }
    } else {
      this.dq.splice(this.dq.indexOf(x), 1);
    }
  
    this.dq.unshift(x);
    this.ma.set(x, 0);
  }
  
  display() {
    console.log(this.dq);
  }
}
  
const cache = new LRUCache(4);
  
cache.refer(1);
cache.refer(2);
cache.refer(3);
cache.refer(1);
cache.refer(4);
cache.refer(5);
cache.display();
  
// This code is contributed by phasing17


LRU cache implementation using Doubly Linked List and Hashing:

The idea is very basic: keep inserting the elements at the head.

  • if the element is not present in the list, add it to the head of the list
  • if the element is already present in the list, move it to the head and shift the remaining elements of the list

Note that the priority of a node depends on its distance from the head: the closer a node is to the head, the higher its priority. So when the cache is full and we need to remove an element, we remove the element adjacent to the tail of the doubly linked list.
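
Below is a compact sketch of this approach in Python. The names (Node, _remove, _add_front) are illustrative choices rather than a fixed API; sentinel head and tail nodes keep the pointer updates uniform, and every get/put runs in O(1).

Python3

# LRU cache via a doubly linked list plus a hash map: O(1) per operation
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                  # key -> node
        self.head = Node(None, None)   # sentinel on the MRU side
        self.tail = Node(None, None)   # sentinel on the LRU side
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        # unlink a node from the list in O(1)
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_front(self, node):
        # insert a node right after the head sentinel
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)      # move to the head = highest priority
        self._add_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        elif len(self.map) == self.capacity:
            lru = self.tail.prev        # node adjacent to the tail
            self._remove(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)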

LRU cache implementation using Deque & Hashmap:

The deque data structure allows insertion and deletion at the front as well as the end. This property makes an LRU implementation possible: the front element represents the highest-priority (most recently used) entry, while the end element is the lowest-priority, i.e. least recently used, entry.

Working:

  1. Get Operation: Check whether the key exists in the cache’s hash map, then follow the cases below on the deque:
    • If the key is found:
      • The item is considered recently used, so it is moved to the front of the deque.
      • The value associated with the key is returned as the result of the get operation.
    • If the key is not found:
      • Return -1 to indicate that no such key is present.
  2. Put Operation: First check whether the key already exists in the cache’s hash map, then follow the cases below on the deque:
    • If the key exists:
      • The value associated with the key is updated.
      • The item is moved to the front of the deque since it’s now the most recently used.
    • If the key does not exist:
      • If the cache is full, a new item needs to be inserted and the least recently used item must be evicted. This is done by removing the item from the end of the deque and the corresponding entry from the hash map.
      • The new key-value pair is then inserted into both the hash map and the front of the deque to mark it as the most recently used item, as the sketch after this list illustrates.
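
The flow above can be sketched with Python’s collections.deque for the ordering and a dict for the values. One caveat: deque.remove() is O(N), so this simple version does not achieve the O(1) bound; the doubly-linked-list-plus-hash design shown earlier is what makes every step constant time.

Python3

# LRU cache sketch: deque keeps usage order, dict keeps the values
from collections import deque

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.dq = deque()   # front = most recently used, back = least
        self.values = {}    # key -> value

    def get(self, key):
        if key not in self.values:
            return -1
        self.dq.remove(key)       # O(N): pull the key out of the deque
        self.dq.appendleft(key)   # re-insert at the front
        return self.values[key]

    def put(self, key, value):
        if key in self.values:
            self.dq.remove(key)          # it moves to the front below
        elif len(self.dq) == self.capacity:
            evicted = self.dq.pop()      # least recently used, at the back
            del self.values[evicted]
        self.dq.appendleft(key)
        self.values[key] = value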

LRU cache implementation using Stack & Hashmap:

Implementing an LRU (Least Recently Used) cache using a stack data structure and hashing can be a bit tricky, because a simple stack doesn’t provide efficient access to the least recently used item. However, you can combine a stack with a hash map to achieve this. Here’s a high-level approach:

  1. Use a Hash Map: The hash map stores the key-value pairs of the cache. The keys map to the corresponding entries in the stack.
  2. Use a Stack: The stack maintains the order of keys based on their usage. The least recently used item sits at the bottom of the stack, and the most recently used item at the top.

This approach is not very efficient and hence is rarely used.
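
For completeness, here is a sketch of the stack-based variant using a plain Python list as the stack. Re-referencing a key means pulling it out of the middle of the stack, an O(N) operation, which is exactly why this approach is rarely preferred.

Python3

# LRU cache sketch with a list used as a stack of keys
class LRUCacheStack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []    # bottom = least recently used, top = most
        self.values = {}   # key -> value

    def get(self, key):
        if key not in self.values:
            return -1
        self.stack.remove(key)   # O(N): remove key from the middle
        self.stack.append(key)   # re-push on top as most recently used
        return self.values[key]

    def put(self, key, value):
        if key in self.values:
            self.stack.remove(key)
        elif len(self.stack) == self.capacity:
            lru = self.stack.pop(0)   # evict from the bottom
            del self.values[lru]
        self.stack.append(key)
        self.values[key] = value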

LRU cache using Counter Implementation:

Each block in the cache has its own LRU counter, whose value lies in {0, …, n-1}, where ‘n’ is the size of the cache. The block accessed during a replacement becomes the MRU block, so its counter is set to n-1. Counters greater than the accessed block’s old counter value are decremented by one; the remaining blocks are unaltered.

Counter Value | Priority | Usage Status
------------- | -------- | --------------------
0             | Low      | Least recently used
n-1           | High     | Most recently used
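
The counter scheme can be simulated in software as below. This is only a sketch of the hardware idea, with a dictionary standing in for per-block counters: the accessed block’s counter is set to n-1, counters above its old value are decremented, and the block whose counter is 0 is always the eviction victim.

Python3

# Counter-based LRU sketch: each cached key carries a counter in {0..n-1}
class LRUCounterCache:
    def __init__(self, capacity):
        self.n = capacity
        self.values = {}    # key -> value
        self.counter = {}   # key -> counter; 0 = LRU, n-1 = MRU

    def _touch(self, key):
        old = self.counter[key]
        for k in self.counter:          # counters above the old value drop by one
            if self.counter[k] > old:
                self.counter[k] -= 1
        self.counter[key] = self.n - 1  # accessed block becomes the MRU

    def get(self, key):
        if key not in self.values:
            return -1
        self._touch(key)
        return self.values[key]

    def put(self, key, value):
        if key in self.values:
            self.values[key] = value
            self._touch(key)
            return
        if len(self.values) == self.n:
            victim = min(self.counter, key=self.counter.get)  # counter == 0
            del self.values[victim]
            del self.counter[victim]
        for k in self.counter:          # every existing block ages one step
            self.counter[k] -= 1
        self.values[key] = value
        self.counter[key] = self.n - 1  # the new block is the MRU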

LRU cache implementation using Lazy Updates:

Implementing an LRU (Least Recently Used) cache using lazy updates is a common technique for improving the efficiency of the cache’s operations. Lazy updates track the order in which items are accessed without immediately updating the entire data structure; when a cache miss occurs, you can then decide whether to perform a full update based on some criteria.
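
One simple form of lazy updating is sketched below, assuming hits far outnumber evictions: a hit merely stamps the entry with a logical clock value in O(1), and the expensive “which entry is oldest” work is deferred until an eviction actually forces it (an O(N) scan). The class and field names are illustrative.

Python3

# Lazy-update LRU sketch: O(1) hits, O(N) only when evicting
class LazyLRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}    # key -> [value, last_access_time]
        self.clock = 0    # logical timestamp, bumped on every access

    def get(self, key):
        if key not in self.data:
            return -1
        self.clock += 1
        self.data[key][1] = self.clock   # lazy: just stamp, no reordering
        return self.data[key][0]

    def put(self, key, value):
        self.clock += 1
        if key in self.data:
            self.data[key] = [value, self.clock]
            return
        if len(self.data) == self.capacity:
            # deferred work: find the stalest entry only when forced to
            lru = min(self.data, key=lambda k: self.data[k][1])
            del self.data[lru]
        self.data[key] = [value, self.clock]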

Complexity Analysis of LRU Cache:

  • Time Complexity (for the hash map + doubly linked list implementation):
    • put() operation: O(1), i.e. the time required to insert or update a key-value pair is constant
    • get() operation: O(1), i.e. the time required to fetch the value of a key is constant
  • Auxiliary Space: O(N), where N is the capacity of the cache.

Advantages of LRU cache:

  • Fast Access: It takes O(1) time to access data in the LRU cache.
  • Fast Update: It takes O(1) time to update a key-value pair in the LRU cache.
  • Fast removal of least recently used data: It takes O(1) time to delete the entry that has gone unused the longest.
  • No thrashing: LRU is less susceptible to thrashing than FIFO because it considers the usage history of pages. It detects which pages are used frequently and prioritizes them for memory allocation, reducing the number of page faults and disk I/O operations.

Disadvantages of LRU cache:

  • It requires a large cache size to be effective.
  • It requires additional data structures to implement.
  • It needs more hardware assistance than simpler policies.
  • Error detection in LRU is difficult compared to other algorithms.
  • It has limited applicability.

Real-World Application of LRU Cache:

  • In database systems, for fast query results.
  • In operating systems, to minimize page faults.
  • In text editors and IDEs, to store frequently used files or code snippets.
  • In network routers and switches, to increase the efficiency of the network.
  • In compiler optimizations.
  • In text prediction and autocompletion tools.

