LRU Cache Implementation
How to implement LRU caching scheme? What data structures should be used?
We are given the total number of page numbers that can be referred to, and a cache (or memory) size: the number of page frames that the cache can hold at a time. The LRU caching scheme removes the least recently used frame when the cache is full and a newly referenced page is not in the cache. Please see the Galvin operating systems book for more details on LRU page replacement.
LRU cache implementation using queue and hashing:
To solve the problem, follow the idea below:
We use two data structures to implement an LRU Cache.
- Queue is implemented using a doubly-linked list. The maximum size of the queue will be equal to the total number of frames available (cache size). The most recently used pages will be near the front end and the least recently used pages will be near the rear end.
- A Hash with the page number as key and the address of the corresponding queue node as value.
When a page is referenced, the required page may already be in memory. If it is, we detach its node from the list and move it to the front of the queue.
If the required page is not in memory, we bring it into memory. In simple words, we add a new node to the front of the queue and record its address in the hash. If the queue is full, i.e. all the frames are occupied, we first remove the node at the rear of the queue, then add the new node to the front of the queue.
Example – Consider the following reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Find the number of page faults using the least recently used (LRU) page replacement algorithm with 3-page frames.
Note: Initially, no page is in memory.
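The page-fault count for this example can be checked with a short simulation. This is a minimal sketch (not part of the original article) that models the queue as a plain list with the most recently used page at the front:

```python
def lru_page_faults(refs, frames):
    """Simulate LRU page replacement; return (fault_count, final_frames)."""
    cache = []          # front = most recently used, rear = least recently used
    faults = 0
    for page in refs:
        if page in cache:
            cache.remove(page)      # hit: detach page so it can move to the front
        else:
            faults += 1             # miss: page fault
            if len(cache) == frames:
                cache.pop()         # evict the least recently used page (rear)
        cache.insert(0, page)       # (re)insert at the front
    return faults, cache


refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_page_faults(refs, 3))     # → (10, [5, 4, 3])
```

With 3 page frames, the reference string above causes 10 page faults: only the second references to pages 1 and 2 are hits.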
Follow the steps below to solve the problem:
- Create a class LRUCache that declares a list of type int, an unordered map of type <int, list<int>::iterator>, and a variable to store the maximum size of the cache
- In the refer function of LRUCache:
- If the value is not present in the queue, push it to the front of the queue, removing the last value first if the queue is full
- If the value is already present, remove it from the queue and push it to the front of the queue
- In the display function, print the LRUCache contents starting from the front of the queue
Below is the implementation of the above approach:
C++
// We can use the STL container list as a double
// ended queue to store the cache keys, with the
// descending time of reference from front to back,
// and a set container to check the presence of a key.
// But fetching the address of a key in the list using
// find() takes O(N) time. This can be optimized by
// storing a reference (iterator) to each key in a hash map.
#include <bits/stdc++.h>
using namespace std;

class LRUCache {
    // store keys of cache
    list<int> dq;

    // store references of keys in cache
    unordered_map<int, list<int>::iterator> ma;
    int csize; // maximum capacity of cache

public:
    LRUCache(int);
    void refer(int);
    void display();
};

// Declare the size
LRUCache::LRUCache(int n) { csize = n; }

// Refers key x within the LRU cache
void LRUCache::refer(int x)
{
    // not present in cache
    if (ma.find(x) == ma.end()) {
        // cache is full
        if (dq.size() == csize) {
            // delete least recently used element
            int last = dq.back();

            // Pops the last element
            dq.pop_back();

            // Erase the last
            ma.erase(last);
        }
    }

    // present in cache
    else
        dq.erase(ma[x]);

    // update reference
    dq.push_front(x);
    ma[x] = dq.begin();
}

// Function to display contents of cache
void LRUCache::display()
{
    // Iterate over the list and print
    // all the elements in it
    for (auto it = dq.begin(); it != dq.end(); it++)
        cout << (*it) << " ";

    cout << endl;
}

// Driver Code
int main()
{
    LRUCache ca(4);

    ca.refer(1);
    ca.refer(2);
    ca.refer(3);
    ca.refer(1);
    ca.refer(4);
    ca.refer(5);
    ca.display();

    return 0;
}
// This code is contributed by Satish Srinivas
C
// A C program to show implementation of LRU cache
#include <stdio.h>
#include <stdlib.h>

// A Queue Node (Queue is implemented using Doubly Linked List)
typedef struct QNode {
    struct QNode *prev, *next;
    unsigned pageNumber; // the page number stored in this QNode
} QNode;

// A Queue (A FIFO collection of Queue Nodes)
typedef struct Queue {
    unsigned count; // Number of filled frames
    unsigned numberOfFrames; // total number of frames
    QNode *front, *rear;
} Queue;

// A hash (Collection of pointers to Queue Nodes)
typedef struct Hash {
    int capacity; // how many pages can be there
    QNode** array; // an array of queue nodes
} Hash;

// A utility function to create a new Queue Node. The queue
// Node will store the given 'pageNumber'
QNode* newQNode(unsigned pageNumber)
{
    // Allocate memory and assign 'pageNumber'
    QNode* temp = (QNode*)malloc(sizeof(QNode));
    temp->pageNumber = pageNumber;

    // Initialize prev and next as NULL
    temp->prev = temp->next = NULL;

    return temp;
}

// A utility function to create an empty Queue.
// The queue can have at most 'numberOfFrames' nodes
Queue* createQueue(int numberOfFrames)
{
    Queue* queue = (Queue*)malloc(sizeof(Queue));

    // The queue is empty
    queue->count = 0;
    queue->front = queue->rear = NULL;

    // Number of frames that can be stored in memory
    queue->numberOfFrames = numberOfFrames;

    return queue;
}

// A utility function to create an empty Hash of given capacity
Hash* createHash(int capacity)
{
    // Allocate memory for hash
    Hash* hash = (Hash*)malloc(sizeof(Hash));
    hash->capacity = capacity;

    // Create an array of pointers for referring queue nodes
    hash->array = (QNode**)malloc(hash->capacity * sizeof(QNode*));

    // Initialize all hash entries as empty
    int i;
    for (i = 0; i < hash->capacity; ++i)
        hash->array[i] = NULL;

    return hash;
}

// A function to check if there is a slot available in memory
int AreAllFramesFull(Queue* queue)
{
    return queue->count == queue->numberOfFrames;
}

// A utility function to check if queue is empty
int isQueueEmpty(Queue* queue) { return queue->rear == NULL; }

// A utility function to delete a frame from queue
void deQueue(Queue* queue)
{
    if (isQueueEmpty(queue))
        return;

    // If this is the only node in list, then change front
    if (queue->front == queue->rear)
        queue->front = NULL;

    // Change rear and remove the previous rear
    QNode* temp = queue->rear;
    queue->rear = queue->rear->prev;

    if (queue->rear)
        queue->rear->next = NULL;

    free(temp);

    // decrement the number of full frames by 1
    queue->count--;
}

// A function to add a page with given 'pageNumber' to both
// queue and hash
void Enqueue(Queue* queue, Hash* hash, unsigned pageNumber)
{
    // If all frames are full, remove the page at the rear
    if (AreAllFramesFull(queue)) {
        // remove page from hash
        hash->array[queue->rear->pageNumber] = NULL;
        deQueue(queue);
    }

    // Create a new node with given page number,
    // and add the new node to the front of queue
    QNode* temp = newQNode(pageNumber);
    temp->next = queue->front;

    // If queue is empty, change both front and rear pointers
    if (isQueueEmpty(queue))
        queue->rear = queue->front = temp;
    else // Else change the front
    {
        queue->front->prev = temp;
        queue->front = temp;
    }

    // Add page entry to hash also
    hash->array[pageNumber] = temp;

    // increment number of full frames
    queue->count++;
}

// This function is called when a page with given 'pageNumber'
// is referenced from cache (or memory). There are two cases:
// 1. Frame is not there in memory, we bring it in memory
//    and add it to the front of the queue
// 2. Frame is there in memory, we move the frame to the front
//    of the queue
void ReferencePage(Queue* queue, Hash* hash, unsigned pageNumber)
{
    QNode* reqPage = hash->array[pageNumber];

    // the page is not in cache, bring it
    if (reqPage == NULL)
        Enqueue(queue, hash, pageNumber);

    // page is there and not at front, change pointers
    else if (reqPage != queue->front) {
        // Unlink requested page from its current location in queue
        reqPage->prev->next = reqPage->next;
        if (reqPage->next)
            reqPage->next->prev = reqPage->prev;

        // If the requested page is rear, then change rear
        // as this node will be moved to front
        if (reqPage == queue->rear) {
            queue->rear = reqPage->prev;
            queue->rear->next = NULL;
        }

        // Put the requested page before current front
        reqPage->next = queue->front;
        reqPage->prev = NULL;

        // Change prev of current front
        reqPage->next->prev = reqPage;

        // Change front to the requested page
        queue->front = reqPage;
    }
}

// Driver code
int main()
{
    // Let cache can hold 4 pages
    Queue* q = createQueue(4);

    // Let 10 different pages can be requested (pages to be
    // referenced are numbered from 0 to 9)
    Hash* hash = createHash(10);

    // Let us refer pages 1, 2, 3, 1, 4, 5
    ReferencePage(q, hash, 1);
    ReferencePage(q, hash, 2);
    ReferencePage(q, hash, 3);
    ReferencePage(q, hash, 1);
    ReferencePage(q, hash, 4);
    ReferencePage(q, hash, 5);

    // Let us print cache frames after the above referenced pages
    printf("%d ", q->front->pageNumber);
    printf("%d ", q->front->next->pageNumber);
    printf("%d ", q->front->next->next->pageNumber);
    printf("%d ", q->front->next->next->next->pageNumber);

    return 0;
}
Java
/* We can use Java's inbuilt Deque as a double ended queue
to store the cache keys, with the descending time of
reference from front to back, and a set container to check
the presence of a key. But removing a key from the Deque
using remove() takes O(N) time. This can be optimized by
storing a reference (iterator) to each key in a hash map. */
import java.util.Deque;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;

public class LRUCache {
    // store keys of cache
    private Deque<Integer> doublyQueue;

    // store references of keys in cache
    private HashSet<Integer> hashSet;

    // maximum capacity of cache
    private final int CACHE_SIZE;

    LRUCache(int capacity)
    {
        doublyQueue = new LinkedList<>();
        hashSet = new HashSet<>();
        CACHE_SIZE = capacity;
    }

    /* Refer the page within the LRU cache */
    public void refer(int page)
    {
        if (!hashSet.contains(page)) {
            if (doublyQueue.size() == CACHE_SIZE) {
                int last = doublyQueue.removeLast();
                hashSet.remove(last);
            }
        }
        else {
            /* The found page may not always be the last
               element; even if it is an intermediate
               element, it needs to be removed and added
               to the start of the Queue */
            doublyQueue.remove(page);
        }
        doublyQueue.push(page);
        hashSet.add(page);
    }

    // display contents of cache
    public void display()
    {
        Iterator<Integer> itr = doublyQueue.iterator();
        while (itr.hasNext()) {
            System.out.print(itr.next() + " ");
        }
    }

    // Driver code
    public static void main(String[] args)
    {
        LRUCache cache = new LRUCache(4);
        cache.refer(1);
        cache.refer(2);
        cache.refer(3);
        cache.refer(1);
        cache.refer(4);
        cache.refer(5);
        cache.display();
    }
}
// This code is contributed by Niraj Kumar
Python3
# We can use a list as a double ended queue to store the
# cache keys, with the descending time of reference from
# front to back, and a dictionary to check the presence of
# a key. Finding and removing a key in the list takes O(N)
# time, so this is a simple (not fully optimized) Python
# version of the approach.
class LRUCache:

    # store keys of cache
    def __init__(self, n):
        self.csize = n
        self.dq = []
        self.ma = {}

    # Refers key x within the LRU cache
    def refer(self, x):

        # not present in cache
        if x not in self.ma:

            # cache is full
            if len(self.dq) == self.csize:

                # delete least recently used element
                last = self.dq.pop()
                del self.ma[last]

        # present in cache: detach it from its current position
        else:
            self.dq.remove(x)

        # update reference: (re)insert at the front and record
        # the key's presence in the dictionary
        self.dq.insert(0, x)
        self.ma[x] = True

    # Function to display contents of cache
    def display(self):
        print(self.dq)


# Driver Code
ca = LRUCache(4)
ca.refer(1)
ca.refer(2)
ca.refer(3)
ca.refer(1)
ca.refer(4)
ca.refer(5)
ca.display()

# This code is contributed by Satish Srinivas
C#
// C# program to implement the approach
using System;
using System.Collections.Generic;

class LRUCache {
    // store keys of cache
    private List<int> doublyQueue;

    // store references of keys in cache
    private HashSet<int> hashSet;

    // maximum capacity of cache
    private int CACHE_SIZE;

    public LRUCache(int capacity)
    {
        doublyQueue = new List<int>();
        hashSet = new HashSet<int>();
        CACHE_SIZE = capacity;
    }

    /* Refer the page within the LRU cache */
    public void Refer(int page)
    {
        if (!hashSet.Contains(page)) {
            if (doublyQueue.Count == CACHE_SIZE) {
                int last = doublyQueue[doublyQueue.Count - 1];
                doublyQueue.RemoveAt(doublyQueue.Count - 1);
                hashSet.Remove(last);
            }
        }
        else {
            /* The found page may not always be the last
               element; even if it is an intermediate
               element, it needs to be removed and added
               to the start of the Queue */
            doublyQueue.Remove(page);
        }
        doublyQueue.Insert(0, page);
        hashSet.Add(page);
    }

    // display contents of cache
    public void Display()
    {
        foreach (int page in doublyQueue) {
            Console.Write(page + " ");
        }
    }

    // Driver code
    static void Main(string[] args)
    {
        LRUCache cache = new LRUCache(4);
        cache.Refer(1);
        cache.Refer(2);
        cache.Refer(3);
        cache.Refer(1);
        cache.Refer(4);
        cache.Refer(5);
        cache.Display();
    }
}
// This code is contributed by phasing17
Javascript
// JS code to implement the approach
class LRUCache {
    constructor(n)
    {
        this.csize = n;
        this.dq = [];
        this.ma = new Map();
    }

    refer(x)
    {
        // not present in cache
        if (!this.ma.has(x)) {
            // cache is full
            if (this.dq.length === this.csize) {
                const last = this.dq[this.dq.length - 1];
                this.dq.pop();
                this.ma.delete(last);
            }
        }
        // present in cache: detach it from its current position
        else {
            this.dq.splice(this.dq.indexOf(x), 1);
        }

        // update reference: (re)insert at the front
        this.dq.unshift(x);
        this.ma.set(x, 0);
    }

    display() { console.log(this.dq); }
}

// Driver code
const cache = new LRUCache(4);
cache.refer(1);
cache.refer(2);
cache.refer(3);
cache.refer(1);
cache.refer(4);
cache.refer(5);
cache.display();

// This code is contributed by phasing17
Output:
5 4 1 3
Time Complexity: The refer() function runs in O(1) average time, as each call does a constant amount of hash-map and linked-list work.
Auxiliary Space: The space complexity of the LRU cache is O(n), where n is the maximum size of the cache.
LRU cache implementation using an ordered set (LinkedHashSet in Java):
Approach: The idea is to use a container that maintains the insertion order of elements, such as Java's LinkedHashSet. This way the implementation becomes short and easy.
Below is the implementation of the above approach:
C++
#include <iostream>
#include <list>
#include <unordered_map>
using namespace std;

class LRUCache {
private:
    int capacity;
    list<int> cache;
    unordered_map<int, list<int>::iterator> map;

public:
    LRUCache(int capacity) : capacity(capacity) {}

    // This function returns false if key is not
    // present in cache. Else it moves the key to
    // the back (most recently used position) via
    // splice, and returns true.
    bool get(int key)
    {
        auto it = map.find(key);
        if (it == map.end()) {
            return false;
        }
        cache.splice(cache.end(), cache, it->second);
        return true;
    }

    void refer(int key)
    {
        if (get(key)) {
            return;
        }
        put(key);
    }

    // displays contents of cache in reverse order,
    // i.e. most recently used first, using reverse
    // iterators over the list
    void display()
    {
        for (auto it = cache.rbegin(); it != cache.rend(); ++it) {
            cout << *it << " ";
        }
    }

    void put(int key)
    {
        if (cache.size() == capacity) {
            int first_key = cache.front();
            cache.pop_front();
            map.erase(first_key);
        }
        cache.push_back(key);
        map[key] = --cache.end();
    }
};

int main()
{
    LRUCache cache(4);
    cache.refer(1);
    cache.refer(2);
    cache.refer(3);
    cache.refer(1);
    cache.refer(4);
    cache.refer(5);
    cache.display();
    return 0;
}
// This code is contributed by divyansh2212
Java
// Java program to implement LRU cache
// using LinkedHashSet
import java.util.*;

class LRUCache {

    Set<Integer> cache;
    int capacity;

    public LRUCache(int capacity)
    {
        this.cache = new LinkedHashSet<Integer>(capacity);
        this.capacity = capacity;
    }

    // This function returns false if key is not
    // present in cache. Else it moves the key to
    // front by first removing it and then adding
    // it, and returns true.
    public boolean get(int key)
    {
        if (!cache.contains(key))
            return false;
        cache.remove(key);
        cache.add(key);
        return true;
    }

    /* Refers key x within the LRU cache */
    public void refer(int key)
    {
        if (get(key) == false)
            put(key);
    }

    // displays contents of cache in reverse order
    public void display()
    {
        LinkedList<Integer> list = new LinkedList<>(cache);

        // The descendingIterator() method of
        // java.util.LinkedList class is used to return an
        // iterator over the elements in this LinkedList in
        // reverse sequential order
        Iterator<Integer> itr = list.descendingIterator();
        while (itr.hasNext())
            System.out.print(itr.next() + " ");
    }

    public void put(int key)
    {
        if (cache.size() == capacity) {
            int firstKey = cache.iterator().next();
            cache.remove(firstKey);
        }
        cache.add(key);
    }

    // Driver code
    public static void main(String[] args)
    {
        LRUCache ca = new LRUCache(4);
        ca.refer(1);
        ca.refer(2);
        ca.refer(3);
        ca.refer(1);
        ca.refer(4);
        ca.refer(5);
        ca.display();
    }
}
Python3
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = []
        self.map = {}

    # This function returns False if key is not
    # present in cache. Else it moves the key to
    # the end (most recently used position) by first
    # removing it and then adding it, and returns True.
    def get(self, key: int) -> bool:
        if key not in self.map:
            return False
        self.cache.remove(key)
        self.cache.append(key)
        return True

    def refer(self, key: int) -> None:
        if self.get(key):
            return
        self.put(key)

    # displays contents of cache in reverse order
    def display(self) -> None:
        for i in range(len(self.cache) - 1, -1, -1):
            print(self.cache[i], end=" ")

    def put(self, key: int) -> None:
        if len(self.cache) == self.capacity:
            first_key = self.cache.pop(0)
            del self.map[first_key]
        self.cache.append(key)
        self.map[key] = len(self.cache) - 1


if __name__ == '__main__':
    cache = LRUCache(4)
    cache.refer(1)
    cache.refer(2)
    cache.refer(3)
    cache.refer(1)
    cache.refer(4)
    cache.refer(5)
    cache.display()
Javascript
// JavaScript program to implement LRU cache
// using a Set (which preserves insertion order)
class LRUCache {
    constructor(capacity)
    {
        this.cache = new Set();
        this.capacity = capacity;
    }

    // This function returns false if key is not
    // present in cache. Else it moves the key to
    // the end (most recently used position) by first
    // removing it and then adding it, and returns true.
    get(key)
    {
        if (!this.cache.has(key)) {
            return false;
        }
        this.cache.delete(key);
        this.cache.add(key);
        return true;
    }

    /* Refers key x within the LRU cache */
    refer(key)
    {
        if (!this.get(key)) {
            this.put(key);
        }
    }

    // displays contents of cache in reverse order
    display()
    {
        const list = [...this.cache];

        // The reverse() method of Array is used to
        // reverse the elements in the array
        list.reverse();
        let ans = "";
        for (const key of list) {
            ans = ans + key + " ";
        }
        console.log(ans);
    }

    put(key)
    {
        if (this.cache.size === this.capacity) {
            const firstKey = this.cache.values().next().value;
            this.cache.delete(firstKey);
        }
        this.cache.add(key);
    }
}

// Driver code
const ca = new LRUCache(4);
ca.refer(1);
ca.refer(2);
ca.refer(3);
ca.refer(1);
ca.refer(4);
ca.refer(5);
ca.display();
C#
using System;
using System.Collections.Generic;

public class LRUCache {
    private int capacity;
    private List<int> cache;
    private Dictionary<int, int> map;

    public LRUCache(int capacity)
    {
        this.capacity = capacity;
        this.cache = new List<int>();
        this.map = new Dictionary<int, int>();
    }

    // This function returns false if key is not
    // present in cache. Else it moves the key to
    // the end (most recently used position) by first
    // removing it and then adding it, and returns true.
    public bool Get(int key)
    {
        if (!this.map.ContainsKey(key)) {
            return false;
        }
        this.cache.Remove(key);
        this.cache.Add(key);
        return true;
    }

    public void Refer(int key)
    {
        if (this.Get(key)) {
            return;
        }
        this.Put(key);
    }

    // Displays contents of cache in reverse order
    public void Display()
    {
        for (int i = this.cache.Count - 1; i >= 0; i--) {
            Console.Write(this.cache[i] + " ");
        }
    }

    public void Put(int key)
    {
        if (this.cache.Count == this.capacity) {
            int firstKey = this.cache[0];
            this.cache.RemoveAt(0);
            this.map.Remove(firstKey);
        }
        this.cache.Add(key);
        this.map[key] = this.cache.Count - 1;
    }
}

class Program {
    static void Main(string[] args)
    {
        LRUCache cache = new LRUCache(4);
        cache.Refer(1);
        cache.Refer(2);
        cache.Refer(3);
        cache.Refer(1);
        cache.Refer(4);
        cache.Refer(5);
        cache.Display();
    }
}
// This code is contributed by NARASINGANIKHIL
Output:
5 4 1 3
Time Complexity: O(1), we use a Linked HashSet data structure to implement the cache. The Linked HashSet provides constant time complexity for both adding elements and retrieving elements.
Auxiliary Space: O(n), we need to store n elements in the cache, so the space complexity is O(n).
Related Article:
Python implementation using OrderedDict
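The OrderedDict idea from the related article can be sketched as follows. This is a minimal illustration (assuming the same refer/display interface as this article, not the exact code from the linked article): `move_to_end` and `popitem(last=False)` give O(1) recency updates and O(1) eviction of the least recently used key.

```python
from collections import OrderedDict


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # keys ordered from LRU (front) to MRU (back)

    def refer(self, key):
        if key in self.cache:
            # hit: move the key to the MRU end in O(1)
            self.cache.move_to_end(key)
        else:
            if len(self.cache) == self.capacity:
                # miss with a full cache: evict the LRU entry (first key)
                self.cache.popitem(last=False)
            self.cache[key] = True

    def display(self):
        # print most recently used first
        print(*reversed(self.cache))


ca = LRUCache(4)
for page in [1, 2, 3, 1, 4, 5]:
    ca.refer(page)
ca.display()   # → 5 4 1 3
```

This reproduces the same output as the implementations above for the reference sequence 1, 2, 3, 1, 4, 5 with a capacity of 4.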
This article is compiled by Aashish Barnwal and reviewed by the GeeksforGeeks team. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.