How to implement an LRU caching scheme? What data structures should be used?
We are given the total number of possible page numbers that can be referenced, and the cache (memory) size, i.e. the number of page frames the cache can hold at a time. The LRU caching scheme evicts the least recently used frame when the cache is full and a page is referenced that is not in the cache. See the page-replacement coverage in Galvin's Operating System Concepts for more background.
We use two data structures to implement an LRU cache:
- A queue, implemented using a doubly linked list. The maximum size of the queue equals the total number of frames available (the cache size). The most recently used pages are near the front end and the least recently used pages are near the rear end.
- A hash map with the page number as key and the address of the corresponding queue node as value.
When a page is referenced, it may already be in memory (a cache hit). In that case we detach its node from the list and move it to the front of the queue.
If the required page is not in memory (a cache miss), we bring it in. In simple words, we add a new node to the front of the queue and record its address in the hash. If the queue is full, i.e. all the frames are occupied, we remove the node at the rear of the queue (the least recently used page), delete its entry from the hash, and then add the new node to the front of the queue. The C++ sketch further below follows exactly these steps.
Example – Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Find the number of page faults using the least recently used (LRU) page replacement algorithm with 3 page frames.
Note: initially, no page is in memory.
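Tracing the string by hand: the first seven references (1, 2, 3, 4, 1, 2, 5) all miss, the next two references to 1 and 2 hit, and the final references to 3, 4 and 5 miss again, giving 10 page faults in total. The short simulation below double-checks the count. It is only a sketch; the helper name countLRUFaults is our own, not from the original article.

```cpp
#include <iostream>
#include <list>
#include <unordered_map>
#include <vector>

// Count page faults for a reference string under LRU replacement.
int countLRUFaults(const std::vector<int>& refs, std::size_t frames) {
    std::list<int> queue;  // front = most recently used
    std::unordered_map<int, std::list<int>::iterator> pos;
    int faults = 0;
    for (int page : refs) {
        auto it = pos.find(page);
        if (it != pos.end()) {
            queue.erase(it->second);       // hit: detach, re-insert at front
        } else {
            ++faults;                      // miss
            if (queue.size() == frames) {  // full: evict the rear (LRU) page
                pos.erase(queue.back());
                queue.pop_back();
            }
        }
        queue.push_front(page);
        pos[page] = queue.begin();
    }
    return faults;
}

int main() {
    std::vector<int> refs = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    std::cout << countLRUFaults(refs, 3) << "\n";  // prints 10
}
```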
C++ implementation using STL
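Below is a minimal sketch of one way to write the scheme with std::list playing the role of the doubly linked list queue and std::unordered_map playing the role of the hash. The names LRUCache, refer() and display(), as well as the driver's reference sequence, are illustrative assumptions of ours; the sequence is chosen so that the final cache contents match the output shown after the listing.

```cpp
#include <iostream>
#include <list>
#include <unordered_map>

// Sketch of an LRU cache built on std::list (the doubly linked list
// queue) and std::unordered_map (the hash). Names are illustrative.
class LRUCache {
    std::list<int> queue;  // front = most recently used, back = least
    std::unordered_map<int, std::list<int>::iterator> pos;  // page -> node
    std::size_t capacity;  // number of page frames

public:
    explicit LRUCache(std::size_t frames) : capacity(frames) {}

    // Reference a page: on a hit, move its node to the front;
    // on a miss, evict the rear node if the cache is full,
    // then insert the new page at the front.
    void refer(int page) {
        auto it = pos.find(page);
        if (it != pos.end()) {
            queue.erase(it->second);      // hit: detach the node
        } else if (queue.size() == capacity) {
            pos.erase(queue.back());      // miss on a full cache:
            queue.pop_back();             // evict the least recently used
        }
        queue.push_front(page);           // page is now most recently used
        pos[page] = queue.begin();
    }

    // Print cache contents from most to least recently used.
    void display() const {
        for (int page : queue)
            std::cout << page << " ";
        std::cout << "\n";
    }
};

int main() {
    LRUCache cache(4);                    // 4 page frames
    for (int page : {1, 2, 3, 1, 4, 5})  // assumed reference sequence
        cache.refer(page);
    cache.display();
}
```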
Output:
5 4 1 3
Java implementation using LinkedHashMap or LinkedHashSet
In Java, a java.util.LinkedHashMap constructed with accessOrder = true reorders entries on every access, and overriding removeEldestEntry() evicts the least recently used entry once the map outgrows the frame count. The idea can be made even shorter with a LinkedHashSet that maintains insertion order of elements: on each reference, remove the page and re-insert it so it moves to the most recently used end, and drop the oldest element when the set is full. This way the implementation becomes short and easy.
Python implementation using OrderedDict
In Python, collections.OrderedDict maps directly onto the two operations described above: move_to_end() brings a page to the most recently used end on a hit, and popitem(last=False) evicts the least recently used page when the cache is full.
This article is compiled by Aashish Barnwal and reviewed by the GeeksforGeeks team. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.