Cache is a small, fast random-access memory used by the CPU to reduce the average time taken to access main memory.
Multilevel caching is one of the techniques used to improve cache performance by reducing the "miss penalty". The miss penalty is the extra time required to bring data into the cache from main memory whenever there is a "miss" in the cache.
For a clear understanding, let us consider an example where the CPU makes 10 memory references to access the desired information, and examine this scenario under the following 3 cases of system design:
Case 1 : System Design without Cache Memory
Here the CPU communicates directly with the main memory, and no caches are involved.
In this case, the CPU must access the main memory all 10 times to obtain the desired information.
Case 2 : System Design with Cache Memory
Here the CPU first checks whether the desired data is present in the cache memory, i.e. whether there is a "hit" or a "miss" in the cache. Suppose 3 of the 10 references miss in the cache; then the main memory is accessed only 3 times. The miss penalty is reduced because the main memory is accessed fewer times than in the previous case.
Case 3 : System Design with Multilevel Cache Memory
Here cache performance is optimized further by introducing multiple cache levels; consider a 2-level cache design. Suppose there are 3 misses in the L1 cache, and out of these 3 misses, 2 also miss in the L2 cache; then the main memory is accessed only 2 times. The miss penalty is reduced considerably compared to the previous case, thereby improving cache performance.
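The two-level counting above can be sketched in Python. The address stream, cache contents, and helper name below are illustrative choices, picked so that 3 of the 10 references miss in L1 and 2 of those 3 also miss in L2, matching the example:

```python
# Minimal sketch of the two-level example: a reference goes to main
# memory only if it misses in both L1 and L2.

def count_main_memory_accesses(references, l1, l2):
    """Count the references that miss in both cache levels."""
    misses = 0
    for addr in references:
        if addr in l1:
            continue        # L1 hit: lower levels never consulted
        if addr in l2:
            continue        # L1 miss, L2 hit: main memory avoided
        misses += 1         # miss in both levels: go to main memory
    return misses

refs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # 10 memory references
l1_cache = {1, 2, 3, 4, 5, 6, 7}          # references 8, 9, 10 miss in L1
l2_cache = {8}                            # of those, 9 and 10 also miss in L2

print(count_main_memory_accesses(refs, l1_cache, l2_cache))  # prints 2
```

Passing an empty set for `l2` degrades this to the single-cache design of Case 2.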
From the above 3 cases we can observe that reducing the number of main memory references reduces the miss penalty and thus improves overall system performance. It is also important to note that in a multilevel cache design, the L1 cache is attached to the CPU and is small but fast, while the L2 cache is attached to the primary (L1) cache and is larger and slower than L1, but still faster than main memory.
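The "average time taken to access memory" mentioned at the start can be made concrete with the standard average memory access time (AMAT) formula. The cycle counts below are hypothetical, since the article gives only miss counts, not timings; the hit rates are taken from the 10-reference example (7/10 for L1, 1/3 for L2):

```python
# Hedged sketch comparing the three designs via average memory access
# time (AMAT). Latencies are assumed, illustrative cycle counts.

def amat_no_cache(mem_time):
    # Case 1: every reference pays the full main-memory latency.
    return mem_time

def amat_single_cache(hit_rate, cache_time, mem_time):
    # Case 2: every access pays the cache lookup; misses also pay memory.
    return cache_time + (1 - hit_rate) * mem_time

def amat_two_level(l1_rate, l1_time, l2_rate, l2_time, mem_time):
    # Case 3: L2 is consulted only on an L1 miss, and main memory
    # only on an L2 miss.
    return l1_time + (1 - l1_rate) * (l2_time + (1 - l2_rate) * mem_time)

# Assumed latencies: L1 = 1 cycle, L2 = 10 cycles, main memory = 100 cycles.
print(amat_no_cache(100))                    # 100 cycles per access
print(amat_single_cache(0.7, 1, 100))        # about 31 cycles per access
print(amat_two_level(0.7, 1, 1/3, 10, 100))  # about 24 cycles per access
```

Even with a modest L2 hit rate, the two-level design cuts the average access time further because only the references that miss in both levels pay the full main-memory latency.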