
Cache Memory Design


Prerequisite – Cache Memory

A detailed discussion of cache design is given in this article. The key elements are concisely summarized here. We will see that similar design issues must be addressed in dealing with main memory and cache design. They fall into the following categories: cache size, block size, mapping function, replacement algorithm, and write policy. Each is explained below.

  1. Cache Size: It turns out that reasonably small caches can have a significant impact on performance.
  2. Block Size: The block size is the unit of data exchanged between cache and main memory. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality: data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched data becomes less than the probability of reusing the data that must be evicted from the cache to make room for the new block (a small simulation of this trade-off follows this list).
  3. Mapping Function: When a new block of data is read into the cache, the mapping function determines which cache location the block will occupy. Two constraints affect the design of the mapping function. First, when one block is read in, another may have to be replaced. We would like to do this in a way that minimizes the probability of replacing a block that will be needed in the near future. The more flexible the mapping function, the more scope we have to design a replacement algorithm that maximizes the hit ratio. Second, the more flexible the mapping function, the more complex the circuitry required to search the cache to determine whether a given block is present (a direct-mapped address breakdown is sketched after this list).
  4. Replacement Algorithm: The replacement algorithm chooses, within the constraints of the mapping function, which block to replace when a new block is to be loaded into the cache and the cache already has all slots filled with other blocks. We would like to replace the block that is least likely to be needed again in the near future. Although it is impossible to identify such a block with certainty, a reasonably effective strategy is to replace the block that has been in the cache longest with no reference to it. This policy is referred to as the least-recently-used (LRU) algorithm. Hardware mechanisms are needed to identify the least-recently-used block (a software model of the idea follows this list).
  5. Write Policy: If the contents of a block in the cache are altered, it is necessary to write the block back to main memory before replacing it. The write policy dictates when that memory write operation takes place. At one extreme, the write occurs every time the block is updated; at the other extreme, the write occurs only when the block is replaced. The latter policy minimizes memory write operations but leaves main memory temporarily in an obsolete state. This can interfere with multiple-processor operation and with direct memory access by I/O modules (the two extremes are contrasted in the last sketch after this list).
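
To see the block-size trade-off from item 2 in action, the following Python sketch runs a toy fully associative LRU cache over a synthetic access trace. Everything here is an illustrative assumption: the cache size, block sizes, and trace shape are invented for the demonstration, not taken from any real machine.

```python
import random

def hit_ratio(block_size: int, cache_bytes: int, trace) -> float:
    """Toy fully associative LRU cache: fraction of accesses that hit."""
    capacity = cache_bytes // block_size      # number of blocks that fit
    lru = []                                  # block numbers, LRU first
    hits = 0
    for addr in trace:
        blk = addr // block_size
        if blk in lru:
            hits += 1
            lru.remove(blk)                   # will re-append as most recent
        elif len(lru) >= capacity:
            lru.pop(0)                        # evict least-recently-used block
        lru.append(blk)
    return hits / len(trace)

# Synthetic trace: short sequential runs (spatial locality) interleaved
# with repeated visits to a scattered "hot" working set (temporal locality).
random.seed(0)
hot = [random.randrange(0, 1 << 20) for _ in range(32)]
trace, i = [], 0
for _ in range(1000):
    base = random.randrange(0, 1 << 20)
    trace.extend(range(base, base + 16))      # 16-byte sequential run
    for _ in range(4):                        # 4 hot-set accesses
        trace.append(hot[i % len(hot)])
        i += 1

# On this trace the hit ratio typically rises with block size, then falls
# once so few blocks remain that the hot set no longer fits in the cache.
for bs in (4, 16, 64, 256, 512):
    print(f"block size {bs:4d}: hit ratio {hit_ratio(bs, 1024, trace):.2f}")
```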
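
To make the mapping function from item 3 concrete, here is a minimal sketch of how a direct-mapped cache splits a memory address into tag, index, and offset fields. The block size, line count, and example addresses are assumed values chosen for illustration.

```python
# Assumed parameters (for illustration only):
BLOCK_SIZE = 64        # bytes per block -> 6 offset bits
NUM_LINES = 1024       # cache lines     -> 10 index bits

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1   # log2(64) = 6
INDEX_BITS = NUM_LINES.bit_length() - 1     # log2(1024) = 10

def decompose(address: int):
    """Split an address into (tag, index, offset) for a direct-mapped cache."""
    offset = address & (BLOCK_SIZE - 1)
    index = (address >> OFFSET_BITS) & (NUM_LINES - 1)
    tag = address >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# Two addresses that differ only in their tag bits map to the same cache
# line, so loading one evicts the other: the rigidity of direct mapping.
print(decompose(0x0001_2340))   # (1, 141, 0)
print(decompose(0x0002_2340))   # (2, 141, 0) -- same index, different tag
```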
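
The LRU policy from item 4 is implemented with hardware mechanisms in a real cache, but its behavior can be modeled in a few lines of Python. This is a toy model using an OrderedDict as the recency queue, not a description of actual cache circuitry.

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of LRU replacement: evict the block unreferenced the longest."""
    def __init__(self, num_slots: int):
        self.num_slots = num_slots
        self.slots = OrderedDict()               # tag -> data, oldest first

    def access(self, tag, load_block):
        if tag in self.slots:
            self.slots.move_to_end(tag)          # hit: mark most recently used
            return self.slots[tag]
        if len(self.slots) >= self.num_slots:
            self.slots.popitem(last=False)       # full: evict the LRU block
        self.slots[tag] = load_block(tag)        # miss: fetch from main memory
        return self.slots[tag]

cache = LRUCache(num_slots=2)
fetch = lambda tag: f"block-{tag}"   # stand-in for a main-memory read
cache.access(1, fetch); cache.access(2, fetch)
cache.access(1, fetch)               # touch block 1, so block 2 becomes LRU
cache.access(3, fetch)               # evicts block 2, not block 1
print(list(cache.slots))             # [1, 3]
```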
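
Finally, the two write-policy extremes from item 5 can be contrasted in a short sketch: write-through updates main memory on every store, while write-back sets a dirty bit and defers the memory write until the block is evicted. The single-block cache below is a deliberate simplification invented for illustration.

```python
class OneBlockCache:
    """Toy single-block cache contrasting write-through and write-back."""
    def __init__(self, policy: str):
        self.policy = policy        # "write-through" or "write-back"
        self.memory = {}            # stand-in for main memory
        self.block = None           # (tag, value) currently cached
        self.dirty = False

    def write(self, tag, value):
        self.block = (tag, value)
        if self.policy == "write-through":
            self.memory[tag] = value        # memory updated on every write
        else:
            self.dirty = True               # main memory is now stale

    def evict(self):
        if self.policy == "write-back" and self.dirty:
            tag, value = self.block
            self.memory[tag] = value        # deferred write on replacement
            self.dirty = False
        self.block = None

wb = OneBlockCache("write-back")
wb.write(0x10, 42)
print(wb.memory.get(0x10))   # None: memory is stale until eviction
wb.evict()
print(wb.memory.get(0x10))   # 42: written back when the block is replaced
```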

Advantages of Cache Memory Design:

  • Faster Access Time: Cache memory is designed to provide faster access to frequently accessed data. It stores a copy of data that is frequently accessed from the main memory, allowing the CPU to retrieve it quickly. This results in reduced access latency and improved overall system performance.
  • Reduced Memory Latency: Cache memory sits closer to the CPU compared to the main memory. As a result, accessing data from the cache has lower latency compared to accessing data from the main memory. This helps in reducing the memory access time and improves the efficiency of the system.
  • Improved System Performance: By reducing the memory access time and providing faster access to frequently used data, cache memory significantly enhances the overall performance of the system. It helps in reducing CPU idle time, improving instruction execution speed, and increasing the throughput of the system.

Disadvantages of Cache Memory Design:

  • Limited Capacity: Cache memory has limited capacity compared to the main memory. It is designed to store a subset of frequently used data. As a result, it may not be able to accommodate all the data needed by the CPU. Cache capacity limitations can lead to cache misses, where the required data is not found in the cache, resulting in slower memory access from the main memory.
  • Increased Complexity: Cache memory adds complexity to the overall system design. It requires sophisticated algorithms and hardware mechanisms for cache management, including cache replacement policies, coherence protocols, and cache consistency maintenance. Managing cache coherence and maintaining data consistency between cache and main memory can be challenging in multiprocessor systems.
  • Cache Consistency Issues: In multiprocessor systems, cache coherence becomes a critical issue. When multiple processors have their own caches, ensuring the consistency of data across caches can be complex. Cache coherence protocols are required to ensure that all processors observe a consistent view of memory. Implementing cache coherence protocols adds complexity and can introduce additional overhead.
