
Cache Memory Design

Last Updated : 12 May, 2023

Prerequisite – Cache Memory

This article gives a detailed discussion of cache design; the key elements are concisely summarized here. We will see that similar design issues must be addressed in memory and cache design. They fall into the following categories: cache size, block size, mapping function, replacement algorithm, and write policy. These are explained below.

  1. Cache Size: It seems that reasonably small caches can have a significant impact on performance.
  2. Block Size: Block size is the unit of data exchanged between the cache and main memory. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality: the high probability that data in the vicinity of a referenced word will be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched data becomes less than the probability of reusing the data that must be moved out of the cache to make room for the new block. A small numeric sketch of this effect appears after this list.
  3. Mapping Function: When a new block of data is read into the cache, the mapping function determines which cache location the block will occupy. Two constraints affect the design of the mapping function. First, when one block is read in, another may have to be replaced. We would like to do this in a way that minimizes the probability of replacing a block that will be needed in the near future. The more flexible the mapping function, the more scope we have to design a replacement algorithm that maximizes the hit ratio. Second, the more flexible the mapping function, the more complex the circuitry required to search the cache to determine whether a given block is present. A direct-mapped address-decomposition sketch appears after this list.
  4. Replacement Algorithm: The replacement algorithm chooses, within the constraints of the mapping function, which block to replace when a new block is to be loaded into the cache and the cache already has all slots filled with other blocks. We would like to replace the block that is least likely to be needed again in the near future. Although it is impossible to identify such a block with certainty, a reasonably effective strategy is to replace the block that has been in the cache longest with no reference to it. This policy is referred to as the least-recently-used (LRU) algorithm. Hardware mechanisms are needed to identify the least-recently-used block; an LRU sketch appears after this list.
  5. Write Policy: If the contents of a block in the cache are altered, then it is necessary to write it back to main memory before replacing it. The write policy dictates when the memory write operation takes place. At one extreme, the write can occur every time the block is updated. At the other extreme, the write occurs only when the block is replaced. The latter policy minimizes memory write operations but leaves main memory in an obsolete state, which can interfere with multiple-processor operation and with direct memory access by I/O hardware modules. Both extremes are sketched after this list.
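
To make the locality effect concrete, here is a minimal Python sketch (not from the original article) that estimates the hit ratio of a purely sequential access pattern as block size grows. It models only the rising part of the curve; capturing the eventual decline would require a full cache simulation of a realistic workload.

```python
# Minimal sketch: hit ratio of a purely sequential word-access pattern
# as block size (in words) grows. Values are illustrative only; this
# models just the rising part of the curve described above.
def hit_ratio(num_accesses: int, block_size: int) -> float:
    # Each miss fetches a whole block; the following block_size - 1
    # sequential accesses then hit in the cache.
    misses = -(-num_accesses // block_size)  # ceiling division
    return 1 - misses / num_accesses

for bs in (1, 2, 4, 8, 16):
    print(bs, hit_ratio(1024, bs))  # 0.0, 0.5, 0.75, 0.875, 0.9375
```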
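
As an illustration of a mapping function, the following sketch splits a byte address for a hypothetical direct-mapped cache into tag, line index, and block offset. The block size and line count are assumptions chosen for the example, not values from the article.

```python
# Minimal sketch: splitting a byte address for a hypothetical
# direct-mapped cache. BLOCK_SIZE and NUM_LINES are assumed values
# chosen for illustration.
BLOCK_SIZE = 64   # bytes per block  -> 6 offset bits
NUM_LINES = 256   # cache lines      -> 8 index bits

OFFSET_BITS = (BLOCK_SIZE - 1).bit_length()
INDEX_BITS = (NUM_LINES - 1).bit_length()

def map_address(addr: int):
    """Return (tag, line index, block offset) for a byte address."""
    offset = addr & (BLOCK_SIZE - 1)
    index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(map_address(0x1A2B3C))  # (104, 172, 60) == (0x68, 0xAC, 0x3C)
```

With this fixed decomposition, every block has exactly one possible line, which keeps the search circuitry simple but leaves the replacement algorithm no choice; more flexible mappings trade extra comparison hardware for that freedom.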
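
A minimal sketch of LRU replacement for a single cache set, assuming a small illustrative capacity; real hardware approximates this recency tracking with dedicated age bits rather than a software dictionary.

```python
from collections import OrderedDict

# Minimal sketch of LRU replacement for one cache set; the capacity
# and tags are illustrative.
class LRUSet:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # tag -> cached block (data omitted)

    def access(self, tag) -> str:
        if tag in self.blocks:
            self.blocks.move_to_end(tag)      # mark most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used
        self.blocks[tag] = None               # fetch from main memory
        return "miss"

s = LRUSet(capacity=2)
print([s.access(t) for t in "ABACB"])
# ['miss', 'miss', 'hit', 'miss', 'miss']  (B was evicted to admit C)
```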
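
The two extremes of the write policy can be sketched as follows: write-through updates main memory on every write, while write-back defers the write until the block is evicted. The class names and dirty-flag layout here are illustrative, not a standard API.

```python
# Minimal sketch contrasting the two extremes of the write policy.
# `memory` stands in for main memory; the dirty flag is per block.
memory = {}

class WriteThroughCache:
    def __init__(self):
        self.lines = {}

    def write(self, addr, value):
        self.lines[addr] = value
        memory[addr] = value              # every write also updates memory

class WriteBackCache:
    def __init__(self):
        self.lines = {}                   # addr -> (value, dirty)

    def write(self, addr, value):
        self.lines[addr] = (value, True)  # defer the memory write

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:
            memory[addr] = value          # write back only on replacement
```

Between evictions, main memory and a write-back cache can disagree, which is exactly the obsolete state the article warns about for multiprocessor operation and I/O access.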

Advantages of Cache Memory Design:

  • Faster Access Time: Cache memory is designed to provide faster access to frequently accessed data. It stores a copy of data that is frequently accessed from the main memory, allowing the CPU to retrieve it quickly. This results in reduced access latency and improved overall system performance.
  • Reduced Memory Latency: Cache memory sits closer to the CPU compared to the main memory. As a result, accessing data from the cache has lower latency compared to accessing data from the main memory. This helps in reducing the memory access time and improves the efficiency of the system.
  • Improved System Performance: By reducing the memory access time and providing faster access to frequently used data, cache memory significantly enhances the overall performance of the system. It helps in reducing CPU idle time, improving instruction execution speed, and increasing the throughput of the system.

Disadvantages of Cache Memory Design:

  • Limited Capacity: Cache memory has limited capacity compared to the main memory. It is designed to store a subset of frequently used data. As a result, it may not be able to accommodate all the data needed by the CPU. Cache capacity limitations can lead to cache misses, where the required data is not found in the cache, resulting in slower memory access from the main memory.
  • Increased Complexity: Cache memory adds complexity to the overall system design. It requires sophisticated algorithms and hardware mechanisms for cache management, including cache replacement policies, coherence protocols, and cache consistency maintenance. Managing cache coherence and maintaining data consistency between cache and main memory can be challenging in multiprocessor systems.
  • Cache Consistency Issues: In multiprocessor systems, cache coherence becomes a critical issue. When multiple processors have their own caches, ensuring the consistency of data across caches can be complex. Cache coherence protocols are required to ensure that all processors observe a consistent view of memory. Implementing cache coherence protocols adds complexity and can introduce additional overhead.
