Difference between Direct-mapping, Associative Mapping & Set-Associative Mapping

Last Updated : 07 Jun, 2023

Cache: Cache memory is a small, fast section of SRAM placed between the processor (CPU) and main memory to speed up execution. It is a high-speed and comparatively expensive memory.

Cache hit ratio: It measures how effectively the cache fulfills requests for content.

Cache hit ratio = Number of cache hits / (Number of cache hits + Number of cache misses)

If the requested data is found in the cache, it is a cache hit; otherwise, it is a cache miss.
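The hit-ratio formula above can be sketched as a small Python helper (the function name and sample counts here are illustrative, not from the article):

```python
def cache_hit_ratio(hits, misses):
    """Hit ratio = hits / (hits + misses); returns 0.0 when there are no accesses."""
    total = hits + misses
    return hits / total if total else 0.0

# e.g. 90 hits and 10 misses give a hit ratio of 0.9
print(cache_hit_ratio(90, 10))
```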

Prerequisite – Cache Mapping Types: Direct Mapping, Associative Mapping & Set-Associative Mapping

Cache Mapping:

The process/technique of bringing data from main memory blocks into cache blocks is known as cache mapping.
The mapping techniques can be classified as:

  1. Direct Mapping
  2. Associative Mapping
  3. Set-Associative Mapping

1. Direct Mapping: In this technique, each block of main memory has exactly one possible location in the cache.
For example, block i of main memory maps to block j of the cache using the formula:

j = i modulo m
Where : i = main memory block number
       j = cache block number
       m = number of blocks in the cache

The address here is divided into 3 fields : Tag, Block & Word.

To map the memory address to cache: The BLOCK field of the address is used to access the cache's block. Then, the tag bits in the address are compared with the tag of that block. On a match, a cache hit occurs, as the required word is found in the cache. Otherwise, a cache miss occurs and the required word has to be brought into the cache from main memory. The word is then stored in the cache together with the new tag (the old tag is replaced).
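The lookup described above can be sketched as a minimal direct-mapped cache simulation. This is an illustrative sketch, not from the article; the cache and block sizes are arbitrary assumptions, and data storage is omitted so only the tag comparison is shown:

```python
NUM_BLOCKS = 8      # m: number of cache blocks (assumed for illustration)
BLOCK_SIZE = 16     # bytes per block (assumed; must be a power of two)

# Each cache line holds (valid, tag); data is omitted for brevity.
cache = [(False, None)] * NUM_BLOCKS

def access(address):
    """Return True on a cache hit, False on a miss (filling the line on a miss)."""
    block_number = address // BLOCK_SIZE     # i: main memory block number
    index = block_number % NUM_BLOCKS        # j = i modulo m
    tag = block_number // NUM_BLOCKS
    valid, stored_tag = cache[index]
    if valid and stored_tag == tag:
        return True                          # cache hit
    cache[index] = (True, tag)               # miss: old tag is replaced
    return False
```

Note how two addresses whose block numbers differ by a multiple of NUM_BLOCKS map to the same index and evict each other, which is the conflict behavior direct mapping suffers from.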

Example: Suppose we have a direct-mapped cache of 8 KB size with block size = 128 bytes, and the size of main memory is 64 KB. (Assuming word size = 1 byte.) Then:

Number of bits in the physical address = 16 bits (as memory size = 64 KB = 2^6 × 2^10 = 2^16 bytes)
Number of WORD bits = 7 bits (as block size = 128 bytes = 2^7)
Number of index bits = 13 bits (as cache size = 8 KB = 2^3 × 2^10 = 2^13 bytes)
Number of BLOCK bits = Number of index bits - Number of WORD bits = 13 - 7 = 6 bits

                                                                    OR
(Number of cache blocks = cache size / block size = 8 KB / 128 bytes = 8 × 1024 bytes / 128 bytes = 2^6 blocks → 6 bits)
Number of TAG bits = Number of bits in the physical address - Number of index bits = 16 - 13 = 3 bits
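The arithmetic above can be checked with a few lines of Python (variable names are our own; the sizes are the ones given in the example):

```python
from math import log2

MEM_SIZE = 64 * 1024    # 64 KB main memory
CACHE_SIZE = 8 * 1024   # 8 KB cache
BLOCK_SIZE = 128        # bytes per block

addr_bits  = int(log2(MEM_SIZE))      # 16 bits of physical address
word_bits  = int(log2(BLOCK_SIZE))    # 7 WORD bits
index_bits = int(log2(CACHE_SIZE))    # 13 index bits
block_bits = index_bits - word_bits   # 6 BLOCK bits
tag_bits   = addr_bits - index_bits   # 3 TAG bits
```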

2. Associative Mapping: Here, a main memory block can be mapped to any of the cache blocks. The memory address has only 2 fields here: TAG & WORD. This technique is called fully associative cache mapping.

Example: If we have a fully associative mapped cache of 8 KB size with block size = 128 bytes and say, the size of main memory is = 64 KB.  Then:

Number of bits in the physical address = 16 bits (as memory size = 64 KB = 2^6 × 2^10 = 2^16 bytes)
Number of bits in the block offset = 7 bits (as block size = 128 bytes = 2^7)
Number of TAG bits = Number of bits in the physical address - Number of bits in the block offset = 16 - 7 = 9 bits
Number of cache blocks = cache size / block size = 8 KB / 128 bytes = 8 × 1024 bytes / 128 bytes = 2^6 blocks
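As before, the fully associative field widths can be verified in Python (names are ours; values are from the example):

```python
from math import log2

MEM_SIZE = 64 * 1024    # 64 KB main memory
CACHE_SIZE = 8 * 1024   # 8 KB cache
BLOCK_SIZE = 128        # bytes per block

addr_bits   = int(log2(MEM_SIZE))        # 16 bits of physical address
offset_bits = int(log2(BLOCK_SIZE))      # 7 block-offset (WORD) bits
tag_bits    = addr_bits - offset_bits    # 9 TAG bits: no index field at all
num_blocks  = CACHE_SIZE // BLOCK_SIZE   # 64 = 2^6 cache blocks
```

The absence of an index field is what forces the hardware to compare the 9-bit tag against all 64 blocks in parallel.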

3. Set-Associative Mapping: It combines the advantages of both direct & associative mapping.
Here, the cache consists of a number of sets, each of which consists of a number of blocks (lines). The relationships are:

n = w * L
i = j modulo w
where
i : cache set number
j : main memory block number
n : number of blocks in the cache
w : number of sets
L : number of lines in each set

This is referred to as L-way set-associative mapping. With this mapping, block Bj of main memory can be placed in any of the L blocks of set (j modulo w).

To map the memory address to cache: Using the SET field in the memory address, we access the particular set of the cache. Then, the tag bits in the address are compared with the tags of all L blocks within that set. On a match, a cache hit occurs, as the required word is found in the cache. Otherwise, a cache miss occurs and the required word has to be brought into the cache from main memory. If the set is full, a block is replaced according to the replacement policy used.
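The set lookup above can be sketched as a small 2-way set-associative simulation. This is an illustrative sketch with assumed sizes and a FIFO replacement policy chosen for simplicity (the article does not prescribe a policy); data storage is again omitted:

```python
NUM_SETS = 4        # w: number of sets (assumed for illustration)
WAYS = 2            # L: lines per set (2-way)
BLOCK_SIZE = 16     # bytes per block (assumed)

# Each set holds up to WAYS tags; oldest-first order gives FIFO replacement.
sets = [[] for _ in range(NUM_SETS)]

def access(address):
    """Return True on a cache hit, False on a miss (filling the set on a miss)."""
    block = address // BLOCK_SIZE       # j: main memory block number
    set_index = block % NUM_SETS        # i = j modulo w
    tag = block // NUM_SETS
    ways = sets[set_index]
    if tag in ways:                     # compare against all L tags in the set
        return True
    if len(ways) == WAYS:
        ways.pop(0)                     # set full: evict oldest block (FIFO)
    ways.append(tag)
    return False
```

Unlike the direct-mapped sketch, two blocks that map to the same set can now reside in the cache simultaneously, so alternating between them no longer causes repeated misses.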

Example: Suppose we have a 2-way set-associative cache of 8 KB size with block size = 128 bytes, and the size of main memory is 64 KB. (Assuming word size = 1 byte.) Then:

Number of bits in the physical address = 16 bits (as memory size = 64 KB = 2^6 × 2^10 = 2^16 bytes)
Number of cache blocks = cache size / block size = 8 KB / 128 bytes = 8 × 1024 bytes / 128 bytes = 2^6 cache blocks
Number of main memory blocks = MM size / block size = 64 KB / 128 bytes = 64 × 1024 bytes / 128 bytes = 2^9 MM blocks
Number of sets = Number of cache blocks / L = 2^6 / 2 = 2^5 cache sets (L = 2, as it is 2-way set-associative mapping)
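These counts can likewise be checked in Python (names are ours; values are from the example):

```python
MEM_SIZE = 64 * 1024    # 64 KB main memory
CACHE_SIZE = 8 * 1024   # 8 KB cache
BLOCK_SIZE = 128        # bytes per block
WAYS = 2                # L = 2 for 2-way set-associative

cache_blocks = CACHE_SIZE // BLOCK_SIZE   # 64  = 2^6 cache blocks
mm_blocks    = MEM_SIZE // BLOCK_SIZE     # 512 = 2^9 main memory blocks
num_sets     = cache_blocks // WAYS       # 32  = 2^5 sets
```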

Difference between Direct-mapping, Associative Mapping & Set-Associative Mapping:

| | Direct Mapping | Associative Mapping | Set-Associative Mapping |
|---|---|---|---|
| Comparisons needed | Only one, because the direct formula gives the exact cache location. | One per cache block: the cache control logic must examine every block's tag simultaneously to determine whether a block is in the cache. | Equal to the number of blocks per set (L), as the set can contain more than one block. |
| Address fields | 3 fields: TAG, BLOCK & WORD. BLOCK & WORD together form the index: the least significant WORD bits identify a word within a block, the BLOCK bits select a cache block, and the TAG bits are the most significant bits. | 2 fields: TAG & WORD. | 3 fields: TAG, SET & WORD. |
| Block placement | Exactly one possible cache location for each main memory block, given by the fixed formula. | A main memory block can be mapped to any cache block. | A main memory block can be mapped to any block within one particular set. |
| Effect on hit ratio | If the processor frequently accesses two memory blocks that map to the same cache line, the hit ratio decreases. | Frequent accesses to different memory blocks cause no such conflicts. | Conflicts are reduced: the hit ratio suffers only when more than L frequently used blocks map to the same set. |
| Search time | Least, since each block has only one possible location. | Most, as the cache control logic examines every block's tag. | Increases with the number of blocks per set. |
| Index size | Given by the number of blocks in the cache. | Zero (no index bits). | Given by the number of sets in the cache. |
| Tag bits | Fewest. | Most. | Fewer than associative mapping, more than direct mapping. |

Advantages of Direct Mapping -

  • Simplest type of mapping.
  • Fast, as only the tag field needs to be matched while searching for a word.
  • Comparatively less expensive than associative mapping.

Advantages of Associative Mapping -

  • It is fast.
  • Flexible: a main memory block can be placed in any cache line.

Advantages of Set-Associative Mapping -

  • It gives better performance than the direct and associative mapping techniques.

Disadvantages of Direct Mapping -

  • Performance drops when frequently used blocks map to the same cache line, since each access replaces the other block's data and tag (conflict misses).

Disadvantages of Associative Mapping -

  • Expensive, because the tag (full block address) must be stored along with the data, and hardware is needed to compare all tags in parallel.

Disadvantages of Set-Associative Mapping -

  • Cost increases as the set size (associativity) increases.

