Difference between Direct-mapping, Associative Mapping & Set-Associative Mapping

  • Last Updated : 17 May, 2021

Prerequisite – Cache Mapping Types: Direct Mapping, Associative Mapping & Set-Associative Mapping

Cache :
Cache memory is a small section of fast SRAM placed between the processor (CPU) and main memory to speed up execution. Because SRAM is high-speed but expensive, a system uses a small amount of SRAM for the cache alongside a larger amount of cheaper DRAM for main memory.


Cache hit ratio : It measures how effectively the cache fulfills requests for content.



Cache hit ratio = Number of cache hits / (Number of cache hits + Number of cache misses)

If data has been found in the cache, it is a cache hit else a cache miss.
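The ratio above can be computed directly; the counts below are hypothetical, chosen only to illustrate the formula:

```python
# Hypothetical hit/miss counts for illustration (not from the article).
hits = 450
misses = 50

# Cache hit ratio = hits / (hits + misses)
hit_ratio = hits / (hits + misses)
print(hit_ratio)  # 0.9
```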

Cache Mapping :
The process/technique of bringing data from main memory blocks into cache blocks is known as cache mapping.
The mapping techniques can be classified as :

  1. Direct Mapping
  2. Associative Mapping
  3. Set-Associative Mapping

1. Direct Mapping : 
In this technique, each block from main memory has only one possible place in the cache organization. 
Specifically, every block i of main memory is mapped to block j of the cache using the formula : 

j = i modulo m
where : i = main memory block number
        j = cache block number
        m = number of blocks in the cache

The address here is divided into 3 fields : Tag, Block & Word.

To map the memory address to cache
The BLOCK field of the address is used to access the cache's BLOCK. Then, the tag bits in the address are compared with the tag of that block. If they match, a cache hit occurs, as the required word is found in the cache. Otherwise, a cache miss occurs and the required word has to be brought into the cache from main memory. The word is then stored in the cache together with the new tag (the old tag is replaced).
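The lookup just described can be sketched as a small simulation. This is a minimal sketch with illustrative names (`NUM_BLOCKS`, `access`), tracking only valid/tag metadata, not actual data:

```python
# Minimal direct-mapped cache sketch (names are illustrative, not from the article).
NUM_BLOCKS = 8  # m: number of blocks in the cache

# Each cache block holds (valid, tag) metadata; the data itself is omitted.
cache = [{"valid": False, "tag": None} for _ in range(NUM_BLOCKS)]

def access(mem_block):
    """Return True on a cache hit, False on a miss (and fill the block)."""
    j = mem_block % NUM_BLOCKS       # j = i modulo m
    tag = mem_block // NUM_BLOCKS    # remaining high-order bits form the tag
    line = cache[j]
    if line["valid"] and line["tag"] == tag:
        return True                  # hit: required word found in the cache
    # miss: bring the block in, replacing the old tag
    line["valid"], line["tag"] = True, tag
    return False

print(access(3))   # first access: miss
print(access(3))   # same block again: hit
print(access(11))  # 11 % 8 == 3, same slot but different tag: conflict miss
```

Note how blocks 3 and 11 contend for the same cache slot; this conflict behavior is exactly what the comparison table at the end of the article refers to.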

Example – 
If we have a direct-mapped cache of 8 KB size with block size = 128 bytes and, say, a main memory of size 64 KB (assuming word size = 1 byte), then :

Number of bits for the physical address = 16 bits (as memory size = 64 KB = 2^6 × 2^10 = 2^16)
Number of WORD bits = 7 bits (as block size = 128 bytes = 2^7)
Number of INDEX bits = 13 bits (as cache size = 8 KB = 2^3 × 2^10 = 2^13)
Number of BLOCK bits = Number of INDEX bits - Number of WORD bits = 13 - 7 = 6 bits

                                                                    OR
(Number of cache blocks = Cache size / Block size = 8 KB / 128 bytes = 8 × 1024 bytes / 128 bytes = 2^6 blocks → 6 bits)
Number of TAG bits = Number of bits for the physical address - Number of INDEX bits = 16 - 13 = 3 bits



2. Associative Mapping :
Here a main memory block can be mapped to any cache block. The memory address has only 2 fields here : word & tag. This technique is called fully associative cache mapping.

Example – 
If we have a fully associative mapped cache of 8 KB size with block size = 128 bytes and, say, a main memory of size 64 KB, then :
Number of bits for the physical address = 16 bits (as memory size = 64 KB = 2^6 × 2^10 = 2^16)
Number of bits in block offset = 7 bits (as block size = 128 bytes = 2^7)
Number of TAG bits = Number of bits for the physical address - Number of bits in block offset = 16 - 7 = 9 bits
Number of cache blocks = Cache size / Block size = 8 KB / 128 bytes = 8 × 1024 bytes / 128 bytes = 2^6 blocks.
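As in the direct-mapped example, these widths follow from a short calculation (same sizes as the example; names are illustrative):

```python
from math import log2

MEM_SIZE   = 64 * 1024  # 64 KB main memory
CACHE_SIZE = 8 * 1024   # 8 KB cache
BLOCK_SIZE = 128        # bytes per block

addr_bits   = int(log2(MEM_SIZE))      # 16: physical address bits
offset_bits = int(log2(BLOCK_SIZE))    # 7:  block offset (WORD)
tag_bits    = addr_bits - offset_bits  # 9:  everything else is TAG
num_blocks  = CACHE_SIZE // BLOCK_SIZE # 64 = 2^6 cache blocks

print(addr_bits, offset_bits, tag_bits, num_blocks)  # 16 7 9 64
```

With fully associative mapping there is no index field at all, which is why the tag here (9 bits) is wider than in the direct-mapped case (3 bits).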

3. Set – Associative Mapping :
It is the combination of advantages of both direct & associative mapping. 
Here, the cache consists of a number of sets, each of which consists of a number of blocks. The relationships are :

n = w * L
i = j modulo w
where
i : cache set number
j : main memory block number
n : number of blocks in the cache
w : number of sets
L : number of lines in each set

This is referred to as L-way set-associative mapping. Using this mapping, block Bj of main memory can be placed in any of the L blocks of set i = j modulo w.

To map the memory address to cache –
Using the set field in the memory address, we access the particular set of the cache. Then, the tag bits in the address are compared with the tags of all L blocks within that set. If one matches, a cache hit occurs, as the required word is found in the cache. Otherwise, a cache miss occurs and the required word has to be brought into the cache from main memory. If the set is full, a block is replaced according to the replacement policy used.

Example : If we have a 2-way set-associative cache of 8 KB size with block size = 128 bytes and, say, a main memory of size 64 KB (assuming word size = 1 byte), then :

Number of bits for the physical address = 16 bits (as memory size = 64 KB = 2^6 × 2^10 = 2^16)
Number of cache blocks = Cache size / Block size = 8 KB / 128 bytes = 8 × 1024 bytes / 128 bytes = 2^6 cache blocks.
Number of main memory blocks = MM size / Block size = 64 KB / 128 bytes = 64 × 1024 bytes / 128 bytes = 2^9 MM blocks.
Number of sets of size 2 = Number of cache blocks / L = 2^6 / 2 = 2^5 cache sets. (L = 2 as it is 2-way set-associative mapping)
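These counts, and the resulting field widths (which the example leaves implicit), can be verified with the same style of arithmetic:

```python
from math import log2

MEM_SIZE   = 64 * 1024  # 64 KB main memory
CACHE_SIZE = 8 * 1024   # 8 KB cache
BLOCK_SIZE = 128        # bytes per block
WAYS       = 2          # L = 2 (2-way set-associative)

cache_blocks = CACHE_SIZE // BLOCK_SIZE  # 64 = 2^6
mm_blocks    = MEM_SIZE // BLOCK_SIZE    # 512 = 2^9
num_sets     = cache_blocks // WAYS      # 32 = 2^5

# Field widths implied by the counts above:
set_bits  = int(log2(num_sets))    # 5: SET field
word_bits = int(log2(BLOCK_SIZE))  # 7: WORD field
tag_bits  = int(log2(MEM_SIZE)) - set_bits - word_bits  # 4: TAG field

print(cache_blocks, mm_blocks, num_sets, tag_bits)  # 64 512 32 4
```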

Difference between Direct-mapping, Associative Mapping & Set-Associative Mapping :

| # | Direct Mapping | Associative Mapping | Set-Associative Mapping |
|---|----------------|---------------------|-------------------------|
| 1. | Needs only one comparison, because a direct formula gives the cache block to check. | Needs comparison with all tag bits: the cache control logic must examine every block's tag simultaneously to determine whether a block is in the cache. | Needs a number of comparisons equal to the number of blocks per set, since a set can contain more than one block. |
| 2. | Main memory address is divided into 3 fields : TAG, BLOCK & WORD. BLOCK & WORD together form the index. The WORD bits (least significant) identify a unique word within a block, the BLOCK bits select one of the cache blocks, and the TAG bits are the most significant. | Main memory address is divided into 2 fields : TAG & WORD. | Main memory address is divided into 3 fields : TAG, SET & WORD. |
| 3. | Each main memory block has exactly one possible location in the cache, fixed by the formula. | A main memory block can be mapped to any cache block. | A main memory block maps to one particular set, but can occupy any block within that set. |
| 4. | If the processor frequently accesses two memory blocks that map to the same cache block, the cache hit ratio decreases. | Such placement conflicts cannot occur, so frequent accesses to different main memory blocks do not by themselves lower the hit ratio. | Conflicts are reduced compared to direct mapping, because each set can hold more than one block, so the hit ratio degrades less. |
| 5. | Search time is lowest, since there is only one possible cache location for each main memory block. | Search time is highest, as the cache control logic examines every block's tag for a match. | Search time increases with the number of blocks per set. |