34-Cache Memory Block Identification in Direct Mapping, Associate Mapping and Set Associate-06-03-2

The document discusses cache memory mapping techniques. It describes direct mapping where each block of main memory can only map to one specific line in the cache. The cache line is determined by taking the modulus of the main memory block address and the number of cache lines. An example is provided where a cache with 4 lines is mapped to main memory with 16 blocks, demonstrating that blocks 0, 4, 8, 12 map to cache line 0, and blocks 1, 5, 9, 13 map to cache line 1, following a direct mapping approach.

Computer Architecture and

Organization
• Course Code: BCSE205L
• Course Type: Theory (ETH)
• Slot: A2+TA2
• Timings:
Monday 14:00-14:50
Wednesday 15:00-15:50
Friday 16:00-16:50

Dr. Venkata Phanikrishna B, SCOPE, VIT-Vellore


Module:4
Memory System Organization and Architecture
Cache Memory: Mapping and Block Identification

• An item found to be in the cache is said to correspond to a cache hit.
• Items not currently in the cache are called cache misses.
• The additional number of cycles required to serve a miss is called the 'miss penalty'.
• Hit ratio = (Number of references found in the cache) / (Total number of memory references)
• Miss ratio = (Number of references missed from the cache) / (Total number of memory references)
• Hit ratio + Miss ratio = 1
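As a quick sketch of the two ratios above (the function names are illustrative, not from the slides):

```python
def hit_ratio(hits: int, total_refs: int) -> float:
    """Fraction of memory references found in the cache."""
    return hits / total_refs

def miss_ratio(hits: int, total_refs: int) -> float:
    """Fraction of references that missed; complements the hit ratio."""
    return (total_refs - hits) / total_refs

# Example: 45 of 50 references hit the cache.
# hit_ratio(45, 50) -> 0.9, miss_ratio(45, 50) -> 0.1; the two always sum to 1.
```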
Cache Memory Management Techniques
• Block Placement: Direct Mapping, Set Associative, Fully Associative
• Block Identification: Tag, Index, Offset
• Block Replacement: FCFS, LRU, Random
• Update Policies: Write Through, Write Back, Write Around, Write Allocate
Cache Memory Management Techniques
• Block Identification: Tag, Index, Offset
How to manage the cache and main memory collectively (together).
• Paging is the management of the hard drive and main memory together.
• Main memory is broken up into units known as frames, and each page fits into one frame. (Note: pages are used to organise secondary memory.)
• In the same way, to manage main memory and the cache together, the cache is divided into what are referred to as lines, and main memory is organised into units referred to as blocks.
• Each block size is chosen to be equal to the line size.
Note:
➢ MM can thus be divided into both blocks and frames.
➢ When talking about the HDD and main memory, we talk about page movement.
➢ When talking about MM and the cache, we talk about blocks.
How to manage the cache and main memory collectively (Example).
Word: the smallest addressable unit in the memory.
Block: a collection of one or more words.
Assume the word size is 1 byte, i.e. 1 word = 1 byte, which means this system is byte-addressable.

Consider a MM of 64 words and a cache of 16 words, with four words in each block.
1 block = 4 words.
How many blocks are there in the MM if one block is four words?
Blocks in MM = 64 words / 4 words per block = 16 blocks.
Number of cache lines = 16 words / 4 words per line = 4 lines.
There are 64 words, so the physical address is 6 bits, since 2^6 = 64.

What does it signify when the CPU generates an address?

To find the word it needs, the CPU generates a 6-bit physical address (PA). The first step is to find out from this PA whether the word is in the cache or not. On a cache hit, the CPU retrieves the word from the cache; on a miss, the required block is fetched from main memory into the cache.
How to manage the cache and main memory collectively (Example).
• Suppose the CPU generates the address 000101. What does the CPU actually want?
• The CPU is requesting word 5.
• How does the CPU get word five?
• MM is organised as blocks, so how does the CPU locate word number five?
• There are 16 blocks (2^4), and each block size is 4 words (2^2). Therefore the CPU-generated address is divided into two parts: block number and block offset.

000101 ➔ 0001 | 01
How to manage the cache and main memory collectively (Example).
Block number | Block offset

000101 ➔ 0001 | 01

The two least significant bits give the block offset: the position of the required word within the block.

The four most significant bits give the block number: the block in which the CPU-required word is present.

The block number is 0001, so look into block 1; the offset 01 means 1. Words in a block are numbered from 0, so offset 1 is the second word of block 1, which is word 5.
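The block-number/offset split above is just a shift and a mask. A minimal sketch, with the constants taken from the running example (4-bit block number, 2-bit offset):

```python
OFFSET_BITS = 2  # block size = 4 words -> 2 offset bits

def split_address(addr: int):
    """Split a 6-bit physical address into (block number, block offset)."""
    block_number = addr >> OFFSET_BITS              # upper 4 bits
    block_offset = addr & ((1 << OFFSET_BITS) - 1)  # lower 2 bits
    return block_number, block_offset

# 0b000101 (word 5) -> block 1, offset 1: the second word of block 1.
# 0b001010 (word 10) -> block 2, offset 2: the third word of block 2.
```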
How to manage the cache and main memory collectively (Example).
Another example: 001010

0010 | 10

The CPU-required word is word 10.

From the block number, 0010 ➔ it is block 2.
From the block offset, 10 ➔ 2, i.e. the third word of that block (because word numbering starts from zero).
How to manage the cache and main memory collectively.
When the cache is included, the process first checks whether the particular block being looked for is present in the cache or not.
• If it is already in the cache, the block offset is used to obtain the needed word.
• If it isn't there, that specific block is fetched from MM and placed into the cache.

We know that MM has a larger capacity (a higher number of blocks) than the cache. Naturally, it is not possible to keep all the blocks of MM in the cache at once.
Cache: Mappings (Cache Memory Management Techniques)
Mapping: a cache memory management technique. The rule by which each memory block in main memory is assigned to a line in the cache is known as mapping.
• Mapping answers the following questions:
• What (which block) can be stored in cache memory, and how?
• Where are the MM blocks going to be kept if you have to place them in the cache?
• In other words, which line of the cache holds a given block of MM?

Various kinds of mappings have been proposed:

• Direct Mapping
• Associative or Fully Associative Mapping
• Set Associative Mapping
Cache: Mappings (Cache Memory Management Techniques)
• Block Placement: Direct Mapping, Set Associative, Fully Associative
Cache: Mappings (Cache Memory Management Techniques)
Direct mapping
• A particular block of main memory can map only to a particular line of the cache.
• The line number of the cache to which a particular block maps is given by:

Cache line number = (Main Memory Block Address) Modulo (Number of lines in Cache)

[Figure: in direct mapping, the physical address is divided into tag, line number, and block offset fields.]


Cache: Mappings (Cache Memory Management Techniques)
Direct mapping (Example Explanation)
• From the previous example: the cache has lines 0 to 3 (1 block = 4 words, 16 words in the cache, so 16/4 = 4 lines) and MM has blocks 0 to 15 (MM is 64 words and 1 block = 4 words).
• Cache lines 0 to 3 are first filled with MM blocks 0 to 3.
• After that, MM blocks 4 to 7 fill lines 0 to 3 once more.
• Like this, all MM blocks are assigned to lines of the cache.
• So the MM's blocks 0, 4, 8, and 12 all map to line 0 of cache memory.
• It is a many-to-one function.
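The modulo placement rule can be sketched in a couple of lines (NUM_LINES taken from the running example):

```python
NUM_LINES = 4  # 16-word cache with 4-word blocks -> 4 lines

def cache_line(block: int) -> int:
    """Direct mapping: block b always goes to line b mod NUM_LINES."""
    return block % NUM_LINES

# Blocks 0, 4, 8, 12 all map to line 0; blocks 1, 5, 9, 13 all map to line 1.
```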
Cache: Mappings (Cache Memory Management Techniques)
Direct mapping (Example Explanation)
• If you ever want to bring in block 0, 4, 8, or 12, it must always be placed in line 0 of the cache, nowhere else.

• Similarly, MM blocks 1, 5, 9, and 13 are always placed in line 1.
Cache: Mappings (Cache Memory Management Techniques)
Direct mapping (Example Explanation)
If we examine the addresses from the CPU (from the earlier example), they are 6 bits: the block number occupies 4 bits and the block offset 2 bits.

If we write out the 4-bit block numbers, we get the binary patterns 0000 through 1111.
Cache: Mappings (Cache Memory Management Techniques)
Direct mapping (Example Explanation)
Blocks 0, 4, 8, and 12 go into line 0 of the cache. If you observe their 4-bit block numbers, the two least significant bits are all 00. From these bits we can find the line number of each block.

Therefore, of the 6-bit CPU address, the two least significant bits indicate the block offset (i.e., the word's location in the block) and the next four bits indicate the block number. Within the block number field, the two least significant bits indicate the line number of the cache, and the remaining two bits form the tag.
Cache: Mappings (Cache Memory Management Techniques)
Direct mapping: TAG field in Block Identification
From the 6-bit CPU address,
• the two least significant bits indicate the block offset (i.e., the word's location in the block), and
• the next four bits indicate the block number.
• Within the block number field, the two least significant bits indicate the line number of the cache, and
• the remaining two bits form the tag.

• The block number determines which block is needed, and the block offset determines where the word is located inside that block in MM.
• The two LSBs of the block number give the line number of the cache, identifying the line on which that word can be found.
• What use, then, does the TAG field, formed from the remaining bits of the block number, serve?


Cache: Mappings (Cache Memory Management Techniques): Direct mapping
Why do we require a tag field?
• Four different MM blocks compete for each cache line.
• As a result, a cache line could be holding any one of four blocks (from the considered example, line 0 could hold any one of blocks 0, 4, 8, and 12).
• The 2 tag bits therefore reveal which of the four is currently present.
The tag values identify the blocks as follows: for line 0, tag 00 ➔ block 0, tag 01 ➔ block 4, tag 10 ➔ block 8, tag 11 ➔ block 12.


Cache: Mappings (Cache Memory Management Techniques): Direct mapping
Direct mapping (Hit and Miss Example Explanation)

• Suppose the address generated by the CPU is: 110101
• The CPU first checks the cache to see whether that particular block is present or not.
• From the given address, it goes to the line number given by the middle bits, 01.
• That is 1, so it looks into line 1.
• It then compares the stored tag with the tag bits 11.
• If they match, the CPU knows that the required block is present in line 01 (i.e., 1 in decimal), so it reads the word at offset 01, the second location in that line.


Cache Memory Management Techniques
• Block Identification: Tag, Index, Offset
Cache: Mappings (Cache Memory Management Techniques): Direct mapping
Q: Given MM size = 128KB, cache size = 16KB, and block size = 256B, find the tag bits and the tag directory size. The memory is byte-addressable.

Byte-addressable means every byte present in the memory must have its own address.
From the given 128KB, there are 128K bytes present.
Convert 128K to a power of two to get the number of address bits: 128K = 2^7 * 2^10 = 2^17 ➔ 17 bits.
So the physical address (PA) for the 128KB MM is 17 bits.
Initially, according to the MM, this PA represents two things: block number and block offset. According to the cache, the same PA is represented as tag, line number, and block offset.

PA = 17 bits, the cache size is 16KB, and the block size is 256B.

Block offset to represent the block size: 256B = 2^8 ➔ 8 bits.

Cache: Mappings (Cache Memory Management Techniques): Direct mapping

Q: Given MM size = 128KB, cache size = 16KB, and block size = 256B, find the tag bits and the tag directory size. The memory is byte-addressable.

The physical address (PA) for the 128KB MM is 17 bits.
• Block offset: block size 256B = 2^8 ➔ 8 bits.
• Number of lines = total cache size / block size (line size) = 16KB/256B = 2^4 * 2^10 / 2^8 = 2^6, so 6 bits are used to represent the line number.
• Of the 17 PA bits, excluding the 6 line-number bits and the 8 block-offset bits, the remaining 17 - (6 + 8) = 3 bits are tag bits.
• Equivalently, tag bits = log2(size of MM / size of cache) = log2(128KB/16KB) = log2(2^7 * 2^10 / (2^4 * 2^10)) = log2(2^3) ➔ 3 bits.

Tag directory size: each line's tag information is 3 bits, and every line carries a tag. So the tag directory size is 3 * (number of lines) = 3 * 2^6 = 192 bits.
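The same arithmetic can be checked in a few lines of Python (a sketch; the variable names are illustrative):

```python
from math import log2

MM_SIZE    = 128 * 1024   # bytes, byte-addressable
CACHE_SIZE = 16 * 1024
BLOCK_SIZE = 256

pa_bits     = int(log2(MM_SIZE))                    # 17-bit physical address
offset_bits = int(log2(BLOCK_SIZE))                 # 8 bits
num_lines   = CACHE_SIZE // BLOCK_SIZE              # 64 lines
line_bits   = int(log2(num_lines))                  # 6 bits
tag_bits    = pa_bits - line_bits - offset_bits     # 17 - 6 - 8 = 3 bits

tag_directory_bits = tag_bits * num_lines           # 3 * 64 = 192 bits
```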
Tag | Index | Offset

• Main memory size = 2^m
• Main memory block size = 2^n
• Cache size = 2^k
• Number of blocks in main memory = 2^m / 2^n = 2^(m-n)
• Number of lines in the cache = 2^k / 2^n = 2^(k-n)

Tag (m-k) | Index (k-n) | Offset (n)

• Candidates (main memory blocks) for each cache line (Tag) = 2^(m-n) / 2^(k-n) = 2^(m-k)
• Number of lines in the cache (Index) = 2^k / 2^n = 2^(k-n)
• Each cache line contains the same number of bytes as a memory block.
• Offset = n; Index = k - n; Tag = m - k
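The three field widths follow directly from m, n, and k; a minimal sketch using the slide's notation:

```python
def field_widths(m: int, n: int, k: int):
    """Direct-mapped field widths for MM size 2^m, block size 2^n, cache size 2^k."""
    offset = n       # word/byte within a block
    index  = k - n   # cache line number
    tag    = m - k   # distinguishes the 2^(m-k) candidate blocks per line
    return tag, index, offset

# Running example: m=6 (64 words), n=2 (4-word blocks), k=4 (16-word cache)
# -> tag=2, index=2, offset=2 bits.
# Worked problem: m=17 (128KB), n=8 (256B blocks), k=14 (16KB cache)
# -> tag=3, index=6, offset=8 bits.
```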
Cache: Mappings (Cache Memory Management Techniques): Direct mapping
If the memory block number is m and the number of cache lines is n, then the mth block is present in the kth line of the cache:
k = m % n

Explanation
We know that the physical address is directly divided into two parts: the block number field and the block offset.
If the block number consists of m bits, then the number of blocks is 2^m.
In the block number field, some least significant bits (let us assume l bits) are used to represent the line number in the cache. Therefore, the number of lines is 2^l.
If the m block-number bits represent some number x, then the line number where the block is present is x mod 2^l.
This means that if we take the MM block number and divide it by 2^l, the remainder is nothing but the line number.
Cache: Mappings (Cache Memory Management Techniques): Direct mapping
If the memory block number is m and the number of cache lines is n, then the mth block is present in the kth line of the cache:
k = m % n

For instance, where do you put the 100th MM block if there are 4 cache lines?
The 100th MM block will be in line 0 of the cache, because 100 mod 4 = 0.

So any xth main memory block will be placed in line x mod (number of cache lines), i.e. x mod 2^l, where l is the number of line-number bits.

If the cache has 4 lines (line 0 to line 3), then according to direct mapping the following blocks are placed as shown: 5, 6, 4, 8, 9, 12, 15, and 20.

➢ 5 goes to 5%4=1 ➔ Line 1
➢ 6 goes to 6%4=2 ➔ Line 2
➢ 4 goes to 4%4=0 ➔ Line 0
➢ 8 goes to 8%4=0 ➔ Line 0, but block 4 is already there. Block 8 conflicts with block 4, so block 4 in line 0 is replaced by block 8.
➢ 9 goes to 9%4=1 ➔ Line 1, but block 5 is already there. Block 9 conflicts with block 5, so block 5 in line 1 is replaced by block 9.
➢ 12 goes to 12%4=0 ➔ Line 0, but block 8 is already there. Block 12 conflicts with block 8, so block 8 in line 0 is replaced by block 12.
➢ 15 goes to 15%4=3 ➔ Line 3
➢ 20 goes to 20%4=0 ➔ Line 0, but block 12 is already there, so block 12 is replaced by block 20.
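The placement walk-through above can be simulated in a few lines (a sketch; the function name is illustrative). Each arriving block simply overwrites whatever its target line holds:

```python
def simulate(blocks, num_lines=4):
    """Place each block in line (block % num_lines), replacing any occupant."""
    lines = [None] * num_lines
    for b in blocks:
        lines[b % num_lines] = b
    return lines

# After the sequence 5, 6, 4, 8, 9, 12, 15, 20:
# line 0 ends with block 20 (4 -> 8 -> 12 -> 20), line 1 with block 9,
# line 2 with block 6, and line 3 with block 15.
```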
Direct Mapping
• If each block has only one place it can appear in the cache, the cache is said to be direct mapped.

[Figure: main memory blocks 0 to 15 mapped onto a cache of 8 lines (0 to 7).]

Cache line = (MM Block address) mod (Number of lines in a cache)
Example: (12) mod (8) = 4
Drawback of direct mapping
The drawback of direct mapping is the conflict miss.

Consider an example: the cache has 4 lines (line 0 to line 3), and according to direct mapping the following blocks arrive: 4, 8, 16, 16, 20, 12, and 24.

4 goes into 4%4=0 ➔ Line 0
8 goes into 8%4=0 ➔ Line 0
In fact, every block in this sequence maps to line 0, so each one evicts the previous block. Even though there are other lines in the cache, they will never be used, due to the direct mapping restriction. This is the conflict miss issue with direct mapping.

Drawback of direct mapping
The drawback of direct mapping is the conflict miss.

• A conflict miss is different from a capacity miss.
• A capacity miss occurs when the cache simply does not have the capacity to hold all the blocks in use, so some references miss.
• A conflict miss means that even though there is plenty of other space, that space cannot be used: a block replaces another in its fixed line, and the replaced block is missed later.
• In direct mapping, the misses are due to conflict, not capacity.
