Chapter 7
Cache Memories
Memory Challenges
Ideally one desires a huge amount of very fast memory at little cost, but the available technologies force a trade-off:
Comparing:

Technology   Access Time    Cost/GB
SRAM         0.5 - 5 ns     $4,000 - $10,000
DRAM         50 - 70 ns     $100 - $200
Disk         5 - 20 ms      $0.50 - $2
Recall: we used a 200 ps (0.2 ns) clock cycle in our pipeline study. Why the difference?
Cache Memory
What is a cache? How is it organized?
[Figure: the memory hierarchy. With increasing distance from the processor, access time grows: L1$ (blocks of 8-32 bytes), L2$ (1 to 4 blocks), Main Memory, Secondary Memory. Inclusion holds at each level: what is in L1$ is a subset of what is in L2$, which is a subset of what is in MM, which in turn is a subset of what is in SM.]
[Figure: two adjacent levels of the hierarchy. Blk X sits in the upper level, close to the processor; Blk Y sits in the lower level memory and is brought up on demand.]
Hit Rate: the fraction of memory accesses found in the upper level
The cache sits between the processor's registers and main memory.
Direct-mapped cache: for each item of data at the lower level, there is exactly one location in the cache where it might be, so many items at the lower level must share one location in the upper level.
Address mapping:
(block address) modulo (# of blocks in the cache)
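As a quick sanity check on the mapping, a few lines of Python (the 4-block cache size matches the 4-entry example that follows):

```python
# Direct-mapped placement: each memory block maps to exactly one
# cache slot, so many blocks compete for the same slot.
NUM_BLOCKS = 4  # cache size chosen to match the 4-entry example

def cache_index(block_address, num_blocks=NUM_BLOCKS):
    """(block address) modulo (# of blocks in the cache)"""
    return block_address % num_blocks

# Memory blocks 1, 5, 9, 13 all land in cache slot 1:
print([cache_index(b) for b in (1, 5, 9, 13)])
```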
[Figure: a direct-mapped cache with four one-word entries (indices 00, 01, 10, 11; each with valid bit, tag, and data) and a 16-word main memory with block addresses 0000xx through 1111xx. The two low-order bits define the byte in the word (32b words).]

Q1: Is it there? Compare the cache tag to the high-order 2 memory address bits to tell if the memory block is in the cache.

Q2: How do we find it? Use the next 2 low-order memory address bits, the index, to determine which cache block (i.e., the block address modulo the number of blocks in the cache).
Word address reference string: 0 1 2 3 4 3 4 15 (direct-mapped cache, four one-word blocks)

Access  Index  Result  Cache contents afterwards (slots 00 01 10 11)
 0      00     miss    Mem(0)  -       -       -
 1      01     miss    Mem(0)  Mem(1)  -       -
 2      10     miss    Mem(0)  Mem(1)  Mem(2)  -
 3      11     miss    Mem(0)  Mem(1)  Mem(2)  Mem(3)
 4      00     miss    Mem(4)  Mem(1)  Mem(2)  Mem(3)   (4 evicts 0)
 3      11     hit     Mem(4)  Mem(1)  Mem(2)  Mem(3)
 4      00     hit     Mem(4)  Mem(1)  Mem(2)  Mem(3)
15      11     miss    Mem(4)  Mem(1)  Mem(2)  Mem(15)  (15 evicts 3)

8 requests, 6 misses
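The bookkeeping in this example is easy to mechanize. A minimal direct-mapped simulator (one word per block, four blocks, matching the cache in the example) reproduces the 6 misses:

```python
def simulate(trace, num_blocks=4):
    """Direct-mapped cache, one word per block: classify each access."""
    cache = [None] * num_blocks       # block address currently held per slot
    results = []
    for addr in trace:
        slot = addr % num_blocks
        if cache[slot] == addr:
            results.append("hit")
        else:
            results.append("miss")
            cache[slot] = addr        # fetch the block, evicting the old one
    return results

r = simulate([0, 1, 2, 3, 4, 3, 4, 15])
print(r.count("miss"), "misses")      # 6 misses, matching the example
```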
[Figure: a direct-mapped cache with 1024 one-word blocks. Address bits 1:0 are the byte offset, bits 11:2 the 10-bit index (selecting one of entries 0 through 1023), and bits 31:12 the 20-bit tag. Each entry holds a valid bit, a 20-bit tag, and a 32-bit data word; Hit is asserted when the stored tag matches the address tag and the valid bit is set.]
Handling writes: strategies

Write-back: allow cache and memory to be inconsistent.
- Write the data only into the cache block (write back the cache contents to the next level in the memory hierarchy when that cache block is evicted).
- Need a dirty bit for each data cache block to tell if it needs to be written back to memory when it is evicted.

Write-through: write the data into both the cache and the next level of the memory hierarchy, keeping them consistent.

The write-hit policy (write-through or write-back) combines with a write-miss policy (write-allocate or no-write-allocate).
[Figure: a write buffer sits between the processor/cache and DRAM.]

Processor: writes data into the cache and the write buffer.
This works fine if the store frequency (w.r.t. time) << 1 / DRAM write cycle time.
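The write-back policy and its dirty bit can be sketched in a few lines of Python (a simplified model at whole-block granularity; the class and counter names are made up for illustration):

```python
# Simplified write-back cache model with a per-block dirty bit
# (whole-block granularity; names are hypothetical).
class WriteBackCache:
    def __init__(self, num_blocks=4):
        self.block = [None] * num_blocks   # block address held in each slot
        self.dirty = [False] * num_blocks
        self.writebacks = 0                # dirty blocks written back on eviction

    def access(self, block_addr, is_write):
        slot = block_addr % len(self.block)
        if self.block[slot] != block_addr:  # miss: must (re)fill the slot
            if self.dirty[slot]:
                self.writebacks += 1        # evicted dirty block goes to memory
            self.block[slot] = block_addr
            self.dirty[slot] = False
        if is_write:
            self.dirty[slot] = True         # write only into the cache

cache = WriteBackCache()
cache.access(0, is_write=True)    # miss, then the write dirties block 0
cache.access(4, is_write=False)   # evicts dirty block 0: one write-back
print(cache.writebacks)           # 1
```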
Word address reference string: 0 4 0 4 0 4 0 4 (same four-block cache)

Access  Index  Result  Slot 00 afterwards
0       00     miss    Mem(0)
4       00     miss    Mem(4)   (4 evicts 0)
0       00     miss    Mem(0)   (0 evicts 4)
4       00     miss    Mem(4)
0       00     miss    Mem(0)
4       00     miss    Mem(4)
0       00     miss    Mem(0)
4       00     miss    Mem(4)

8 requests, 8 misses: blocks 0 and 4 ping-pong in the same cache slot.
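The same ping-pong behavior falls out of a minimal direct-mapped simulator (one-word blocks, four slots, matching the example):

```python
def count_misses(trace, num_blocks=4):
    """Direct-mapped, one word per block: count misses in a word-address trace."""
    cache = [None] * num_blocks       # block address held in each slot
    misses = 0
    for addr in trace:
        slot = addr % num_blocks
        if cache[slot] != addr:
            misses += 1
            cache[slot] = addr
    return misses

# Blocks 0 and 4 share slot 0, so every access evicts the block
# that the next access needs:
print(count_misses([0, 4] * 4))   # 8
```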
Sources of cache misses:
Compulsory (cold start): the first access to a block; a cold fact of life, not a whole lot you can do about it.
Conflict (collision): multiple memory blocks map to the same cache location and evict one another.
Capacity: the cache cannot contain all the blocks accessed by the program.
What about the relationship between cache size and block size?
Handling a read miss: stall the entire pipeline, fetch the block from the next level in the memory hierarchy, install it in the cache, and send the requested word to the processor; then let the pipeline resume.

Handling a write miss, three options:
1. Stall the pipeline, fetch the block from the next level in the memory hierarchy, install it in the cache (which may involve evicting a dirty block if using a write-back cache), write the word from the processor to the cache, then let the pipeline resume (normally used in write-back caches).
2. Write allocate: just write the word into the cache, updating both the tag and the data; no need to check for a cache hit, no need to stall (normally used in write-through caches with a write buffer).
3. No-write allocate: skip the cache write and just write the word to the write buffer (and eventually to the next memory level); no need to stall if the write buffer isn't full; must invalidate the cache block since it would otherwise hold stale data.
[Figure: a direct-mapped cache with 4-word blocks. Address bits 1:0 are the byte offset, bits 3:2 the 2-bit block offset, the next bits the index, and the upper 20 bits the tag; the block offset drives a multiplexer that selects the requested 32-bit word out of the block. (With a 20-bit tag in a 32-bit address, the index is 8 bits, i.e. 256 entries: 20 + 8 + 2 + 2 = 32.)]
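Extracting the address fields is simple bit manipulation. The 8-bit index width below is an assumption consistent with a 20-bit tag in a 32-bit address:

```python
# Field extraction for a 4-word-block cache (assumed layout: 2-bit byte
# offset, 2-bit block offset, 8-bit index, 20-bit tag; 2+2+8+20 = 32).
def decode(addr):
    byte_offset  = addr & 0x3
    block_offset = (addr >> 2) & 0x3
    index        = (addr >> 4) & 0xFF
    tag          = addr >> 12
    return tag, index, block_offset, byte_offset

tag, index, block_offset, byte_offset = decode(0x12345678)
print(hex(tag), index, block_offset, byte_offset)
```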
Word address reference string: 0 1 2 3 4 3 4 15 (direct-mapped cache, two 2-word blocks)

Access  Block addr  Index  Result  Cache contents afterwards (slot 0 | slot 1)
 0      0           0      miss    Mem(1) Mem(0) | -
 1      0           0      hit     Mem(1) Mem(0) | -
 2      1           1      miss    Mem(1) Mem(0) | Mem(3) Mem(2)
 3      1           1      hit     Mem(1) Mem(0) | Mem(3) Mem(2)
 4      2           0      miss    Mem(5) Mem(4) | Mem(3) Mem(2)
 3      1           1      hit     Mem(5) Mem(4) | Mem(3) Mem(2)
 4      2           0      hit     Mem(5) Mem(4) | Mem(3) Mem(2)
15      7           1      miss    Mem(5) Mem(4) | Mem(15) Mem(14)

8 requests, 4 misses: the larger block exploits spatial locality.
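A minimal simulator with two 2-word blocks (matching the example) reproduces the 4 misses:

```python
def count_misses_multiword(trace, num_blocks=2, words_per_block=2):
    """Direct-mapped cache with multiword blocks; whole block fetched on a miss."""
    cache = [None] * num_blocks        # block address held in each slot
    misses = 0
    for word_addr in trace:
        block = word_addr // words_per_block
        slot = block % num_blocks
        if cache[slot] != block:
            misses += 1
            cache[slot] = block        # neighboring words arrive for free
    return misses

print(count_misses_multiword([0, 1, 2, 3, 4, 3, 4, 15]))   # 4
```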
Block size trade-offs:
- If the block size is too big relative to the cache size, the miss rate will go up.
- A larger block size also means a larger miss penalty.
[Figure: qualitative curves versus Block Size. Miss penalty increases with block size; miss rate eventually increases as well, because fewer blocks compromises temporal locality; the average access time therefore reaches a minimum before the increased miss penalty and miss rate take over.]
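The sweet spot on the average-access-time curve is usually captured as AMAT = hit time + miss rate × miss penalty. The numbers below are hypothetical, chosen only to show the shape:

```python
# AMAT = hit time + miss rate * miss penalty.
# The (block size, miss rate, miss penalty) triples are made up to
# illustrate the minimum in the curve, not measured values.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

configs = {1: (0.12, 20), 4: (0.05, 35), 16: (0.08, 95)}
for words, (mr, mp) in configs.items():
    print(f"{words}-word blocks: AMAT = {amat(1, mr, mp):.2f} cycles")
```

With these assumed numbers, the 4-word block is the sweet spot: smaller blocks miss too often, larger ones pay too much penalty per miss.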
Multiword blocks and misses:
- Reads are processed the same as for single-word blocks: a miss returns the entire block from memory.
- Writes can't simply use write allocate (just writing the word), or we end up with a garbled block in the cache (e.g., for 4-word blocks: a new tag, one word of data from the new block, and three words of data from the old block). Instead, we must fetch the block from memory first and pay the stall time.
Cache Summary

Assuming cache hit costs are included as part of the normal CPU execution cycle, then

CPU time = IC × CPI × CC
         = IC × (CPI_ideal + Memory-stall cycles) × CC

where CPI_stall = CPI_ideal + Memory-stall cycles.

Memory-stall cycles come from cache misses (a sum of read-stalls and write-stalls):

Read-stall cycles  = reads/program × read miss rate × read miss penalty
Write-stall cycles = (writes/program × write miss rate × write miss penalty) + write buffer stalls

The lower the CPI_ideal, the more pronounced the relative impact of stalls.
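Plugging hypothetical numbers into these formulas shows how quickly memory stalls can dominate. The miss rates, penalty, and instruction mix below are assumptions; for brevity, read and write misses are folded into a single data miss rate, and instruction fetches are counted as reads:

```python
# Hypothetical inputs to the memory-stall formulas (not measured values).
CPI_ideal = 1.0
miss_penalty = 100          # cycles to reach the next memory level
inst_miss_rate = 0.02       # instruction-fetch miss rate
data_miss_rate = 0.04       # combined load/store miss rate
mem_frac = 0.36             # loads + stores per instruction

# Memory-stall cycles per instruction: fetch stalls + data-access stalls
stalls = inst_miss_rate * miss_penalty + mem_frac * data_miss_rate * miss_penalty
CPI_stall = CPI_ideal + stalls
print(f"CPI_stall = {CPI_stall:.2f}")
```

With these numbers the processor spends far more cycles stalled (3.44 per instruction) than executing (1.0), and a lower CPI_ideal would only make the stall fraction larger.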