Lecture 15: Cache Introduction
Trick or treat?
Finish up performance
Start looking into memory hierarchy - Caches!
Improving CPI
Many processor design techniques we’ll see improve CPI
— Often they only improve CPI for certain types of instructions
CPI = \sum_{i=1}^{n} CPI_i \cdot F_i

where F_i = I_i / Instruction Count = fraction of instructions of type i
Example: CPI improvements
Base Machine:
Amdahl’s Law
Amdahl’s Law states that optimizations are limited in their effectiveness.
Execution time after improvement =
    (Time affected by improvement / Amount of improvement) + Time unaffected by improvement

For example, if floating-point operations account for 0.10 T of the execution time and we make them twice as fast:

Execution time after improvement = 0.10 T / 2 + 0.90 T = 0.95 T
What is the maximum speedup from improving floating point?
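Following on from the example: even an infinitely fast floating-point unit cannot shrink the 0.90 T that is unaffected, so

    maximum speedup = T / 0.90 T ≈ 1.11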
It can be hard to measure these factors in real life, but this is a useful
guide for comparing systems and designs.
Amdahl’s Law tells us how much improvement we can expect from
specific enhancements.
The best benchmarks are real programs, which are more likely to reflect
common instruction mixes.
How will execution time grow with SIZE?
int array[SIZE];
int A = 0;

/* Sweep over the array repeatedly and time it; the outer repetition
   count here is illustrative, since it is not shown on the slide. */
for (int i = 0; i < 200000; i++) {
    for (int j = 0; j < SIZE; j++) {
        A += array[j];
    }
}

[Plot: execution TIME as a function of SIZE]
Actual Data
[Plot "Series1": measured time versus SIZE, for SIZE from 0 to 10,000; time axis from 0 to 45]
Memory Systems and I/O
We’ve already seen how to make a fast processor. How can we supply the
CPU with enough data to keep it busy?
Part of CS378 focuses on memory and input/output issues, which are
frequently bottlenecks that limit the performance of a system.
We’ll start off by looking at memory systems and then turn to I/O.
— How caches can dramatically improve the speed of memory accesses.
— How virtual memory provides security and ease of programming.
— How processors, memory and peripheral devices can be connected.
[Diagram: Processor, Memory and Input/Output connected together]
Cache introduction
We’ll answer the following questions.
— What are the challenges of building big, fast memory systems?
— What is a cache?
— Why do caches work? (answer: locality)
— How are caches organized?
• Where do we put things -and- how do we find them?
Large and fast
Today’s computers depend upon large and fast storage systems.
— Large storage capacities are needed for many database applications,
scientific computations with large data sets, video and music, and so
forth.
— Speed is important to keep up with our pipelined CPUs, which may
access both an instruction and data in the same clock cycle. Things
become even worse if we move to a superscalar CPU design.
So far we’ve assumed our memories can keep up and our CPU can access
memory twice in one cycle, but as we’ll see that’s a simplification.
Small or slow
Unfortunately there is a tradeoff between speed, cost and capacity.
Fast memory is too expensive for most people to buy a lot of.
But dynamic memory has a much longer delay than other functional units
in a datapath. If every lw or sw accessed dynamic memory, we’d have to
either increase the cycle time or stall frequently.
Here are rough estimates of some current storage parameters.
Introducing caches
Wouldn’t it be nice if we could find a balance between fast and cheap memory?
We do this by introducing a cache, which is a small amount of fast, expensive memory.
— The cache goes between the processor and the slower, dynamic main memory.
— It keeps a copy of the most frequently used data from the main memory.
Memory access speed increases overall, because we’ve made the common case faster.
— Reads and writes to the most frequently used addresses will be serviced by the cache.
— We only need to access the slower main memory for less frequently used data.
[Diagram: CPU, backed by a little static RAM (the cache), backed by lots of dynamic RAM]
The principle of locality
It’s usually difficult or impossible to figure out what data will be “most
frequently accessed” before a program actually runs, which makes it
hard to know what to store into the small, precious cache memory.
But in practice, most programs exhibit locality, which the cache can take
advantage of.
— The principle of temporal locality says that if a program accesses one
memory address, there is a good chance that it will access the same
address again.
— The principle of spatial locality says that if a program accesses one
memory address, there is a good chance that it will also access other
nearby addresses.
Temporal locality in programs
The principle of temporal locality says that if a program accesses one
memory address, there is a good chance that it will access the same
address again.
Loops are excellent examples of temporal locality in programs.
— The loop body will be executed many times.
— The computer will need to access those same few locations of the
instruction memory repeatedly.
For example:
— Each instruction will be fetched over and over again, once on every
loop iteration.
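As a sketch (the slide's own example code is not reproduced here), the handful of machine instructions generated for a loop like the one below are fetched from instruction memory again on every one of its MAX iterations:

    for (i = 0; i < MAX; i++)
        sum = sum + f(i);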
Temporal locality in data
Programs often access the same variables over and over, especially
within loops. Below, sum and i are repeatedly read and written.
sum = 0;
for (i = 0; i < MAX; i++)
sum = sum + f(i);
Spatial locality in programs
The principle of spatial locality says that if a program accesses one
memory address, there is a good chance that it will also access other
nearby addresses.
Sequential instruction execution is the clearest example: after fetching one
instruction, the processor almost always fetches the instruction stored
immediately after it.
Spatial locality in data
Programs often access data that is stored contiguously.
— Arrays, like a in the code below, are stored in memory contiguously.

    sum = 0;
    for (i = 0; i < MAX; i++)
        sum = sum + a[i];

— The individual fields of a record or object like employee are also kept contiguously in memory.

    employee.name = “Homer Simpson”;
    employee.boss = “Mr. Burns”;
    employee.age = 45;

Can data have both spatial and temporal locality?
How caches take advantage of temporal locality
The first time the processor reads from an address in main memory, a copy of that data is also stored in the cache.
— The next time that same address is read, we can use the copy of the data in the cache instead of accessing the slower dynamic memory.
— So the first read is a little slower than before since it goes through both main memory and the cache, but subsequent reads are much faster.
This takes advantage of temporal locality—commonly accessed data is stored in the faster cache memory.
[Diagram: CPU, a little static RAM (cache), lots of dynamic RAM]
How caches take advantage of spatial locality
When the CPU reads location i from main memory, a copy of that data is placed in the cache.
But instead of just copying the contents of location i, we can copy several values into the cache at once, such as the four bytes from locations i through i + 3.
— If the CPU later does need to read from locations i + 1, i + 2 or i + 3, it can access that data from the cache and not the slower main memory.
— For example, instead of reading just one array element at a time, the cache might actually be loading four array elements at once.
Again, the initial load incurs a performance penalty, but we’re gambling on spatial locality and the chance that the CPU will need the extra data.
[Diagram: CPU, a little static RAM (cache), lots of dynamic RAM]
Other kinds of caches
The idea of caching is not specific to architecture.
— Caches are used in many other situations.
Definitions: Hits and misses
A cache hit occurs if the cache contains the data that we’re looking for.
Hits are good, because the cache can return the data much faster than
main memory.
A cache miss occurs if the cache does not contain the requested data.
This is bad, since the CPU must then wait for the slower main memory.
There are two basic measurements of cache performance.
— The hit rate is the percentage of memory accesses that are handled
by the cache.
— The miss rate (1 - hit rate) is the percentage of accesses that must be
handled by the slower main RAM.
Typical caches have a hit rate of 95% or higher, so in fact most memory
accesses will be handled by the cache and will be dramatically faster.
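As a rough, back-of-the-envelope illustration (the latencies are assumed for this example, not taken from the slides): with a 95% hit rate, a 1 ns cache and a 100 ns main memory, the average access time is about

    0.95 × 1 ns + 0.05 × (1 ns + 100 ns) = 6 ns

which is much closer to the cache's speed than to main memory's.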
In future lectures, we’ll talk more about cache performance.
A simple cache design
Caches are divided into blocks, which may be of various sizes.
— The number of blocks in a cache is usually a power of 2.
— For now we’ll say that each block contains one byte. This won’t take
advantage of spatial locality, but we’ll do that next time.
Here is an example cache with eight blocks, each holding one byte.
Block index    8-bit data
000
001
010
011
100
101
110
111
Four important questions
Questions 1 and 2 are related—we have to know where the data is placed
if we ever hope to find it again later!
Where should we put data in the cache?
A direct-mapped cache is the simplest approach: each main memory
address maps to exactly one cache block.
For example, below is a 16-byte main memory and a 4-byte cache (four 1-byte blocks).
Memory locations 0, 4, 8 and 12 all map to cache block 0.
Addresses 1, 5, 9 and 13 map to cache block 1, etc.
How can we compute this mapping?
[Diagram: memory addresses 0–15 mapping onto cache block indices 0–3]
It’s all divisions…
One way to figure out which cache block a particular memory address
should go to is to use the mod (remainder) operator.
If the cache contains 2^k blocks, then the data at memory address i would go to cache block index

    i mod 2^k

For instance, with the four-block cache here, address 14 would map to cache block 2.

    14 mod 4 = 2

[Diagram: memory addresses 0–15 and the 4-block cache, with address 14 mapping to block 2]
…or least-significant bits
An equivalent way to find the placement of a memory address in the
cache is to look at the least significant k bits of the address.
With our four-byte cache we would inspect the two least significant bits of our memory addresses.
Again, you can see that address 14 (1110 in binary) maps to cache block 2 (10 in binary).
Taking the least significant k bits of a binary value is the same as computing that value mod 2^k.
[Diagram: 4-bit memory addresses 0000–1111 mapping onto 2-bit cache indices 00–11]
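A minimal C sketch of this equivalence, using the four-block example above:

    #include <stdio.h>

    int main(void) {
        unsigned k = 2;                                /* 2^2 = 4 cache blocks  */
        unsigned address = 14;                         /* 1110 in binary        */

        unsigned by_mod  = address % (1u << k);        /* 14 mod 4              */
        unsigned by_mask = address & ((1u << k) - 1);  /* low 2 bits of 1110    */

        printf("%u %u\n", by_mod, by_mask);            /* prints "2 2"          */
        return 0;
    }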
How can we find data in the cache?
The second question was how to determine whether or not the data
we’re interested in is already stored in the cache.
If we want to read memory address i, we can use the mod trick to determine which cache block would contain i.
But other addresses might also map to the same cache block. How can we distinguish between them?
For instance, cache block 2 could contain data from addresses 2, 6, 10 or 14.
[Diagram: memory addresses 0–15 and the 4-block cache; addresses 2, 6, 10 and 14 all map to block 2]
Adding tags
We need to add tags to the cache, which supply the rest of the address
bits to let us distinguish between different memory locations that map to
the same cache block.
Index   Tag   Data
00      00
01      ??
10      01
11      01

[Diagram: 4-bit memory addresses 0000–1111 mapping into the tagged cache blocks]
Figuring out what’s in the cache
Now we can tell exactly which addresses of main memory are stored in
the cache, by concatenating the cache block tags with the block indices.
Index   Tag   Data   Main memory address in cache block
00      00           00 + 00 = 0000
01      11           11 + 01 = 1101
10      01           01 + 10 = 0110
11      01           01 + 11 = 0111
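In code form, reconstructing the address is just a shift and an OR (a small sketch using the 2-bit tags and indices above):

    #include <stdio.h>

    int main(void) {
        unsigned k = 2;                          /* 2-bit block index   */
        unsigned tag = 0x3, index = 0x1;         /* binary 11 and 01    */

        unsigned address = (tag << k) | index;   /* 1101 in binary      */
        printf("%u\n", address);                 /* prints 13           */
        return 0;
    }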
One more detail: the valid bit
When the system first starts, the cache is empty and does not contain valid data.
We should account for this by adding a valid bit for each cache block.
— When the system is initialized, all the valid bits are set to 0.
— When data is loaded into a particular cache block, the corresponding
valid bit is set to 1.
So the cache contains more than just copies of the data in memory; it
also has bits to help us find data within the cache and verify its validity.
What happens on a cache hit
When the CPU tries to read from memory, the address will be sent to a
cache controller.
— The lowest k bits of the address will index a block in the cache.
— If the block is valid and the tag matches the upper (m - k) bits of the
m-bit address, then that data will be sent to the CPU.
Here is a diagram of a 32-bit memory address and a 2^10-byte cache.
[Diagram: the low 10 bits of the address index the cache; the upper 22 bits are compared (=) against the stored Tag of a valid block to generate the Hit signal]
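A minimal C sketch of that hit check for a direct-mapped cache of 2^10 one-byte blocks; the struct layout and function name here are made up for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define K           10                 /* 2^10 = 1024 one-byte blocks      */
    #define NUM_BLOCKS  (1u << K)

    struct cache_line {
        bool     valid;                    /* does this block hold real data?  */
        uint32_t tag;                      /* upper (32 - K) address bits      */
        uint8_t  data;                     /* one byte per block, for now      */
    };

    static struct cache_line cache[NUM_BLOCKS];

    /* Returns true on a hit and copies the cached byte into *value. */
    bool cache_read(uint32_t address, uint8_t *value)
    {
        uint32_t index = address & (NUM_BLOCKS - 1);   /* lowest K bits        */
        uint32_t tag   = address >> K;                 /* remaining upper bits */

        if (cache[index].valid && cache[index].tag == tag) {
            *value = cache[index].data;                /* cache hit            */
            return true;
        }
        return false;                                  /* cache miss           */
    }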
What happens on a cache miss
The delays that we’ve been assuming for memories (e.g., 2ns) are really
assuming cache hits.
— If our CPU implementations accessed main memory directly, their
cycle times would have to be much larger.
— Instead we assume that most memory accesses will be cache hits,
which allows us to use a shorter cycle time.
However, a much slower main memory access is needed on a cache miss.
The simplest thing to do is to stall the pipeline until the data from main
memory can be fetched (and also copied into the cache).
Loading a block into the cache
After data is read from main memory, putting a copy of that data into
the cache is straightforward.
— The lowest k bits of the address specify a cache block.
— The upper (m - k) address bits are stored in the block’s tag field.
— The data from main memory is stored in the block’s data field.
— The valid bit is set to 1.
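Continuing the hypothetical sketch from the cache-hit slide, the fill on a miss simply overwrites whatever the selected block held before:

    /* On a miss, fetch the byte from slow main memory and install it in the
       direct-mapped block chosen by the low K bits of the address. */
    uint8_t cache_fill(uint32_t address, const uint8_t *main_memory)
    {
        uint32_t index = address & (NUM_BLOCKS - 1);

        cache[index].valid = true;
        cache[index].tag   = address >> K;          /* remember whose data this is */
        cache[index].data  = main_memory[address];  /* the slow access             */
        return cache[index].data;
    }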
What if the cache fills up?
Our third question was what to do if we run out of space in our cache, or
if we need to reuse a block for a different memory address.
We answered this question implicitly on the last page!
— A miss causes a new block to be loaded into the cache, automatically
overwriting any previously stored data.
— This is a least recently used replacement policy, which assumes that
older data is less likely to be requested than newer data.
We’ll see a few other policies next.
Summary