BCS402 - Module 5 edited.pptx (1)

The document discusses cache memory, which serves as a high-speed buffer between the CPU and main memory to enhance data access efficiency. It explains the relationship between cache and main memory, the architecture of cache controllers, and concepts like thrashing and set associativity in cache design. Additionally, it covers logical and physical cache distinctions, cache memory management, and replacement policies for maintaining data consistency.

Microcontroller

MODULE 5
CACHE MEMORY
(BCS402)
The Memory Hierarchy and Cache Memory

Cache
● Cache means “a concealed place for storage.”
● A cache is a small, fast array of memory placed between the processor
core and main memory that stores copies of recently accessed main
memory data.
● Often used with a cache is a write buffer—a very small first-in-first-out
(FIFO) memory placed between the processor core and main memory.
● The purpose of a write buffer is to free the processor core and cache
memory from the slow write time associated with writing to main
memory.

● Cache memory is usually built from SRAM (Static Random Access
Memory), which is faster than the DRAM used for main memory.
● Cache memory holds recently and frequently accessed data and
instructions.
Relationship between Cache, Processor, and Main Memory
The cache, processor, and main memory work together to ensure that the CPU
has fast access to the data and instructions it needs to execute programs
efficiently, with the cache serving as a high-speed buffer between the CPU and
the slower main memory.
➔ Cache Memory: The CPU cache is a small amount of high-speed memory
located directly on the CPU chip.
➔ Main Memory (RAM): Main memory, often referred to as RAM (Random
Access Memory), is the primary storage area in a computer where data is
stored that is actively being used or manipulated by the CPU.
➔ Cache Hierarchy: Modern computer architectures typically have multiple
levels of cache (L1, L2, L3), with size and latency increasing as you move
farther away from the CPU cores.
➔ Memory Hierarchy: The relationship between cache and main memory is part
of the broader memory hierarchy, which also includes secondary storage
devices like hard drives.
Logical and Physical Cache
Physical cache refers to the actual hardware implementation of cache memory, including its
components and physical addressing, while logical cache refers to the abstract representation of
cache memory (its size, organization, and operational policies) as perceived by software and
system designers.

Physical Cache:

● Structure: Physical cache consists of actual memory cells (SRAM cells) organized into cache lines
or blocks.
● Hardware Components: It includes cache controllers, memory arrays, tag arrays, multiplexers, and
other circuitry necessary for cache operations.
● Physical Addressing: Physical cache uses physical memory addresses to access data and store
metadata like tags and valid bits.

Logical Cache:

● Conceptual Representation: Logical cache represents the cache as seen by software or system
designers.
● Properties: It defines cache size, associativity (how many blocks can map to the same set), line size
(block size), replacement policy (e.g., LRU - Least Recently Used), and write policy (e.g.,
write-through or write-back).
● Virtual Addressing: Logical cache typically interacts with virtual memory addresses and may
translate them to physical addresses for cache operations.
Basic Architecture of Cache Controller
Interface with CPU and Memory:
● The cache controller interfaces directly with the CPU and main memory (RAM) to facilitate data
transfer between them.
Cache Memory Management:
● The cache controller manages the allocation and deallocation of cache lines within the cache
memory.
● It decides which cache lines to store data in, based on memory access patterns and cache
replacement policies.
● The controller also ensures coherence between the cache and main memory, ensuring that data
remains consistent across both.
Address Translation:
● If the cache is indexed or tagged with virtual addresses, the controller relies on address
translation (typically via a TLB) to obtain the corresponding physical addresses.
Cache Access and Control:
● When the CPU requests data from memory, the cache controller determines whether the data is present in
the cache (cache hit) or needs to be fetched from main memory (cache miss).
Cache Coherency:
● In multiprocessor systems, the cache controller ensures cache coherency by maintaining consistency
between multiple caches sharing the same memory.
● It implements coherence protocols (e.g., MESI: Modified, Exclusive, Shared, Invalid) to manage data
sharing and synchronization between caches.
Control Logic:
● The controller’s control logic sequences the cache operations: tag comparison, line fills,
evictions, and write-backs.
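The hit/miss decision described above can be sketched as a minimal direct-mapped lookup. The sizes below (16-byte lines, 256 lines) are illustrative assumptions, and real controllers do this in hardware with valid bits and tag comparators rather than Python dictionaries:

```python
LINE_SIZE = 16          # bytes per cache line (assumed)
NUM_LINES = 256         # lines in a direct-mapped cache (assumed)

# Each line holds (valid, tag); the cached data itself is omitted for brevity.
cache = [{"valid": False, "tag": None} for _ in range(NUM_LINES)]

def access(addr):
    """Return 'hit' or 'miss' for a byte address, filling the line on a miss."""
    index = (addr // LINE_SIZE) % NUM_LINES   # which line the address maps to
    tag = addr // (LINE_SIZE * NUM_LINES)     # remaining high-order bits
    line = cache[index]
    if line["valid"] and line["tag"] == tag:
        return "hit"
    line["valid"], line["tag"] = True, tag    # simulate a fetch from main memory
    return "miss"

print(access(0x1234))   # miss (cold cache)
print(access(0x1234))   # hit  (same line, same tag)
```

A second address that maps to the same line but carries a different tag would evict the first, which is the conflict-miss behavior the controller must manage.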

Thrashing
Thrashing is a phenomenon in computer systems where excessive paging or
swapping of data occurs, leading to a significant decrease in system
performance. It happens when the system spends more time swapping data
between main memory (RAM) and secondary storage (disk) than executing
useful work. Common causes and symptoms include:
● High Demand for Memory
● Insufficient Memory
● Frequent Page Faults
● Constant Swapping
● Decreased Performance
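The effect of insufficient memory can be made concrete with a small page-replacement simulation. This is a sketch using FIFO replacement and an assumed cyclic reference pattern; the point is only that once the working set exceeds the available frames, almost every reference faults:

```python
from collections import deque

def count_faults(refs, num_frames):
    """Simulate FIFO page replacement and count page faults."""
    frames = deque()               # pages currently resident in memory
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest resident page
            frames.append(page)
    return faults

# A cyclic reference pattern over a working set of 4 pages, repeated 10 times.
refs = [0, 1, 2, 3] * 10

# With 4 frames the working set fits: only the 4 cold-start faults occur.
print(count_faults(refs, 4))  # 4
# With 3 frames every single reference faults -- the thrashing pattern.
print(count_faults(refs, 3))  # 40
```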
Set Associativity
Set Associativity:Set associativity is a concept used in cache memory
design, particularly in the context of cache mapping techniques

In a set-associative cache:

● Cache Sets: The cache lines are grouped into sets.
● Associativity: The number of lines (ways) per set; a memory block mapped to a set may
occupy any of its ways.
● Mapping: Memory addresses are hashed or indexed to determine which set they belong to.
● Set associativity is often denoted as N-way set-associative, where N represents the
associativity level.
● Direct-Mapped Cache (1-way set-associative): Each memory block maps to exactly one cache
block. There is only one cache block per set.
● 2-way Set-Associative Cache: Each set contains two cache blocks. A memory block can map
to one of the two cache blocks within the set.
● 4-way Set-Associative Cache: Each set contains four cache blocks. A memory block can map
to one of the four cache blocks within the set.
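The three associativity levels above differ only in how many sets the same number of lines is divided into. A short sketch of the set-selection arithmetic (the 8-line cache size is an assumption for illustration):

```python
def set_index(block_number, num_lines, ways):
    """Which set a memory block maps to in an N-way set-associative cache."""
    num_sets = num_lines // ways       # lines are grouped into sets of `ways`
    return block_number % num_sets     # index = block number modulo set count

# A cache with 8 lines total, mapping memory block 13:
print(set_index(13, 8, 1))  # direct-mapped (1-way): 8 sets -> set 5
print(set_index(13, 8, 2))  # 2-way: 4 sets -> set 1
print(set_index(13, 8, 4))  # 4-way: 2 sets -> set 1
```

Note how raising the associativity shrinks the number of sets, so more memory blocks share each set but each set offers more candidate lines.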
Main Memory Mapping to a Set-Associative Cache
Address Partitioning:
● A memory address is divided into multiple fields: tag bits, index bits, and
offset bits.
● Tag bits uniquely identify the memory block.
Set Selection:
● The index bits are used to select the cache set for storing or retrieving data.
● The number of index bits determines the number of sets in the cache. For
example, if there are S sets in the cache, then log2(S) bits are used for
indexing.
Tag Comparison:
● Once the set is selected based on the index bits, the tag bits of the memory
address are compared with the tags stored in the selected set.
Replacement Policy:
● If the set is full and a cache block needs to be replaced, a replacement policy
(e.g., Least Recently Used - LRU) determines which cache block to evict to
make room for the new data.
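The replacement step can be sketched for a single set with LRU, the example policy named above. The tags and the 2-way set size are assumed for illustration:

```python
def lru_set_access(trace, ways):
    """Simulate one cache set with LRU replacement; return hit/miss per tag."""
    lines = []                       # resident tags, least recently used first
    results = []
    for tag in trace:
        if tag in lines:
            lines.remove(tag)        # hit: tag will be re-appended as MRU
            results.append("hit")
        else:
            if len(lines) == ways:
                lines.pop(0)         # set full: evict the LRU tag
            results.append("miss")
        lines.append(tag)            # the accessed tag is now most recent
    return results

# Three distinct tags competing for a 2-way set:
print(lru_set_access(["A", "B", "A", "C", "B"], 2))
# ['miss', 'miss', 'hit', 'miss', 'miss']
```

The final miss on "B" happens because inserting "C" evicted "B" (the least recently used tag at that point), illustrating why the policy choice matters when a set is oversubscribed.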
4 KB cache consisting of 256 cache lines of four 32-bit words
Main memory mapping to a direct mapped cache
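For the 4 KB cache described above (256 lines of four 32-bit words, i.e. 16 bytes per line), the address partitioning works out as follows, assuming a 32-bit address:

```python
import math

CACHE_SIZE = 4 * 1024                  # 4 KB
LINE_SIZE = 4 * 4                      # four 32-bit words = 16 bytes
NUM_LINES = CACHE_SIZE // LINE_SIZE    # 256 lines, direct-mapped

offset_bits = int(math.log2(LINE_SIZE))   # 4 bits select a byte within the line
index_bits = int(math.log2(NUM_LINES))    # 8 bits select the cache line
tag_bits = 32 - index_bits - offset_bits  # remaining 20 bits identify the block

print(offset_bits, index_bits, tag_bits)  # 4 8 20
```

So a 32-bit address splits into a 20-bit tag, an 8-bit index, and a 4-bit offset for this configuration.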
MEMORY HIERARCHY
