COA Unit-4 Memory
•It is used to store data/information and instructions. Memory is the storage unit where the data
to be processed and the instructions required for processing are kept. Both the input and the output
can be stored here.
•The memory of a computer holds (stores) program instructions (what to do), data (information), operand
(manipulated or operated upon data), and calculations (ALU results).
•Information is fetched, manipulated (under program control) and written (or written back) into memory
for immediate or later use
•The internal memory of a computer is also referred to as main memory, global memory, or main storage.
•The secondary or auxiliary memory (also called mass storage) is provided by various peripheral devices.
Memory hierarchical pyramid
The five levels in a memory hierarchy are categorized based on speed and usage and form a pyramid.
The levels in a memory hierarchical pyramid are the following:
Level 0: CPU registers
Level 1: Cache memory
Level 2: Primary memory or main memory
Level 3: Secondary memory or magnetic disks or solid-state storage
Level 4: Tertiary memory or optical disks or magnetic tapes
Level 0: CPU registers. A CPU register is a small section of storage inside a CPU that holds small
amounts of the data required to perform operations: operands, results to be loaded into main
memory, and the addresses of memory locations.
Registers are present inside the CPU and therefore have the quickest access time. They are also
the smallest in size.
Level 1: cache memory. Cache memory is required to store segments of programs or chunks of
data that are frequently accessed by the processor. When the CPU needs to access program
code or data, it first checks the cache memory. If it finds the data, it reads it quickly. If it doesn't,
it looks into the main memory to find the required data.
Types of RAM
RAM is broadly categorised into two categories:
•SRAM (Static Random Access Memory)
•DRAM (Dynamic Random Access Memory)
Parameter | SRAM | DRAM
Full Form | Static Random Access Memory | Dynamic Random Access Memory
Component | Stores information with the help of transistors | Stores data using capacitors
Need to Refresh | Capacitors are not used, so no refresh is needed | The contents of the capacitors must be refreshed periodically
Speed | Faster data read/write | Slower data read/write
Power Consumption | Consumes more power | Consumes less power
Data Life | Long data life | Short data life
Usage | Used as cache memory in computers and other computing devices | Used as main memory in computer systems
2D Memory organization
ROM:
ROM, which stands for read only memory, is a memory device or storage medium that stores information
permanently. It is also the primary memory unit of a computer along with the random access memory (RAM). It is
called read only memory as we can only read the programs and data stored on it but cannot write on it. It is
restricted to reading words that are permanently stored within the unit.
The manufacturer of ROM fills the programs into the ROM at the time of manufacturing the ROM. After this, the
content of the ROM can't be altered, which means you can't reprogram, rewrite, or erase its content later.
A simple example of ROM is the cartridge used in video game consoles that allows the system to run many
games.
Features of ROM:
• Non-Volatile Memory
• Read-Only Nature
• Permanent Storage
• Booting and Initialization
Types of ROM:
FLASH ROM:
It is an advanced version of EEPROM. Its advantage is that you can erase or write whole blocks of data at a time,
whereas in EEPROM you can erase or write only 1 byte of data at a time, so flash memory is faster than EEPROM. It can
be reprogrammed without removing it from the computer, and its access speed is very high.
Uses: It is used for storage and transferring data between a personal computer and digital devices.
CACHE MEMORY
The data or contents of the main memory that are used frequently by CPU are stored in the cache memory
so that the processor can easily access that data in a shorter time. Whenever the CPU needs to access
memory, it first checks the cache memory. If the data is not found in cache memory, then the CPU moves into
the main memory.
Cache memory is placed in the data path between the CPU and the main memory.
Cache memory is organised as distinct set of blocks where each set contains a small fixed number of blocks.
CACHE PERFORMANCE
•When the CPU needs to read or write a location in main memory (that is, when a process requires some data), it
first checks for a corresponding entry in the cache.
•If the processor finds the memory location in the cache and the data is available there, this is referred to as a
cache hit, and the data is read from the cache.
•If the processor does not find the memory location in the cache, this is referred to as a cache miss. On a
cache miss, the data is read from the main memory; the cache allocates a new entry and copies the data from
main memory, on the assumption that the data will be needed again.
The performance of cache memory is measured by a quantity known as the "hit ratio": the number of hits divided by the total number of memory accesses (hits plus misses).
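As a quick illustration, the hit ratio can be computed directly from the hit and miss counts (the numbers below are invented purely for the example):

```python
# Hit ratio = cache hits / total memory accesses (hits + misses).
hits = 950
misses = 50
hit_ratio = hits / (hits + misses)
print(hit_ratio)  # 0.95
```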
2. Cache Size
•A small cache is faster to address: the larger the cache, the larger the number of gates involved in
addressing the cache. A larger cache, however, can hold more data and so achieves a higher hit ratio, making
cache size a trade-off.
3. Mapping Function
•Cache lines or cache blocks are fixed-size blocks in which data transfer takes place between memory and cache.
When a cache line is copied into the cache from memory, a cache entry is created.
•There are fewer cache lines than memory blocks, which is why we need an algorithm for mapping memory blocks into
cache lines.
•The mapping function is the means of determining which memory block occupies which cache line. Whenever a cache
entry is created, the mapping function determines the cache location the block will occupy.
4. Replacement Algorithm
•If all the cache slots a block may occupy are already full and a new line must be read from
memory, the cache replaces some line that is already present. The replacement algorithm
chooses, whenever a new block is to be loaded into the cache, which existing block to replace.
Ideally, it replaces a block that will not be required in the near future.
•A common replacement policy is the least-recently-used (LRU) algorithm, based on the rule
that a block used most recently is likely to be used again. Other replacement algorithms are
FIFO (First In, First Out) and least-frequently-used (LFU).
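The LRU policy can be sketched as a small simulation. This toy model (not any particular hardware implementation) tracks recency with an ordered dictionary and counts misses for a given access sequence:

```python
from collections import OrderedDict

def simulate_lru(capacity, accesses):
    """Simulate an LRU-replacement cache; return the number of misses."""
    cache = OrderedDict()              # keys = block numbers, order = recency
    misses = 0
    for block in accesses:
        if block in cache:
            cache.move_to_end(block)       # hit: mark as most recently used
        else:
            misses += 1                    # miss: fetch the block
            if len(cache) == capacity:
                cache.popitem(last=False)  # evict the least recently used
            cache[block] = True
    return misses

# A 3-line cache and a made-up access pattern:
print(simulate_lru(3, [1, 2, 3, 1, 4, 2]))  # 5
```

For the six-access pattern shown, only the second access to block 1 hits; block 2 has been evicted by the time it is accessed again, so five misses are counted.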
5. Write Policy
Data can be written to memory using a variety of techniques, but the two main ones involving
cache memory are the following:
• Write-through. Data is written to both the cache and main memory at the same time.
• Write-back. Data is written only to the cache initially. The modified data is written back to
main memory later, typically when the cache line is replaced, so the write does not slow down
the interaction with the cache.
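The contrast between the two policies can be sketched with a toy model (illustrative only; real caches operate on fixed-size lines, not single addresses, and a write-back cache uses a dirty bit per line):

```python
class WriteThroughCache:
    def __init__(self):
        self.cache, self.memory = {}, {}

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory[addr] = value      # memory is updated on every write

class WriteBackCache:
    def __init__(self):
        self.cache, self.memory, self.dirty = {}, {}, set()

    def write(self, addr, value):
        self.cache[addr] = value
        self.dirty.add(addr)           # memory update is deferred

    def evict(self, addr):
        if addr in self.dirty:         # write back only if modified
            self.memory[addr] = self.cache[addr]
            self.dirty.discard(addr)
        self.cache.pop(addr, None)

wt = WriteThroughCache()
wt.write(0, 7)                 # memory already holds 7
wb = WriteBackCache()
wb.write(0, 7)                 # memory still stale until eviction
wb.evict(0)                    # now memory holds 7
```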
CACHE MAPPING
Cache mapping defines how a block from the main memory is mapped to the cache memory in case
of a cache miss.
Main memory is divided into equal size partitions called as blocks or frames.
Cache memory is divided into partitions having same size as that of blocks called as lines.
During cache mapping, the block of main memory is simply copied to the cache; the block is not
actually removed from the main memory.
1. Direct Mapping
2. Associative Mapping (Fully Associative Mapping)
3. Set Associative Mapping (K-way Set Associative Mapping)
1. Direct Mapping-
In direct mapping,
A particular block of main memory can map only to a particular line of the cache.
The line number of cache to which a particular block can map is given by-
Cache line number= ( Main Memory Block Address ) Modulo (Number of lines in Cache)
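The modulo rule can be demonstrated directly; with 8 cache lines (an assumed figure), several different memory blocks land on the same line:

```python
# Cache line number = (main memory block address) mod (number of cache lines)
def direct_mapped_line(block_address, num_cache_lines):
    return block_address % num_cache_lines

# With 8 cache lines, blocks 3, 11 and 19 all contend for line 3:
print(direct_mapped_line(3, 8), direct_mapped_line(11, 8), direct_mapped_line(19, 8))  # 3 3 3
```

This is why direct mapping is cheap but can suffer conflict misses: two frequently used blocks that map to the same line keep evicting each other.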
2. Associative Mapping-
In fully associative mapping, a block of main memory can be placed in any line of the cache. The address is
divided into a tag and a byte offset, and the tag is compared with the tags of all cache lines to find a match.
3. Set Associative Mapping-
Set associative cache mapping combines the best of the direct and associative cache mapping techniques.
In set associative mapping, the cache blocks are divided into sets. The address is divided into three parts: tag bits,
set number and byte offset. The bits in the set number decide in which set of the cache the required block is
present, and the tag bits identify which block of the main memory is present. The bits in the byte offset field give
the byte of the block in which the content is present.
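The three-part split of the address can be sketched with bit operations; the field widths below (4 offset bits for 16-byte blocks, 2 set bits for 4 sets) are assumed purely for illustration:

```python
def split_address(addr, set_bits, offset_bits):
    """Split an address into (tag, set number, byte offset)."""
    offset = addr & ((1 << offset_bits) - 1)                   # lowest bits
    set_number = (addr >> offset_bits) & ((1 << set_bits) - 1)
    tag = addr >> (offset_bits + set_bits)                     # remaining high bits
    return tag, set_number, offset

# 16-byte blocks (4 offset bits) and 4 sets (2 set bits):
print(split_address(0b1101_10_0101, set_bits=2, offset_bits=4))  # (13, 2, 5)
```

The required block, if cached, is in set 2; the tag 13 is then compared against the tags of every line in that set.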
Locality of Reference
Locality of reference refers to the phenomenon that a computer program tends to access the same set of
memory locations over a particular period of time (temporal locality). It also refers to the tendency of a
program to access instructions and data whose addresses are near one another (spatial locality).
Auxiliary Memory:
Auxiliary memory, often referred to as secondary memory, serves as a repository for data and programs not
actively being used by the system's primary memory. Think of it as a vast library, holding volumes of information
ready to be accessed when needed.
•Magnetic disk is a storage device that is used to write, rewrite and access data.
•It uses a magnetization process.
Architecture-
The time taken by the disk to complete an I/O request is called the disk service time or disk access time. It is the sum of the seek time, the rotational latency, and the data transfer time.
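Disk access time is commonly modelled as seek time plus rotational latency (on average, the time for half a revolution) plus transfer time. A sketch of the arithmetic, with hypothetical figures:

```python
# Access time = seek time + average rotational latency + transfer time.
def disk_access_time_ms(seek_ms, rpm, transfer_ms):
    rotational_latency_ms = 0.5 * (60_000 / rpm)   # half a revolution, in ms
    return seek_ms + rotational_latency_ms + transfer_ms

# A 7200 RPM disk with a 9 ms average seek and a 1 ms transfer
# takes about 14.2 ms per request:
print(disk_access_time_ms(9, 7200, 1))
```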
Advantages :
• These are inexpensive, i.e., low cost memories.
• It provides backup or archival storage.
• It can be used for large files.
• It can be used for copying from disk files.
• It is a reusable memory.
• It is compact and easy to store on racks.
Disadvantages :
• Sequential access is a disadvantage: data cannot be accessed randomly or directly.
• It requires care in storage: it is vulnerable to humidity and dust and needs a suitable environment.
• Stored data cannot be easily updated or modified.
Optical disk :
An optical disk is an electronic data storage medium that can be written to and read from using a low-
powered laser beam. An optical disc is a flat, usually disc-shaped object that stores information in the form of
physical variations on its surface that can be read with the aid of a beam of light.
Most of today's optical disks are available in three formats: compact disks (CDs), digital versatile disks (DVDs) --
also referred to as digital video disks -- and Blu-ray disks, which provide the highest capacities and data transfer
rates of the three.
Data is written to an optical disk in a radial pattern starting near the center. An optical disk drive uses a laser beam
to read the data from the disk as it is spinning.
Usage
Optical discs are often stored in special cases sometimes called jewel cases and are most commonly used
for digital preservation, storing music, video, or data and programs for personal computers (PCs).
Durability
Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to
environmental and daily-use damage, if handled improperly.
Virtual memory is a memory management technique used by operating systems to give the appearance of a
large, continuous block of memory to applications, even if the physical memory (RAM) is limited. It allows the
system to compensate for physical memory shortages, enabling larger applications to run on systems with
less RAM.
The virtual memory manager removes some components from memory to make room for other components.
Paging
Paging divides memory into small fixed-size blocks called pages. When the computer runs out of RAM, pages that
aren’t currently in use are moved to the hard drive, into an area called a swap file. The swap file acts as an
extension of RAM. When a page is needed again, it is swapped back into RAM, a process known as page
swapping. This ensures that the operating system (OS) and applications have enough memory to run.
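The page-swapping idea can be sketched as a toy pager with a fixed number of physical frames and FIFO eviction to a simulated swap file (real operating systems use more sophisticated replacement policies):

```python
from collections import deque

class Pager:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.in_ram = deque()   # resident pages, oldest first
        self.swap = set()       # pages moved out to the swap file
        self.page_faults = 0

    def access(self, page):
        if page in self.in_ram:
            return                             # resident: no fault
        self.page_faults += 1                  # page fault: bring page in
        if len(self.in_ram) == self.num_frames:
            victim = self.in_ram.popleft()     # evict the oldest page ...
            self.swap.add(victim)              # ... to the swap file
        self.swap.discard(page)                # swapped back in, if it was out
        self.in_ram.append(page)

pager = Pager(num_frames=2)
for p in [0, 1, 0, 2, 1]:
    pager.access(p)
print(pager.page_faults)  # 3 (page 0 was swapped out to make room for page 2)
```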
Segmentation
Segmentation divides virtual memory into segments of different sizes. Segments that aren’t currently needed can
be moved to the hard drive. The system uses a segment table to keep track of each segment’s status, including
whether it’s in memory, if it’s been modified, and its physical address. Segments are mapped into a process’s
address space only when needed.
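Segment-table translation can be sketched in a few lines; each entry records a segment's base address and limit, and an offset beyond the limit is an addressing error. The bases and limits below are hypothetical:

```python
segment_table = {
    0: {"base": 1000, "limit": 400},   # segment 0: bytes 1000..1399
    1: {"base": 2500, "limit": 200},   # segment 1: bytes 2500..2699
}

def translate(segment, offset):
    """Map a logical (segment, offset) pair to a physical address."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segmentation fault: offset out of range")
    return entry["base"] + offset

print(translate(1, 50))  # 2550
```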