COA Unit-4 Memory

Unit-4 : Memory

•Memory is used to store data/information and instructions. It is a data storage unit or device where
the data to be processed and the instructions required for processing are stored. Both input and
output can be stored here.

•The memory of a computer holds (stores) program instructions (what to do), data (information), operand
(manipulated or operated upon data), and calculations (ALU results).

•The CPU controls the information stored in memory.

•Information is fetched, manipulated (under program control) and written (or written back) into memory
for immediate or later use

•The internal memory of a computer is also referred to as main memory, global memory, or main storage.
The secondary or auxiliary memory (also called mass storage) is provided by various peripheral devices.
Memory hierarchical pyramid

The five levels in a memory hierarchy are categorized based on speed and usage and form a pyramid.
The levels in a memory hierarchical pyramid are the following:
Level 0: CPU registers
Level 1: Cache memory
Level 2: Primary memory or main memory
Level 3: Secondary memory or magnetic disks or solid-state storage
Level 4: Tertiary memory or optical disks or magnetic tapes
Level 0: CPU registers. A CPU register is a small section of memory inside the CPU that can store the
small amounts of data required to perform various operations. It can hold operands and intermediate
results, load resulting data to main memory, and contain the address of a memory location.
Registers are present inside the CPU and therefore have the quickest access time. They are also
the smallest in size.

Level 1: cache memory. Cache memory is required to store segments of programs or chunks of
data that are frequently accessed by the processor. When the CPU needs to access program
code or data, it first checks the cache memory. If it finds the data, it reads it quickly. If it doesn't,
it looks into the main memory to find the required data.

Level 2: Main Memory


Main memory, also known as RAM (Random Access Memory), is the primary memory of a
computer system. It has a larger storage capacity than cache memory, but it is slower. Main
memory is used to store data and instructions that are currently in use by the CPU.
Types of Main Memory
•Static RAM
•Dynamic RAM
Level 3: Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile
memory unit that has a larger storage capacity than main memory. It is used to store data and
instructions that are not currently in use by the CPU.

Level 4: Magnetic Tape


Magnetic tape is simply a magnetic recording device covered with a plastic film. It is generally
used for the backup of data. The access time of magnetic tape is comparatively slow, since the tape
strip must be advanced to the required position before data can be accessed.
Semiconductor RAM Memory

A digital processing system requires the facility of storing digital
information. This information is basically in the form of binary codes
and data, which are stored in memory. Initially, information was
stored using magnetic storage, until semiconductor technologies were
developed. Semiconductor memories are available in various types and
capacities. These memories have the advantages of small size, low
cost, high speed, and high reliability over magnetic memories.

Types of RAM
RAM is broadly categorised into two types:
•SRAM (Static Random Access Memory)
•DRAM (Dynamic Random Access Memory)
Parameter         | SRAM                                      | DRAM
Full Form         | Static Random Access Memory               | Dynamic Random Access Memory
Component         | Stores information using transistors      | Stores data using capacitors
Need to Refresh   | No capacitors, so no refresh is needed    | Capacitor contents must be refreshed periodically
Speed             | Faster data read/write                    | Slower data read/write
Power Consumption | More                                      | Less
Data Life         | Long                                      | Short
Cost              | Expensive                                 | Less expensive
Density           | Low-density device                        | High-density device
Usage             | Cache memory in computers and other computing devices | Main memory in computer systems
Storage Capacity  | Smaller                                   | Larger


2D and 2.5D Memory organization

The internal structure of memory, whether RAM or ROM, is made up of memory
cells that each contain one memory bit. A group of 8 bits makes a byte. The memory
is organised as a multidimensional array of rows and columns in which each
cell stores a bit and a complete row contains a word.

2D Memory organization
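As a concrete illustration of 2D organization, the sketch below splits a memory address into a row-select and a column-select index, as the row and column decoders of a 2D memory array would. The field widths are hypothetical assumptions for illustration only.

```python
# 2D memory organization: an address is split into a row part and a
# column part; the row decoder selects one row, the column decoder
# selects one column, and the cell at the intersection is accessed.
# Field widths below are hypothetical, for illustration only.

ROW_BITS = 4   # 16 rows
COL_BITS = 4   # 16 columns -> 16 x 16 = 256 one-bit cells

def decode_2d(address):
    """Split an 8-bit address into (row, column) select indices."""
    row = (address >> COL_BITS) & ((1 << ROW_BITS) - 1)
    col = address & ((1 << COL_BITS) - 1)
    return row, col

print(decode_2d(0b10110101))  # -> (11, 5)
```

With this square arrangement, a 256-cell memory needs only two 4-to-16 decoders instead of one 8-to-256 decoder, which is the main motivation for 2D organization.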
ROM:

ROM, which stands for read only memory, is a memory device or storage medium that stores information
permanently. It is also the primary memory unit of a computer along with the random access memory (RAM). It is
called read only memory as we can only read the programs and data stored on it but cannot write on it. It is
restricted to reading words that are permanently stored within the unit.

The manufacturer of ROM fills the programs into the ROM at the time of manufacturing the ROM. After this, the
content of the ROM can't be altered, which means you can't reprogram, rewrite, or erase its content later.

A simple example of ROM is the cartridge used in video game consoles that allows the system to run many
games.
Features of ROM:

• Non-Volatile Memory
• Read-Only Nature
• Permanent Storage
• Booting and Initialization
Types of ROM:

Masked Read Only Memory (MROM):


It is a hardware memory device in which programs and instructions are stored at the time of manufacturing by the
manufacturer. So it is programmed during the manufacturing process and can't be modified, reprogrammed, or erased later.

Programmable Read Only Memory (PROM):


It is manufactured as blank memory and programmed after manufacturing. You can purchase and then program it once.
Once it is programmed, the data cannot be modified later, so it is also called as one-time programmable device.
It is used in cell phones, video game consoles, medical devices, etc.

Erasable and Programmable Read Only Memory (EPROM):


EPROM is a type of ROM that can be reprogrammed and erased many times. It retains its content until it is exposed to
ultraviolet light. A special device called a PROM programmer or PROM burner is needed to reprogram an EPROM.
Uses: It is used in some micro-controllers to store programs.

Electrically Erasable and Programmable Read Only Memory (EEPROM):


EEPROM is a type of read only memory that can be erased and reprogrammed repeatedly, typically up to about 10,000 times.
It is similar to flash memory. It is erased and reprogrammed electrically, without using ultraviolet light.

FLASH ROM:
It is an advanced version of EEPROM. The advantage of using this memory is that you can delete or write whole blocks of data,
whereas in EEPROM you can delete or write only 1 byte of data at a time, so this memory is faster than EEPROM. It can
be reprogrammed without removing it from the computer, and its access time is very low (i.e., it is fast).
Uses: It is used for storage and transferring data between a personal computer and digital devices.
CACHE MEMORY

The data or contents of main memory that are used frequently by the CPU are stored in the cache memory,
so that the processor can access that data in a shorter time. Whenever the CPU needs to access
memory, it first checks the cache memory. If the data is not found in the cache, the CPU then accesses
the main memory.
Cache memory is placed between the CPU and the main memory. The block diagram for a cache memory
can be represented as shown in the figure.

Cache memory is organised as distinct set of blocks where each set contains a small fixed number of blocks.
CACHE PERFORMANCE

•When the CPU needs to read or write a location in main memory (that is, whenever a process requires some data), it
first checks for a corresponding entry in the cache.
•If the processor finds that the memory location is in the cache and the data is available there, this is
referred to as a cache hit, and the data is read from the cache.
•If the processor does not find the memory location in the cache, this is referred to as a cache miss. On a
cache miss, the data is read from the main memory; the cache then allocates a new entry and copies the data from
main memory, on the assumption that the data will be needed again.
The performance of cache memory is measured by a metric known as the hit ratio:

Hit ratio = Cache hits / (Cache hits + Cache misses)
          = Number of cache hits / Total memory accesses
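For example, with hypothetical access counts, the hit ratio works out as follows:

```python
# Hit ratio = cache hits / (cache hits + cache misses).
# The counts below are hypothetical, for illustration.
hits, misses = 950, 50
hit_ratio = hits / (hits + misses)
print(f"Hit ratio: {hit_ratio:.2f}")  # -> Hit ratio: 0.95
```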

CACHE MEMORY DESIGN ISSUES

Cache Memory design represents the following categories:


• Block size
• Cache size
• Mapping function
• Replacement algorithm
• Write policy.
These are as follows:
1. Block Size
•Block size is the unit of information exchanged between cache and main memory. On a storage system, all
volumes share the same cache space, so the volumes can have only one cache block size.
•As the block size increases from small to larger sizes, the cache hit ratio initially increases, because by
the principle of locality more useful data is brought into the cache with each miss.
•As the block becomes even larger, however, the hit ratio begins to decrease, since fewer blocks fit in the
cache and each additional word brought in is less likely to be used.

2. Cache Size
•If the size of the cache is small it increases the performance of the system. The larger the cache, the larger the
number of gates involved in addressing the cache.

3. Mapping Function
•Cache lines or cache blocks are the fixed-size blocks in which data transfer takes place between memory and cache.
When a cache line is copied into the cache from memory, a cache entry is created.
•There are far fewer cache lines than memory blocks, which is why an algorithm is needed for mapping memory
blocks into cache lines.
•The mapping function is the means of determining which memory block occupies which cache line. Whenever a cache
entry is created, the mapping function determines the cache location that the block will occupy.
4. Replacement Algorithm
•If all the cache slots are occupied by other blocks and a new line must be read from
memory, some line already in the cache has to be replaced. The replacement algorithm
chooses, whenever a new block is to be loaded into the cache, which existing block to
replace. Ideally, it replaces the block that will not be required in the near future.
•The most common replacement policy is the least-recently-used (LRU) algorithm. It rests
on the assumption that a block used recently is likely to be used again. Other replacement
algorithms are FIFO (first-in, first-out) and least-frequently-used (LFU).
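The LRU policy can be sketched in a few lines of Python (illustrative only; real caches implement replacement in hardware):

```python
from collections import OrderedDict

# Minimal LRU replacement sketch for a cache with a fixed number of
# lines. On a hit the block is marked most-recently-used; on a miss
# with a full cache, the least-recently-used block is evicted.

class LRUCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()   # block_number -> data

    def access(self, block, data=None):
        if block in self.lines:                 # cache hit
            self.lines.move_to_end(block)       # mark most recently used
            return True
        if len(self.lines) >= self.num_lines:   # cache full: evict LRU
            self.lines.popitem(last=False)
        self.lines[block] = data                # bring block into cache
        return False                            # cache miss

cache = LRUCache(num_lines=2)
print(cache.access(1))  # miss -> False
print(cache.access(2))  # miss -> False
print(cache.access(1))  # hit  -> True
print(cache.access(3))  # miss, evicts block 2 -> False
print(cache.access(2))  # miss, 2 was evicted  -> False
```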

5. Write Policy
Data can be written to memory using a variety of techniques, but the two main ones involving
cache memory are the following:
• Write-through. Data is written to both the cache and main memory at the same time.
• Write-back. Data is only written to the cache initially. Data may then be written to main
memory, but this does not need to happen and does not inhibit the interaction from taking
place.
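The two write policies can be contrasted with a toy Python model using a dirty bit (the names and structure here are illustrative assumptions, not a real cache implementation):

```python
memory = {}   # models main memory: address -> value
cache  = {}   # models the cache: address -> (value, dirty)

def write_through(addr, value):
    """Write-through: update cache and main memory together."""
    cache[addr] = (value, False)
    memory[addr] = value

def write_back(addr, value):
    """Write-back: update only the cache and mark the line dirty."""
    cache[addr] = (value, True)

def evict(addr):
    """On eviction, a dirty line must be flushed to main memory."""
    value, dirty = cache.pop(addr)
    if dirty:
        memory[addr] = value

write_through(0x10, 7)
write_back(0x20, 9)
print(memory.get(0x20))  # -> None (not yet written to memory)
evict(0x20)
print(memory.get(0x20))  # -> 9 (flushed on eviction)
```

The sketch shows why write-back is faster (fewer memory writes) but leaves main memory temporarily stale until the dirty line is flushed.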
CACHE MAPPING

Cache mapping defines how a block from the main memory is mapped to the cache memory in case
of a cache miss.

Main memory is divided into equal-size partitions called blocks or frames.
Cache memory is divided into partitions of the same size as the blocks, called lines.
During cache mapping, a block of main memory is copied into a cache line; the block is not
removed from main memory.

Cache Mapping Techniques-

Cache mapping is performed using following three different techniques-

1. Direct Mapping
2. Associative Mapping (Fully Associative Mapping)
3. Set Associative Mapping (K-way Set Associative Mapping)
1. Direct Mapping-

In direct mapping,
A particular block of main memory can map only to a particular line of the cache.
The line number of cache to which a particular block can map is given by-

Cache line number = (Main memory block address) mod (Number of lines in cache)

Need of Replacement Algorithm


In direct mapping,
•There is no need of any replacement algorithm.
•This is because a main memory block can map only to a particular line of the cache.
•Thus, the new incoming block will always replace the existing block (if any) in that particular line.
Example-

Consider cache memory is divided into ‘n’ number of lines.


Then, block ‘j’ of main memory can map to line number (j mod n) only of the cache.
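For instance, with a hypothetical cache of n = 8 lines, the mapping works out as:

```python
# Direct mapping: block j of main memory maps to cache line (j mod n).
n = 8  # number of cache lines (hypothetical)
for block in (0, 5, 8, 13):
    print(f"block {block} -> cache line {block % n}")
# Blocks 0 and 8 both map to line 0, so they would evict each other
# if accessed alternately -- the main weakness of direct mapping.
```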
2. Fully Associative Mapping-

In fully associative mapping,


A block of main memory can map to any line of the cache that is freely available at that moment.
This makes fully associative mapping more flexible than direct mapping.

Need of Replacement Algorithm-

In fully associative mapping,


•A replacement algorithm is required.
•Replacement algorithm suggests the block to be replaced if all the cache lines are occupied.
•Thus, replacement algorithm like FCFS Algorithm, LRU Algorithm etc is employed.
Example-
In the figure, all the lines of the cache are freely available.
Thus, any block of main memory can map to any line of the cache.
Had all the cache lines been occupied, one of the existing blocks would have to be replaced.
3. Set Associative Mapping (K-way Set Associative Mapping) -

Set associative cache mapping combines the best of direct and associative cache mapping techniques.

In set associative mapping, the cache blocks are divided into sets. The address is divided into three parts: tag bits,
set number, and byte offset. The bits in the set number decide in which set of the cache the required block is
present, and the tag bits identify which block of the main memory is present. The bits in the byte offset field give
the byte of the block in which the content is present.

Address format: | Tag | Set Number | Byte Offset |
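The address split can be sketched in Python; the field widths below (64-byte blocks giving 6 offset bits, 128 sets giving 7 set bits) are hypothetical assumptions:

```python
# Split a physical address into (tag, set number, byte offset).
OFFSET_BITS = 6   # 64-byte blocks (hypothetical)
SET_BITS = 7      # 128 sets (hypothetical)

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)          # low bits
    set_no = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (OFFSET_BITS + SET_BITS)            # remaining high bits
    return tag, set_no, offset

print(split_address(0x12345))  # -> (9, 13, 5)
```

The set number selects one set; all lines in that set are then searched in parallel by comparing their stored tags against the address tag.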

Locality of Reference
Locality of reference refers to the phenomenon in which a computer program tends to access the same set of
memory locations over a particular time period. In other words, locality of reference refers to the tendency of
a program to access instructions whose addresses are near one another.
Auxiliary Memory:
Auxiliary memory, often referred to as secondary memory, serves as a repository for data and programs not
actively being used by the system's primary memory. Think of it as a vast library, holding volumes of information
ready to be accessed when needed.

Why is Auxiliary Memory Important?


•Data Persistence
•Increased Storage Capacity
•System Efficiency

Types of Auxiliary Memory:


•Hard Disk Drives (HDDs)
•Solid-State Drives (SSDs)
•Flash Memory
•Magnetic Tape
•Magnetic Disk
•Optical Disks
Magnetic Disk

•Magnetic disk is a storage device that is used to write, rewrite and access data.
•It uses a magnetization process.

Architecture-

•The entire disk is divided into platters.


•Each platter consists of concentric circles called as tracks.
•These tracks are further divided into sectors which are the smallest divisions in the disk.
•A cylinder is formed by combining the tracks at a given radius of a disk pack.
•There exists a mechanical arm called as Read / Write head.
•It is used to read from and write to the disk.
•The head has to reach a particular track and then wait for the rotation of the platter.
•The rotation brings the required sector of the track under the head.
•Each platter has 2 surfaces- top and bottom and both the surfaces are used to store the data.
•Each surface has its own read / write head.
Disk Performance Parameters-

The time taken by the disk to complete an I/O request is called the disk service time or disk access time.

Components that contribute to the service time are :


•Seek time
•Rotational latency
•Data transfer rate
•Controller overhead
•Queuing delay
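As a rough sketch, the access time is approximately the sum of these components. The figures below are illustrative, not taken from any particular drive:

```python
# Disk access time ~= seek time + rotational latency + transfer time
# + controller overhead (queuing delay omitted for simplicity).
seek_ms = 4.0                        # average seek time (hypothetical)
rpm = 7200
rotational_ms = (60_000 / rpm) / 2   # average latency = half a revolution
transfer_ms = 0.1                    # time to transfer the requested sector
overhead_ms = 0.2                    # controller overhead

access_ms = seek_ms + rotational_ms + transfer_ms + overhead_ms
print(f"Access time: {access_ms:.2f} ms")  # -> Access time: 8.47 ms
```

Note that seek time and rotational latency dominate, which is why disk scheduling tries to minimise head movement.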
Magnetic Tape :
In magnetic tape, only one side of the ribbon is used for storing data. It is a sequential-access
memory consisting of a thin plastic ribbon coated with magnetic oxide. Data read/write
speed is slower because of the sequential access. It is highly reliable, and requires a magnetic
tape drive for writing and reading data.

Let’s see various advantages and disadvantages of Magnetic Tape memory.

Advantages :
• These are inexpensive, i.e., low cost memories.
• It provides backup or archival storage.
• It can be used for large files.
• It can be used for copying from disk files.
• It is a reusable memory.
• It is compact and easy to store on racks.

Disadvantages :
• Sequential access is the main disadvantage: it does not allow random or direct access.
• It requires careful storage: it is vulnerable to humidity and dust, and needs a suitable environment.
• Stored data cannot easily be updated or modified.
Optical disk :
An optical disk is an electronic data storage medium that can be written to and read from using a low-
powered laser beam. An optical disc is a flat, usually disc-shaped object that stores information in the form of
physical variations on its surface that can be read with the aid of a beam of light.

Most of today's optical disks are available in three formats: compact disks (CDs), digital versatile disks (DVDs) --
also referred to as digital video disks -- and Blu-ray disks, which provide the highest capacities and data transfer
rates of the three.

Data is written to an optical disk in a radial pattern starting near the center. An optical disk drive uses a laser beam
to read the data from the disk as it is spinning.
Usage
Optical discs are often stored in special cases, sometimes called jewel cases, and are most commonly used
for digital preservation and for storing music, video, or data and programs for personal computers (PCs).
Durability
Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to
environmental and daily-use damage, if handled improperly.

Why is optical media not as popular today?


The popularity of optical media has greatly decreased because almost all kinds of content are now available on the
internet, where they can be downloaded or streamed anywhere with a network connection.
Another reason is that the price of USB flash drives, which can store far more data, has decreased,
making optical media a less popular storage solution.
Virtual memory

Virtual memory is a memory management technique used by operating systems to give the appearance of a
large, continuous block of memory to applications, even if the physical memory (RAM) is limited. It allows the
system to compensate for physical memory shortages, enabling larger applications to run on systems with
less RAM.

It is an illusion of a memory that is larger than the real memory.

The virtual memory manager removes some components from memory to make room for other components.

How Virtual Memory Works?


Virtual Memory is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer memory.
Implementation of Virtual Memory:
Virtual memory is implemented using Demand Paging or Demand Segmentation.

Paging
Paging divides memory into small fixed-size blocks called pages. When the computer runs out of RAM, pages that
aren’t currently in use are moved to the hard drive, into an area called a swap file. The swap file acts as an
extension of RAM. When a page is needed again, it is swapped back into RAM, a process known as page
swapping. This ensures that the operating system (OS) and applications have enough memory to run.
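This process can be illustrated with a toy Python model of page swapping (FIFO eviction for simplicity; the frame count and page numbers are illustrative assumptions):

```python
from collections import deque

FRAMES = 2      # RAM frames available (hypothetical)
ram = deque()   # pages currently resident in RAM (oldest on the left)
swap = set()    # pages moved out to the swap file
faults = 0

def access(page):
    """Access a page; swap it in on a page fault (FIFO eviction)."""
    global faults
    if page in ram:
        return "hit"
    faults += 1
    if len(ram) >= FRAMES:      # RAM full: swap the oldest page out
        swap.add(ram.popleft())
    swap.discard(page)          # the page is swapped back in, if it was out
    ram.append(page)
    return "page fault"

for p in [1, 2, 1, 3, 1]:
    print(p, access(p))
print("total page faults:", faults)  # -> total page faults: 4
```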

Segmentation
Segmentation divides virtual memory into segments of different sizes. Segments that aren’t currently needed can
be moved to the hard drive. The system uses a segment table to keep track of each segment’s status, including
whether it’s in memory, if it’s been modified, and its physical address. Segments are mapped into a process’s
address space only when needed.

Benefits of Using Virtual Memory


•More processes can be maintained in main memory.
•A process larger than the main memory can be executed, because of demand paging; the OS loads pages
of a process into main memory as required.
•It allows greater multiprogramming levels by using less of the available (primary) memory for each process.
•The virtual address space can be much larger than the physical main memory.
•It makes it possible to run more applications at once.
•Users are spared from having to add memory modules when RAM space runs out, and applications are
freed from managing shared memory.
•Programs can start faster, since only a portion of a program needs to be in memory for execution.
•Memory isolation between processes increases security.
•It makes it possible for several larger applications to run at once.
•Memory allocation is comparatively cheap.
•It avoids external fragmentation.
•It is efficient for managing logical partition workloads on the CPU.
•Automatic data movement is possible.

Limitation of Virtual Memory


•It can slow down system performance, as data must constantly be transferred between physical
memory and the hard disk.
•It can increase the risk of data loss or corruption, as data can be lost if the hard disk fails or if there is a
power outage while data is being transferred to or from the disk.
•It can increase the complexity of the memory management system, as the operating system must manage
both physical and virtual memory.
