
Chapter 7

Memory Management
Logical Vs Physical Address
• A logical address, also known as a virtual address, is an address generated by the CPU during program execution. It is the address seen by the process and is relative to the program's address space. The process accesses memory using logical addresses, which are translated into physical addresses before main memory is accessed.
• A physical address is the actual address in main memory where data is stored. The Memory Management Unit (MMU) translates each logical address into its corresponding physical address. A user process never accesses a physical address directly; it always goes through the corresponding logical address.
• Both logical and physical addresses are used to identify a specific location in memory.

• Both logical and physical addresses can be represented in different formats, such as binary,
hexadecimal, or decimal.

• Both logical and physical addresses have a finite range, which is determined by the number of bits
used to represent them.
Swapping
• Swapping in an operating system is a process that moves data or programs
between the computer’s main memory (RAM) and a secondary storage
(usually a hard disk or SSD). This helps manage the limited space in RAM and
allows the system to run more programs than it could otherwise handle
simultaneously.
Process of Swapping

• When the RAM is full and a new program needs to run, the operating system selects a program or data that is currently in RAM but not actively being used.
• The selected data is moved to secondary storage, freeing up space in RAM for the new program.
• When the swapped-out program is needed again, it can be swapped back into RAM, replacing another inactive program or data if necessary.
Advantages
• Swapping minimizes the waiting time for processes to be executed by using
the swap space as an extension of RAM, allowing the CPU to keep working
efficiently without long delays due to memory limitations.
• Swapping allows the operating system to free up space in the main memory
(RAM) by moving inactive or less critical data to secondary storage (like a
hard drive or SSD). This ensures that the available RAM is used for the most
active processes and applications, which need it the most for optimal
performance.

• Using only a single main memory, the CPU can run multiple processes with the help of a swap partition.

• It allows larger programs or applications to run on systems with limited physical memory by swapping less critical data to secondary storage and loading the necessary parts into RAM.
Disadvantages
• Risk of data loss during swapping arises because of the dependency
on secondary storage for temporary data retention. If the system
loses power before this data is safely written back into RAM or saved
properly, it can result in the loss of important data, files, or system
states.

• The number of page faults increases as the system frequently swaps pages in and out of memory, which directly impacts performance because fetching data from the disk (or swap space) is much slower than accessing it from RAM.

• If the system swaps in and out too often, performance can decline severely, as the CPU spends more time swapping than executing processes.
Dynamic Linking & Loading
Linking vs Loading
• Linking is the process of collecting and combining pieces of code and data into a single file. Loading is the process of bringing the program from secondary memory into main memory for execution.
• Linking is used to join all the modules of a program. Loading is used to assign addresses to the executable file; this task is done by the loader.
• Linking is performed with the help of the linker, a program that combines the object modules of a program into a single object file; it is also called a link editor. A loader is a program that places programs into memory and prepares them for execution.
• Linkers are an important part of the software development process because they enable separate compilation: rather than organizing a large application as one monolithic source file, we can decompose it into smaller, more manageable modules that can be modified and compiled separately. The loader is responsible for the allocation, linking, relocation, and loading of programs.
Continuous memory allocation
• Continuous (contiguous) memory allocation is a memory management technique in which each process is allocated a single contiguous block of memory. The technique simplifies memory management by allowing the operating system to manage memory in larger blocks, which makes address translation straightforward and memory access fast. Its main drawback is that free space becomes fragmented as processes come and go, a problem discussed later in this chapter.
Techniques for Contiguous Memory Allocation: Input Queues

• Contiguous blocks of memory are assigned to processes, so the main memory tends to fill up. When a process finishes, however, it leaves behind an empty block, termed a hole. A new process can be placed in this area. As a result, main memory contains both processes and holes, and each of these holes may be assigned to a new process that arrives.
Techniques for Contiguous Memory Allocation: Input Queues

• First-Fit
• This is a fairly straightforward technique: we scan from the beginning of memory and allocate the first hole that is large enough to meet the needs of the process. A common variant resumes each search from where the previous one left off instead of starting over.
• Best-Fit
• This greedy method allocates the smallest hole that meets the needs of the process, aiming to minimize the memory that would otherwise be wasted as leftover space. To select the best match without wasting memory, the holes must be searched (or kept sorted) by size.
• Worst-Fit
• This strategy is the opposite of Best-Fit: the largest hole is chosen and assigned to the incoming process. The rationale is that because the process is given a sizable hole, the leftover space will itself be a large hole, big enough to house additional processes.
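The three hole-selection policies above can be sketched in a few lines of Python. This is an illustrative sketch, not a real allocator: `holes` is assumed to be a simple list of free-block sizes, and the functions only pick a hole, without splitting it or tracking addresses.

```python
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole large enough, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if it is large enough."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= request else None

holes = [100, 500, 200, 300, 600]   # free-block sizes in KB (made up)
print(first_fit(holes, 212))        # index 1 (the 500 KB hole)
print(best_fit(holes, 212))         # index 3 (the 300 KB hole)
print(worst_fit(holes, 212))        # index 4 (the 600 KB hole)
```

Note how the same 212 KB request lands in a different hole under each policy, which is exactly the trade-off the three strategies embody.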
Fragmentation
• Fragmentation is an unwanted problem in the operating system: as processes are loaded into and unloaded from memory, the free memory space becomes broken into small pieces. Processes can't be assigned to these memory blocks because the blocks are too small, so the blocks stay unused.
Types of Fragmentation

• There are mainly two types of fragmentation in the operating system.


These are as follows:
1.Internal Fragmentation
2.External Fragmentation
• Internal Fragmentation
• When a process is allocated to a memory block, and if the process is
smaller than the amount of memory requested, a free space is created in
the given memory block. Due to this, the free space of the memory block
is unused, which causes internal fragmentation.
• For Example:
• Assume that memory allocation in RAM is done using fixed partitioning
(i.e., memory blocks of fixed sizes). 2MB, 4MB, 4MB, and 8MB are the
available sizes. The Operating System uses a part of this RA
Let's suppose a process P1 with a size of 3MB arrives and is given a memory
block of 4MB. As a result, the 1MB of free space in this block is unused and
cannot be used to allocate memory to another process. It is known as internal
fragmentation.
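The arithmetic in the example can be written out as a tiny helper. This is a sketch with made-up names (`internal_fragmentation` is not a standard API); it just formalizes "block size minus process size" under fixed partitioning.

```python
def internal_fragmentation(block_mb, process_mb):
    """Unused space left inside a fixed-size block after allocation, in MB."""
    if process_mb > block_mb:
        raise ValueError("process does not fit in this block")
    return block_mb - process_mb

# Process P1 (3 MB) placed in a 4 MB fixed partition, as in the example:
print(internal_fragmentation(4, 3))  # 1 MB wasted inside the block
```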
External Fragmentation
• External fragmentation happens when a dynamic memory allocation method leaves many small free blocks scattered through memory, none of them individually usable. The quantity of usable memory is substantially reduced if there is too much external fragmentation: there is enough total memory space to complete a request, but it is not contiguous. This is known as external fragmentation.
• As an example of external fragmentation, suppose there is sufficient total free space (50 KB) to run a process P5 that needs 45 KB, but the free memory is not contiguous. You can use compaction, paging, or segmentation to make that free space usable for the process.
Paging
• Paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.
• The main idea behind paging is to divide each process into pages. Main memory is likewise divided into fixed-size frames.
• One page of the process is stored in one of the frames of memory. The pages can be stored at different locations in memory, so the frames holding a process need not be contiguous.
• Pages of the process are brought into main memory only when they are required; otherwise they reside in secondary storage.
Memory Management Unit
• The purpose of the Memory Management Unit (MMU) is to convert the logical address into the physical address. The logical address is the address generated by the CPU for every page, while the physical address is the actual address of the frame where each page is stored.
• When the CPU accesses a page using a logical address, the operating system needs to obtain the physical address to access that page physically.
• The logical address has two parts:
1. Page Number
2. Offset
• The memory management unit converts the page number to the frame number.
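The page-number/offset split and the translation step can be sketched as follows. The page size (1 KB) and the page-table contents are assumptions chosen for illustration; a real MMU does this in hardware, not in software.

```python
PAGE_SIZE = 1024                      # assume 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}       # assumed mapping: page number -> frame number

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a physical address."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page_number]   # in a real OS, a missing entry is a page fault
    return frame * PAGE_SIZE + offset

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```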
Hardware Implementation of Paging
• The hardware implementation of the page table can be done by using dedicated registers. But registers are satisfactory only if the page table is small. If the page table contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
• The TLB is associative, high-speed memory.
• Each entry in the TLB consists of two parts: a tag and a value.
• When this memory is used, an item is compared with all tags simultaneously. If the item is found, then the corresponding value is returned.
Advantages of Paging

• Eliminates External Fragmentation: Paging divides memory into fixed-size blocks (pages and frames), so processes can be loaded wherever there is free space in memory. This prevents wasted space due to fragmentation.

• Efficient Memory Utilization: Since pages can be placed in non-contiguous memory locations, even small free spaces can be utilized, leading to better memory allocation.

• Supports Virtual Memory: Paging enables the implementation of virtual memory, allowing processes to use more memory than is physically available by swapping pages between RAM and secondary storage.

• Ease of Swapping: Individual pages can be moved between physical memory and disk (swap space) without affecting the entire process, making swapping faster and more efficient.

• Improved Security and Isolation: Each process works within its own set of pages, preventing one process from accessing another's memory space.
Disadvantages of Paging

• Internal Fragmentation: If the size of a process is not a perfect multiple of the page
size, the unused space in the last page results in internal fragmentation.

• Increased Overhead: Maintaining the Page Table requires additional memory and
processing. For large processes, the page table can grow significantly, consuming
valuable memory resources.

• Page Table Lookup Time: Accessing memory requires translating logical addresses to
physical addresses using the page table. This additional step increases memory access
time, although Translation Lookaside Buffers (TLBs) can help reduce the impact.

• I/O Overhead During Page Faults: When a required page is not in physical memory
(page fault), it needs to be fetched from secondary storage, causing delays and
increased I/O operations.

• Complexity in Implementation: Paging requires sophisticated hardware and software support, including the Memory Management Unit (MMU) and algorithms for page replacement, which add complexity to the system.
Segmentation
• Segmentation is another memory management technique, one that reflects the user's view of a process. The user's view is mapped onto physical memory.
• In segmentation, a process is divided into multiple segments. Unlike in paging, the segments are not necessarily the same size; in paging, a process is divided into equal partitions called pages. The module contained in a segment determines the size of the segment.
Virtual Memory
• Virtual memory is a memory management technique used by operating systems to give applications the appearance of a large, contiguous block of memory, even if the physical memory (RAM) is limited. It allows larger applications to run on systems with less RAM.
• The main objective of virtual memory is to support multiprogramming and to allow programs to be larger than the available physical memory.
Demand Paging
• Demand paging is a technique used in virtual memory systems where pages enter main memory only when requested or needed by the CPU. In demand paging, the operating system loads only the necessary pages of a program into memory at runtime, instead of loading the entire program into memory at the start. A page fault occurs when the program needs to access a page that is not currently in memory.
• The operating system then loads the required page from disk into memory and updates the page tables accordingly. This process is transparent to the running program, which continues to run as if the page had always been in memory.
Page Fault
• The term “page miss” or “page fault” refers to a situation where a referenced page is not found in main memory.
• When a program tries to access a page (a fixed-size block of memory) that isn't currently loaded in physical memory (RAM), an exception known as a page fault happens. To handle the page fault, the operating system must bring the required page into memory from secondary storage (such as a hard drive) before allowing the program to access it.
Page Replacement
• First In First Out (FIFO)
• Optimal Page Replacement
• Least Recently Used (LRU)
• Most Recently Used (MRU)
FIFO
• This is the simplest page replacement algorithm. The operating system keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider page reference string 1, 3,
0, 3, 5, 6, 3 with 3-page frames. Find the number
of page faults using FIFO Page Replacement
Algorithm.
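A minimal FIFO fault counter, written as a sketch (the function name `fifo_faults` is made up), can be used to check the example. Running it on the reference string above gives the answer.

```python
from collections import deque

def fifo_faults(references, n_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # oldest page at the left end
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()     # evict the oldest resident page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6 page faults
```

The only hit is the second reference to 3; every other reference either fills an empty frame or evicts the oldest page, for a total of 6 faults.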
Optimal Page Replacement

• In this algorithm, the page that will not be used for the longest duration of time in the future is replaced.
• Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know future requests. Its use is to set a benchmark against which other replacement algorithms can be analyzed.
Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page frames. Find the number of page faults using Optimal Page Replacement.

Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 page fault. 0 is already there —> 0 page faults. 4 takes the place of 1 —> 1 page fault. All the remaining references (2, 3, 0, 3, 2, 3) are already in memory —> 0 page faults. Total page faults = 6.
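The optimal policy can also be checked mechanically. The sketch below (function name `optimal_faults` is made up) evicts the resident page whose next use lies farthest in the future, or that is never used again, exactly as described above.

```python
def optimal_faults(references, n_frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)           # free frame available
            continue

        def next_use(p):
            """Distance to p's next reference; infinity if never used again."""
            future = references[i + 1:]
            return future.index(p) if p in future else float("inf")

        victim = max(frames, key=next_use)    # farthest next use gets evicted
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))  # 6 page faults
```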
Least Recently Used

• In this algorithm, the page that is least recently used is replaced.
• The least recently used page replacement algorithm keeps track of page usage over a period of time. It works on the principle of locality of reference, which states that a program tends to access the same set of memory locations repeatedly over a short period of time. So pages that have been used heavily in the recent past are most likely to be used heavily in the near future as well.
• In this algorithm, when a page fault occurs, the page that has not been used for the longest duration of time is replaced by the newly requested page.
Example: Let’s see the performance of LRU on the reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3-page frames:

Total page faults = 6
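Fault counts for LRU traces like this one can be verified with a short sketch (the name `lru_faults` is made up). The list keeps pages ordered from least to most recently used; a hit moves the page to the back, and a fault evicts the front.

```python
def lru_faults(references, n_frames):
    """Count page faults under LRU replacement."""
    frames = []                       # least recently used at the front
    faults = 0
    for page in references:
        if page in frames:
            frames.remove(page)       # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)         # evict the least recently used page
        frames.append(page)           # this page is now the most recently used
    return faults

print(lru_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))
```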


Last In First Out (LIFO) Page Replacement Algorithm

• This algorithm works on the Last In First Out principle: the newest page in memory is replaced by the requested page. Usually, this is done through a stack, where we maintain a stack of the pages currently in memory with the newest page at the top. Whenever a page fault occurs, the page at the top of the stack is replaced.
Example: Let’s see how the LIFO performs for our
example string of 3, 1, 2, 1, 6, 5, 1, 3 with 3-page
frames:

Total page faults = 5


Most Recently Used (MRU)

• In this algorithm, the page that has been used most recently is replaced. Belady’s anomaly can occur in this algorithm.
• Belady’s anomaly occurs when the number of page faults increases significantly as the number of frames increases.
Example 4: Consider the page reference string 7, 0, 1, 2,
0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page frames. Find number
of page faults using MRU Page Replacement Algorithm.
Explanation
• Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults
• 0 is already there —> 0 page faults
• When 3 comes, it takes the place of 0 because 0 is most recently used —> 1 page fault
• When 0 comes, it takes the place of 3 —> 1 page fault
• When 4 comes, it takes the place of 0 —> 1 page fault
• 2 is already in memory —> 0 page faults
• When 3 comes, it takes the place of 2 —> 1 page fault
• When 0 comes, it takes the place of 3 —> 1 page fault
• When 3 comes, it takes the place of 0 —> 1 page fault
• When 2 comes, it takes the place of 3 —> 1 page fault
• When 3 comes, it takes the place of 2 —> 1 page fault
• Total page faults = 12
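The MRU trace above can be reproduced with a small sketch (the name `mru_faults` is made up). It tracks only the most recently used resident page, since that is the one MRU evicts on a fault.

```python
def mru_faults(references, n_frames):
    """Count page faults under MRU (most recently used) replacement."""
    frames = []
    mru = None                        # the most recently used resident page
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.remove(mru)    # evict the most recently used page
            frames.append(page)
        mru = page                    # this reference is now the most recent
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(mru_faults(refs, 4))  # 12 page faults, matching the step-by-step trace
```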
Thank you
