CH7_OS-1
Memory Management
Logical vs Physical Address
• A logical address, also known as a virtual address, is an address generated by the CPU during program execution. It is the address seen by the process and is relative to the program's address space. The process accesses memory using logical addresses, which are translated into physical addresses by the operating system and hardware.
• A physical address is the actual address in main memory where data is stored. It is a location in physical memory, as opposed to a virtual address. The Memory Management Unit (MMU) translates logical addresses into physical addresses. A process cannot access a physical address directly; it must use the corresponding logical address, which is mapped to the physical location.
• Both logical and physical addresses are used to identify a specific location in memory.
• Both logical and physical addresses can be represented in different formats, such as binary,
hexadecimal, or decimal.
• Both logical and physical addresses have a finite range, which is determined by the number of bits
used to represent them.
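The translation from a logical to a physical address is performed by the MMU in hardware. Below is a minimal Python sketch, assuming a simple base (relocation) register plus limit register scheme; the register values and the function name are illustrative, not taken from the slides.

# Minimal sketch of logical-to-physical translation, assuming a simple
# base (relocation) + limit register scheme. Values are illustrative.
BASE_REGISTER = 14000    # where the process's partition starts in RAM
LIMIT_REGISTER = 3000    # size of the process's logical address space

def translate(logical_address: int) -> int:
    """Map a CPU-generated logical address to a physical address."""
    if not 0 <= logical_address < LIMIT_REGISTER:
        raise MemoryError("trap: address outside the process's address space")
    return BASE_REGISTER + logical_address

print(translate(346))    # 14346: the corresponding location in main memory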
Swapping
• Swapping in an operating system is a process that moves data or programs between the computer's main memory (RAM) and secondary storage (usually a hard disk or SSD). This helps manage the limited space in RAM and allows the system to run more programs than it could otherwise handle simultaneously.
Process of Swapping
• Using only a single main memory, the CPU can run multiple processes with the help of a swap partition.
• If the system swaps in and out too often, performance can decline severely, because the CPU spends more time swapping than executing processes.
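As noted above, heavy swapping hurts performance. The toy Python simulation below, with made-up process names and a RAM capacity of two processes, only illustrates the idea of moving the oldest resident process out to the swap partition when memory is full.

# Toy swapping simulation: RAM can hold only RAM_SLOTS processes at once;
# the oldest resident is swapped out to make room for a new one.
from collections import deque

RAM_SLOTS = 2
ram = deque()           # processes currently resident in main memory
swap_space = set()      # processes pushed out to the swap partition
transfers = 0           # swap-in/swap-out operations performed

def schedule(process: str) -> None:
    global transfers
    if process in ram:
        return                       # already resident: no disk traffic
    if len(ram) == RAM_SLOTS:
        victim = ram.popleft()       # swap out the oldest resident
        swap_space.add(victim)
        transfers += 1
    swap_space.discard(process)      # swap in (if it was on disk)
    ram.append(process)
    transfers += 1

for p in ["P1", "P2", "P3", "P1", "P2"]:   # more processes than RAM slots
    schedule(p)
print(list(ram), swap_space, transfers)    # frequent transfers hint at thrashing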
Dynamic Linking & Loading
Linking
• The process of collecting and maintaining pieces of code and data into a single file is known as linking in the operating system.
• Linking is used to join all the modules.
Loading
• Loading is the process of loading the program from secondary memory into main memory for execution.
• Loading is used to allocate addresses to all executable files; this task is done by the loader.
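Dynamic linking and loading mean that a library is brought into the process only at run time. The sketch below uses Python's ctypes to load the C math library dynamically; it assumes a POSIX system where libm can be located (the library name differs per platform).

# Minimal sketch of dynamic loading: the C math library is located and
# mapped into the running process only when this code executes (POSIX assumed).
import ctypes
import ctypes.util

libm_name = ctypes.util.find_library("m")   # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(libm_name)               # the loader maps the library into memory

libm.sqrt.restype = ctypes.c_double         # describe the symbol's signature
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(2.0))                       # call a dynamically linked routine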
Memory Allocation Strategies
• First-Fit
• This is a fairly straightforward technique: we start at the beginning of memory and assign the first hole that is large enough to meet the needs of the process. The technique can also be applied so that each new search resumes where the previous search for a first-fit hole left off.
• Best-Fit
• This greedy method allocates the smallest hole that meets the needs of the process; its goal is to minimise the memory that would otherwise be wasted as internal fragmentation under static partitioning. To select the best match for the process without wasting memory, the holes must first be sorted (or searched) by size.
• Worst-Fit
• This strategy is the opposite of Best-Fit. Once the holes are sorted by size, the largest hole is assigned to the incoming process. The reasoning behind this allocation is that, because the process is given a sizeable hole, a large amount of space is left over, and the leftover hole can accommodate a few additional processes. A short sketch of all three strategies follows this list.
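A minimal Python sketch of the three hole-selection strategies described above; the hole sizes and the 212 KB request are made-up values.

# First-fit, best-fit, and worst-fit hole selection over a list of free holes.
def first_fit(holes, request):
    """Index of the first hole large enough for the request."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Index of the smallest hole that still fits the request."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Index of the largest hole, provided it fits the request."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]            # free partition sizes in KB
for strategy in (first_fit, best_fit, worst_fit):
    print(strategy.__name__, "->", strategy(holes, 212))
# first_fit -> 1 (500 KB), best_fit -> 3 (300 KB), worst_fit -> 4 (600 KB)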
Fragmentation
• Fragmentation is an unwanted problem in the operating system: as processes are loaded into and unloaded from memory, the free memory space becomes fragmented. Processes cannot be assigned to the resulting memory blocks because the blocks are too small, so the blocks stay unused. A small numeric illustration follows.
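The snippet below illustrates the problem with made-up hole sizes: the total free memory would be enough for the request, but no single block is.

# Fragmented free memory: the request fails even though total free space suffices.
holes = [50, 30, 40, 25]        # scattered free blocks, in KB
request = 120                   # incoming process size, in KB

total_free = sum(holes)         # 145 KB free in total
largest_hole = max(holes)       # but only 50 KB is contiguous

print(total_free >= request)    # True  -> enough memory overall
print(largest_hole >= request)  # False -> the allocation still fails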
Types of Fragmentation
• Internal Fragmentation: the block allocated to a process is larger than what the process actually needs, and the unused space inside the allocated block is wasted.
• External Fragmentation: the total free memory is large enough for a request, but it is scattered across small non-contiguous holes, so the request cannot be satisfied.
Advantages of Paging
• Ease of Swapping: Individual pages can be moved between physical memory and
disk (swap space) without affecting the entire process, making swapping faster and
more efficient.
• Improved Security and Isolation: Each process works within its own set of pages,
preventing one process from accessing another’s memory space.
Disadvantages of Paging
• Internal Fragmentation: If the size of a process is not a perfect multiple of the page
size, the unused space in the last page results in internal fragmentation.
• Increased Overhead: Maintaining the Page Table requires additional memory and
processing. For large processes, the page table can grow significantly, consuming
valuable memory resources.
• Page Table Lookup Time: Accessing memory requires translating logical addresses to
physical addresses using the page table. This additional step increases memory access
time, although Translation Lookaside Buffers (TLBs) can help reduce the impact.
• I/O Overhead During Page Faults: When a required page is not in physical memory
(page fault), it needs to be fetched from secondary storage, causing delays and
increased I/O operations.
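A minimal Python sketch of the mechanisms discussed above: splitting a logical address into page number and offset, looking the page up in a page table, and computing the internal fragmentation in the last page. The 4 KB page size and the page-table contents are assumptions for illustration.

# Paging address translation with a made-up page table (4 KB pages assumed).
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 12}          # page number -> frame number

def translate(logical_address: int) -> int:
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:            # page fault: page must be fetched from disk
        raise LookupError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(5000))                    # page 1, offset 904 -> frame 3 -> 13192

# Internal fragmentation: a 10 000-byte process needs 3 pages (12 288 bytes),
# so the unused part of the last page is wasted.
process_size = 10_000
pages_needed = -(-process_size // PAGE_SIZE)       # ceiling division
print(pages_needed * PAGE_SIZE - process_size)     # 2288 bytes wasted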
Optimal Page Replacement
Initially, all slots are empty, so when 7, 0, 1, and 2 arrive they are allocated to the empty slots -> 4 page faults.
0 is already in memory -> 0 page faults. When 3 arrives, it takes the place of 7 because 7 is not used for the longest duration of time in the future -> 1 page fault. 0 is already in memory -> 0 page faults. 4 takes the place of 1 -> 1 page fault.
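Below is a Python sketch of the Optimal policy traced above (replace the page whose next use lies farthest in the future). The slide does not show the full reference string, so the one below is an assumption (a common textbook string with 4 frames) that matches the walkthrough; it yields 6 page faults.

# Optimal page replacement: evict the resident page whose next use is farthest
# in the future (or that is never used again).
def optimal_page_faults(refs, capacity):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                             # hit: no fault
        faults += 1
        if len(frames) < capacity:
            frames.append(page)                  # fill an empty slot
            continue
        future = refs[i + 1:]
        def next_use(p):                         # distance to the page's next reference
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)       # farthest-used (or never-used) page
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]   # assumed reference string
print(optimal_page_faults(refs, capacity=4))        # 6 page faults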
Least Recently Used