Chapter 4

Memory Management
Types of Memory

❏ ROM (Read Only Memory)

❏ RAM (Random Access Memory)

❏ Cache memory

❏ Flash Memory
Cont ’d

ROM (Read Only Memory)

▪ Retains stored data even without electrical power.


▪ It is sometimes called non-volatile memory as it is not erased when the system is
switched off.
Cont ’d

RAM (Random Access Memory)

▪ Any byte of memory can be accessed without touching the preceding bytes.
▪ RAM is the most common type of memory found in computers and other devices, such
as printers.
Cont’d
▪ Computer memory is a storage space that stores data and instructions for processing.

▪ It's a vital link between the computer's software and its CPU.

▪ Computer memory is made up of many small parts called cells.

▪ Each cell is built from one or more transistors and holds binary data.

▪ The cells have unique addresses, ranging from zero to the memory size minus one.

▪ Before a program can run, it must be loaded from a storage device into memory.
Cont ’d

▪ This allows the CPU to interact directly with the program.


▪ Memory is essential to every computer.
Cont ’d

Cache Memory

▪ It stores frequently used computer programs, applications, and data.


▪ Provides high-speed data access to a processor.
▪ It acts as a buffer between the main memory (RAM) and the central processing unit
(CPU).
Cont ’d

Flash Memory

▪ Non-volatile memory that can be erased and rewritten electronically, similar to EEPROM.
▪ Introduced by Toshiba in 1984.
▪ Most computers use it to hold their startup instructions.
▪ Also used in many mobile phones, smartphones, digital cameras, and PDAs.
Swapping

▪ Swapping is the process of moving data between random access memory (RAM) and
storage (usually a hard disk (HDD) or SSD) to free up space in RAM for other
processes.
▪ When the available RAM is insufficient to hold all the data and applications that are
actively in use, the operating system may use swapping to manage memory resources
more effectively.
Cont ’d

General swapping process


▪ Swap Out: When the operating system determines that certain data or program code in
RAM is not actively being used, it can transfer (or "page out") that data from RAM to a
designated space on the storage device, creating what is known as a "swap file" or
"page file."
▪ Swap In: When the data that was swapped out is needed again, the operating system
can bring it back into RAM from the swap file. This is known as "paging in."
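
As a rough illustration (all names below are hypothetical, not a real OS interface), the swap-out/swap-in mechanics can be modelled as pages moving between a small "RAM" and a "swap file":

```python
# Toy model of swap-out / swap-in: pages move between a small "RAM"
# and a "swap file" when RAM is full. All names are illustrative.
RAM_CAPACITY = 3                      # this RAM holds at most 3 pages

ram = {}                              # page_id -> data currently resident
swap_space = {}                       # page_id -> data written to the swap file

def swap_out(victim):
    """Move one resident page out to the swap file to free a frame."""
    swap_space[victim] = ram.pop(victim)

def swap_in(page, choose_victim):
    """Bring a page back into RAM, evicting a victim first if RAM is full."""
    if len(ram) >= RAM_CAPACITY:
        swap_out(choose_victim(ram))
    ram[page] = swap_space.pop(page)

# Example: RAM fills with pages A, B, C; D forces A out, then A is needed again.
for p in "ABC":
    ram[p] = f"data-{p}"
swap_out("A"); ram["D"] = "data-D"                      # A is paged out to make room for D
swap_in("A", choose_victim=lambda r: next(iter(r)))     # A returns, evicting B
print(sorted(ram), sorted(swap_space))                  # ['A', 'C', 'D'] ['B']
```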
Cont ’d

Virtual Memory
▪ It allows programs to run as if they have more memory than the physical RAM by
temporarily transferring data to and from a storage device.
▪ Virtual memory provides benefits in terms of costs, physical space, multitasking
capabilities, and data security.
▪ Each program has its own address space, which is broken into chunks called pages
• Each page is a contiguous range of addresses
• These pages are mapped onto physical memory to run
Memory Allocation and Partitioning

Memory allocation
▪ It is the process of assigning memory to a program or process.
▪ It involves reserving a block of memory of a certain size and returning a pointer to the
beginning of that block.
▪ The program can then use that memory for its own purposes.
Cont ’d

Memory partitioning
▪ It is the process of dividing the available memory into smaller partitions or sections.
▪ Each partition is assigned to a specific program or process.
▪ This allows multiple programs to run simultaneously, each with its own dedicated
section of memory.
Memory Management Techniques

✓ Contiguous and
✓ Non-contiguous
Cont ’d
Contiguous memory allocation
▪ Allocating a single, contiguous block of memory for a process or a data structure in a
computer's memory.

▪ A process is given a region of memory that is a contiguous sequence of addresses, meaning


that the addresses are consecutive and form a continuous block.

▪ Assume we have 100MB of physical memory and P1 requires 10MB, P2 requires


20MB, P3 requires 15MB and P4 requires 10MB
Cont ’d

Memory     Status      Memory     Status      Memory     Status      Memory     Status
0-9        P1          0-9        P1          0-9        P1          0-9        free
10-99      free        10-29      P2          10-29      P2          10-29      P2
                       30-99      free        30-44      P3          30-44      free
                                              45-99      free        45-54      P4
                                                                     55-99      free
   (1)                    (2)                    (3)                    (4)

Assume P1 and P3 finish before P4 is loaded into memory

Total free space in (4) = 10 + 15 + 45 = 70MB
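
A quick sketch confirms the free-space figure; the region list below simply mirrors snapshot (4) above and is only illustrative:

```python
# Snapshot (4) from the example above: 1MB-granularity regions and their owners.
# Regions are (start, end) inclusive, in MB.
regions = {
    (0, 9):   "free",   # P1 finished
    (10, 29): "P2",
    (30, 44): "free",   # P3 finished
    (45, 54): "P4",
    (55, 99): "free",
}

free_mb = sum(end - start + 1 for (start, end), owner in regions.items()
              if owner == "free")
print(free_mb)   # 10 + 15 + 45 = 70 (MB)
```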


Cont ’d

▪ In the contiguous technique, the entire process must be loaded into the main memory for
execution.

▪ Contiguous memory allocation can be categorized in two ways:

✓ Fixed (static) partition scheme


✓ Variable (Dynamic) partition scheme.
Cont ’d

✓ Fixed partition scheme


▪ Fixed partitioning, also known as static partitioning, is a memory allocation
technique used in operating systems to divide the physical memory into fixed-size
partitions or regions, each assigned to a specific process or user.

▪ There are equal and unequal fixed partitioning schemes.

▪ Advantages
✓ simple and easy to implement
✓ it is predictable (the operating system can ensure a minimum amount of memory for
each process)
✓ it can prevent processes from interfering with each other’s memory space
Cont ’d

✓ Fixed partition scheme

▪ Disadvantages
✓ can lead to internal fragmentation, where memory within a partition remains unused
✓ limits the number of processes that can run concurrently
Cont ’d

✓ Dynamic partition scheme


▪ used to alleviate the problem faced by Fixed Partitioning.

▪ In contrast with fixed partitioning, partitions are not made before the
execution or during system configuration.

▪ Initially, RAM is empty and partitions are made during the run-time
according to the process’s need instead of partitioning during system
configuration.

▪ The size of each partition will be equal to the size of the incoming process.


Cont ’d

✓ Dynamic partition scheme


▪ The partition size varies according to the need of the process so that
internal fragmentation can be avoided to ensure efficient utilization of RAM.

Advantages

✓ No Internal Fragmentation
✓ Processes can be loaded as long as free memory remains.
✓ No Limitation on the Size of the Process
Cont ’d

✓ Dynamic partition scheme

Disadvantages
✓ Difficult Implementation
✓ External Fragmentation: There will be external fragmentation despite the
absence of internal fragmentation.
Fragmentation

▪ Fragmentation refers to the phenomenon in computing where the available memory


space becomes divided into small, non-contiguous blocks.
▪ This division can occur in both physical and logical memory and can lead to
inefficiencies in memory utilization.
▪ Two common types of fragmentation
• Internal fragmentation and
• External fragmentation
Cont ’d

Internal Fragmentation

▪ It typically happens when memory is allocated in fixed-size blocks, and the allocated
block is larger than what the program actually needs.
▪ Example: In a system using fixed-size memory pages, if a process requires 3 KB of
memory and is allocated a 4 KB block, there is 1 KB of internal fragmentation.

0K-4K    P1      (1KB of this block is not used)
4K-8K    free

▪ Internal fragmentation can lead to inefficient use of memory, as valuable space is wasted within
allocated memory blocks.
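
The waste can be computed directly: round the request up to whole blocks and subtract the request. A minimal sketch, assuming sizes in KB:

```python
import math

def internal_fragmentation(request_kb, block_kb):
    """Waste inside the allocated blocks when memory is handed out
    only in fixed-size blocks of block_kb."""
    blocks = math.ceil(request_kb / block_kb)      # blocks actually allocated
    return blocks * block_kb - request_kb          # unused space inside them

print(internal_fragmentation(3, 4))   # 1 (KB wasted, as in the example above)
```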
Cont ’d

External Fragmentation

▪ Occurs when free memory blocks are scattered throughout the system, making it
difficult to allocate a contiguous block of memory to a process, even if the total free
space is sufficient.
▪ Example: Imagine a scenario where there are two free memory blocks of sizes 30KB and
20KB (left behind after a termination). If a new process requires 50KB of contiguous
memory, it cannot be accommodated, even though the total free space is 50KB, due to
the scattered nature of the free blocks.
Cont ’d

External Fragmentation (Cont’d)

▪ (Figure) Say memory holds P1 (20KB), P2 (10KB), and P6 (60KB), with a 30KB hole already
free, and P4 requires 50KB.

▪ After P1 terminates, the total free space is 30KB + 20KB = 50KB, but the two holes are not
adjacent, so P4 still cannot be loaded.
Memory Allocation Algorithm

▪ First-fit

✓ The first partition that is large enough to accommodate the process is allocated.

✓ If a partition is not large enough, the search continues with the next partition.


▪ Best-fit
✓ The smallest partition that is large enough to accommodate the process is
allocated.
▪ Worst-fit
✓ The largest partition is allocated to the process.
Cont.

▪ (Figure) P4, which needs 20B, requests memory after P2 exits; where is it allocated under
each of the above algorithms?
Cont ’d

▪ Next-fit

✓ Next-fit begins to scan memory from the location of the last placement, and
chooses the next available block that is large enough.

❖ Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation.
Example
Given five memory partitions of 100K, 500K, 200K, 300K, and 600K, how would the first-fit,
best-fit, worst-fit, and next-fit algorithms place processes of P1=210K, P2=410K, P3=110K,
and P4=415K?
100K

500K

200K

300K

600K

Physical Memory
Cont ’d

Partition    First fit    Best fit    Worst fit    Next fit
100K         free         free        free         free
500K         P1           P2          P2           P1
200K         P3           P3          free         P3
300K         free         P1          P3           free
600K         P2           P4          P1           P2

P4 → starvation under first fit, worst fit, and next fit; only best fit can place P4 (in the 600K partition).
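
The placements above can be reproduced with a short simulation. The sketch below treats each partition as a fixed block that holds at most one process (no splitting), which is how the example treats them; the function and variable names are illustrative, not taken from any particular OS:

```python
def allocate(partitions, processes, strategy):
    """Assign each process to one whole partition using the given strategy.
    partitions: list of sizes in KB; processes: dict name -> size in KB.
    Returns {process: partition index, or None if it must wait (starvation)}."""
    occupied = [None] * len(partitions)
    placement, last = {}, 0                      # 'last' is used only by next-fit

    for name, size in processes.items():
        # indices of free partitions big enough for this process
        fits = [i for i, p in enumerate(partitions)
                if occupied[i] is None and p >= size]
        if strategy == "next":                   # scan from the last placement, wrapping
            fits = [i for i in range(last, len(partitions)) if i in fits] + \
                   [i for i in range(0, last) if i in fits]
        if not fits:
            placement[name] = None               # no partition fits: starvation
            continue
        if strategy in ("first", "next"):
            i = fits[0]                          # first suitable partition in scan order
        elif strategy == "best":
            i = min(fits, key=lambda i: partitions[i])   # smallest suitable partition
        elif strategy == "worst":
            i = max(fits, key=lambda i: partitions[i])   # largest suitable partition
        occupied[i], placement[name] = name, i
        last = i + 1                             # next-fit resumes after this slot
    return placement

partitions = [100, 500, 200, 300, 600]
processes = {"P1": 210, "P2": 410, "P3": 110, "P4": 415}
for s in ("first", "best", "worst", "next"):
    print(s, allocate(partitions, processes, s))
```

Running it prints the same assignments as the table, with P4 left unplaced for every strategy except best fit.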


Cont ’d
Non-contiguous memory allocation

▪ This method involves allocating memory in non-contiguous blocks.


▪ The available memory is divided into pages or segments, and each page or segment is
assigned to a process.
▪ The pages or segments can be allocated using one of the following methods:
✓ Paging: The memory is divided into fixed-size pages, and each page is assigned to a process.

✓ Segmentation: The memory is divided into variable-sized segments, and each segment is
assigned to a process.
Paging

▪ Paging involves breaking up physical memory into fixed-size blocks called page frames.
▪ Simultaneously, the logical address space is divided into blocks of the same size, known
as pages.
▪ The operating system keeps a mapping table, known as the page table, to map logical
pages to physical page frames.
▪ Page Size: The size of each page is a critical parameter in paging. Common page sizes are
4 KB, 2 MB, or 4 MB.
Cont'd

(Figure) A logical address, from the CPU's point of view.
Cont ’d
▪ Page Table: This table is used to translate logical addresses to physical addresses.
▪ It contains entries for each page, indicating the corresponding frame in physical
memory.
▪ A computer that generates 16-bit addresses can address 0 to 64KB; these addresses are
called virtual addresses.

(Figure) Page table: page number → frame number
Memory Management Unit (MMU)

▪ The MMU is a computer hardware component that handles memory and caching operations for
the processor.
▪ It's also known as a paged memory management unit (PMMU).
▪ The MMU is responsible for translating virtual addresses used by software to physical
addresses used in the memory system.
▪ It controls access to a computer's physical memory, allowing multiple processes to run
simultaneously without interfering with each other.
Cont ’d

▪ The MMU is usually integrated into the processor, but in some systems it occupies
a separate integrated circuit (IC).
Cont ’d

▪ These program-generated addresses are called virtual addresses and form the virtual
address space.
▪ When virtual memory is used, the virtual addresses do not go directly to the memory
bus.
▪ Instead, they go to an MMU (Memory Management Unit) that maps the virtual
addresses onto physical memory addresses, as illustrated in the previous figure.
Example
❑ The computer has only 32KB of physical memory

✓ 64KB programs can be written, but they cannot be loaded into memory in their entirety
and run

✓ A complete copy of the program's core image (up to 64KB) must be kept on disk so that
pieces can be brought in as needed

✓ The virtual address space is divided into fixed-size units called pages

✓ The corresponding units in physical memory are called page frames

✓ Pages and page frames are generally the same size

✓ With a 64KB virtual address space, 4KB pages, and 32KB of physical memory, there are
16 virtual pages and 8 page frames


Cont ’d

✓ 0K-4K means that the virtual or physical addresses in that range are 0 to 4095

✓ 4K-8K means 4096 to 8191

✓ When the program tries to access address 0 using the instruction MOV REG,0

✓ virtual address 0 is sent to the MMU


Virtual address space (virtual page → page frame; X = unmapped) alongside physical memory:

Virtual page   Virtual addresses   Page frame    |   Page frame   Physical addresses
    15           60K-64K               X         |       7           28K-32K
    14           56K-60K               X         |       6           24K-28K
    13           52K-56K               X         |       5           20K-24K
    12           48K-52K               X         |       4           16K-20K
    11           44K-48K               7         |       3           12K-16K
    10           40K-44K               X         |       2            8K-12K
     9           36K-40K               5         |       1            4K-8K
     8           32K-36K               X         |       0            0K-4K
     7           28K-32K               X         |
     6           24K-28K               X         |
     5           20K-24K               3         |
     4           16K-20K               4         |
     3           12K-16K               0         |
     2            8K-12K               6         |
     1            4K-8K                1         |
     0            0K-4K                2         |
Cont ’d

• MOV REG,0

✓ Thus the MMU has mapped all virtual addresses between 0 and 4095 onto physical
addresses 8192 to 12287

✓ MOV REG,8192 is effectively turned into MOV REG,24576


Cont ’d

✓ As a third example, virtual address 20500 is 20 bytes from the start of virtual page 5
(virtual addresses 20480 to 24575) and maps onto physical address 12288 + 20 = 12308.

(Figure) Page table excerpt (page number → frame number): the MMU maps virtual address 20500
onto physical address 12308.


Cont ’d

▪ By itself, this ability to map the 16 virtual pages onto any of the eight page frames by
setting the MMU’s map appropriately does not solve the problem that the virtual address
space is larger than the physical memory.

▪ Since we have only eight physical page frames, only eight of the virtual pages from the
previous figure are mapped onto physical memory.

▪ The others, shown as a cross in the figure, are not mapped. In the actual hardware, a
Present/absent bit keeps track of which pages are physically present in memory.

What happens if the program references an unmapped address, for example, by


using the instruction
MOV REG,32780
Cont ’d

▪ which is byte 12 within virtual page 8 (starting at 32768)? The MMU notices that the
page is unmapped (indicated by a cross in the figure) and causes the CPU to trap to the
operating system.

▪ This trap is called a page fault.


Cont ’d

▪ In a simple implementation, the mapping of virtual addresses onto physical addresses


can be summarized as follows:
▪ the virtual address is split into a virtual page number (high-order bits) and an offset
(low-order bits).
▪ The virtual page number is used as an index into the page table to find the entry for that
virtual page.
▪ From the page table entry, the page frame number (if any) is found.
Cont ’d

▪ The page frame number is attached to the high-order end of the offset,
replacing the virtual page number, to form a physical address that can be sent to
the memory.
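
For the 16-bit example above (4KB pages, so a 4-bit virtual page number and a 12-bit offset), the split-index-combine steps can be sketched as follows. PAGE_TABLE reproduces the mapping from the earlier figure; the code is only an illustration, not real MMU hardware:

```python
PAGE_SHIFT = 12                      # 4KB pages -> the low 12 bits are the offset
# Page table from the earlier figure: virtual page -> page frame (absent = unmapped)
PAGE_TABLE = {0: 2, 1: 1, 2: 6, 3: 0, 4: 4, 5: 3, 9: 5, 11: 7}

def translate(vaddr):
    """Split a 16-bit virtual address into page number and offset,
    look the page up, and rebuild the physical address."""
    page   = vaddr >> PAGE_SHIFT              # high-order 4 bits
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)  # low-order 12 bits
    frame = PAGE_TABLE.get(page)
    if frame is None:
        raise RuntimeError(f"page fault: virtual page {page} is not mapped")
    return (frame << PAGE_SHIFT) | offset

print(translate(0))       # 8192   (page 0 -> frame 2)
print(translate(8192))    # 24576  (page 2 -> frame 6)
print(translate(20500))   # 12308  (page 5 -> frame 3, offset 20)
# translate(32780) raises a "page fault": virtual page 8 is unmapped
```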
Cont ’d

Structure of a Page Table Entry

▪ The exact layout of an entry in the page table is highly machine dependent,
▪ but the kind of information present is roughly the same from machine to machine.
▪ In Fig. 3-11 we present a sample page table entry.
▪ The size varies from computer to computer, but 32 bits is a common size.
▪ The most important field is the Page frame number.
Cont ’d
▪ The Protection bits tell what kinds of access are permitted.
✓ In the simplest form, this field contains 1 bit, with 0 for read/write and 1 for read only.
▪ The Modified and Referenced bits keep track of page usage.
✓ When a page is written to, the hardware automatically sets the Modified bit.
✓ If the page in it has been modified (i.e., is ‘‘dirty’’), it must be written back to the disk.
▪ The Referenced bit is set whenever a page is referenced, either for reading or for writing.
✓ Its value is used to help the operating system choose a page to evict when a page fault occurs.
▪ The ability to disable caching is important for pages that map onto device registers rather
than memory.
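
As a rough sketch of such a 32-bit entry (the bit positions below are assumptions chosen for illustration, not the layout of any particular machine), the fields can be packed and unpacked with shifts and masks:

```python
# Illustrative 32-bit page table entry layout (bit positions are assumptions):
#   bit 0        present/absent
#   bit 1        protection (0 = read/write, 1 = read only)
#   bit 2        modified ("dirty")
#   bit 3        referenced
#   bit 4        caching disabled
#   bits 12..31  page frame number
def make_pte(frame, present=1, read_only=0, modified=0, referenced=0, nocache=0):
    return (frame << 12) | (nocache << 4) | (referenced << 3) | \
           (modified << 2) | (read_only << 1) | present

def frame_of(pte):
    return pte >> 12

pte = make_pte(frame=3, referenced=1)
print(hex(pte), frame_of(pte), pte & 1)   # 0x3009 3 1 -> frame 3, present
```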
Page Replacement Algorithms
▪ Page Fault: A page fault happens when a running program accesses a memory page that
is mapped into the virtual address space but not loaded in physical memory.
▪ Since actual physical memory is much smaller than virtual memory, page faults happen.
In case of a page fault, Operating System might have to replace one of the existing pages
with the newly needed page.
▪ Different page replacement algorithms suggest different ways to decide which page to
replace. The target for all algorithms is to reduce the number of page faults.
Cont’d
▪ Page replacement is a process that determines which page to remove to make
space for a new page in the main memory.
▪ Page replacement algorithms decide which page to remove.

Common Page Replacement Algorithms:


1. First In First Out (FIFO)
2. Optimal Page replacement
3. Least Recently Used (LRU)
4. Most Recently Used (MRU)
Cont’d
First in first out (FIFO)

▪ FIFO is the simplest page replacement algorithm.


▪ In this algorithm, the operating system keeps track of all pages in memory in a
queue, with the oldest page at the front of the queue.
▪ When a page needs to be replaced, the page at the front of the queue is selected for
removal.

Example : Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page


frames. Find the number of page faults.
Cont’d

Reference String     1   3   0   3   5   6   3
Frame 1              1   1   1   1   5   5   5
Frame 2                  3   3   3   3   6   6
Frame 3                      0   0   0   0   3
Fault (*) / Hit (h)  *   *   *   h   *   *   *

Total Page Faults (*) = 6, Page Fault Ratio (PFR) = (6/7)*100 ≈ 86%

Total Page Hits (h) = 1, Page Hit Ratio (PHR) = (1/7)*100 ≈ 14%
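
A minimal FIFO simulation (illustrative code, not from the course text) reproduces the 6 faults counted above:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                      # page hit
        faults += 1                       # page fault
        if len(frames) == n_frames:       # memory full: evict the oldest page
            frames.discard(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

refs = [1, 3, 0, 3, 5, 6, 3]
print(fifo_faults(refs, 3))               # 6 faults (so 1 hit out of 7 references)
```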
Cont’d
Optimal Page Replacement (OPR)

▪ In this algorithm, the page that will not be used for the longest time in the
future is replaced.
Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frame. Find number of page fault.
Cont’d

Reference String 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 7 7 3 3 3 3 3 3 3 3 3
Frame 2 0 0 0 0 0 0 0 0 0 0 0 0 0
Frame 3 1 1 1 1 1 4 4 4 4 4 4 4
Frame 4 2 2 2 2 2 2 2 2 2 2 2
* * * * h * h * h h h h h h

Total Page Faults(*) = 6, Page Fault Ratio (PFR) = (6/14)*100 ≈ 43%


Total Page hit (h) = 8, Page Hit Ratio (PHR) = (8/14)*100 ≈ 57%
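
Optimal replacement needs to look ahead in the reference string; a small sketch (the helper next_use is illustrative) reproduces the 6 faults above:

```python
def optimal_faults(refs, n_frames):
    """Count page faults under optimal (Belady) replacement: evict the
    resident page whose next use lies farthest in the future."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # page hit
        faults += 1
        if len(frames) == n_frames:
            def next_use(p):              # index of p's next reference, or infinity
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))            # 6 faults, 8 hits
```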
Cont’d
Least Recently used (LRU)

▪ As the name suggests, this algorithm is based on the strategy that whenever a page
fault occurs, the least recently used page will be replaced with a new page.
Example 1: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with
4 page frames. Find number of page faults.
Cont’d

Reference String 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 7 7 3 3 3 3 3 3 3 3 3
Frame 2 0 0 0 0 0 0 0 0 0 0 0 0 0
Frame 3 1 1 1 1 1 4 4 4 4 4 4 4
Frame 4 2 2 2 2 2 2 2 2 2 2 2
* * * * h * h * h h h h h h

Total Page Faults(*) = 6, Page Fault Ratio (PFR) = (6/14)*100 ≈ 43%


Total Page hit (h) = 8, Page Hit Ratio (PHR) = (8/14)*100 ≈ 57%
Cont’d
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 3
page frames. Find number of page faults.

Reference String 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 2 2 2 2 4 4 4 0 0 0 0
Frame 2 0 0 0 0 0 0 0 0 3 3 3 3 3
Frame 3 1 1 1 3 3 3 2 2 2 2 2 2
* * * * h * h * * * * h h h

Total Page Faults(*) = 9, Page Fault Ratio (PFR) = (9/14)*100 ≈ 64%


Total Page hit (h) = 5, Page Hit Ratio (PHR) = (5/14)*100 ≈ 36%
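
LRU can be simulated by recording when each resident page was last referenced; the sketch below (illustrative names) reproduces both examples:

```python
def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement."""
    last_used, faults = {}, 0               # page -> index of its most recent use
    for i, page in enumerate(refs):
        if page not in last_used:
            faults += 1
            if len(last_used) == n_frames:  # evict the least recently used page
                victim = min(last_used, key=last_used.get)
                del last_used[victim]
        last_used[page] = i                 # record this reference
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))   # 6 faults (Example 1)
print(lru_faults(refs, 3))   # 9 faults (Example 2)
```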
Cont’d
Most Recently used (MRU)

▪ In this algorithm, the most recently used page is the one replaced.
▪ Belady’s anomaly can occur in this algorithm.
Example 1: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with
4 page frames. Find number of page faults.
Cont’d

Reference String 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 7 7 7 7 7 7 7 7 7 7 7
Frame 2 0 0 0 0 3 0 4 4 4 4 4 4 4
Frame 3 1 1 1 1 1 1 1 1 1 1 1 1
Frame 4 2 2 2 2 2 2 3 0 3 2 3
* * * * h * * * h * * * * *

Total Page Faults(*) = 12, Page Fault Ratio (PFR) = (12/14)*100 ≈ 86%
Total Page hit (h) = 2, Page Hit Ratio (PHR) = (2/14)*100 ≈ 14%
Segmentation

▪ Segmentation is an alternative to contiguous memory allocation,


allowing for more flexible memory usage.

▪ A user program can be subdivided using segmentation, in which the


program and its associated data are divided into a number of segments .

▪ It is not required that all segments of all programs be of the same


length, although there is a maximum segment length.
Cont’d

▪ Because of the use of unequal-size segments, segmentation is similar to


dynamic partitioning.

▪ Analogous to paging, a simple segmentation scheme would make use of a


segment table for each process and a list of free blocks of main memory.

▪ The entry should also provide the length of the segment, to assure that
invalid addresses are not used.
Cont’d

▪ When a process enters the Running state, the address of its segment table is
loaded into a special register used by the memory management hardware.

▪ Consider an address of 𝑛 + 𝑚 bits, where the leftmost 𝑛 bits are the segment
number and the rightmost 𝑚 bits are the offset.
▪ Based on the figure, 𝑛 = 4 and 𝑚 = 12. Thus the maximum segment size is 2^12 = 4096.

▪ The following steps are needed for address translation:

o Extract the segment number as the leftmost 𝑛 bits of the logical


address.

o Use the segment number as an index into the process segment


table to find the starting physical address of the segment.

o Compare the offset, expressed in the rightmost 𝑚 bits, to the


length of the segment. If the offset is greater than or equal to
the length, the address is invalid.
o The desired physical address is the sum of the starting physical address of the
segment plus the offset.

▪ In our example, we have the logical address 0001001011110000, which is segment


number 1, offset 752. Suppose that this segment is residing in main memory
starting at physical address 0010000000100000.

▪ Then the physical address is 0010000000100000 + 001011110000 = 0010001100010000.
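
The arithmetic can be checked with a short sketch. The segment table below is hypothetical: the base comes from the example, while the segment length is assumed to be the 4096 maximum:

```python
N_BITS, M_BITS = 4, 12                        # n = 4 segment bits, m = 12 offset bits

# Hypothetical segment table entry for segment 1: (base, length).
# The base is taken from the example; the length (4096, the maximum) is assumed.
segment_table = {1: (0b0010000000100000, 4096)}

def translate(logical, table):
    segment = logical >> M_BITS               # extract the leftmost n bits
    offset  = logical & ((1 << M_BITS) - 1)   # extract the rightmost m bits
    base, length = table[segment]
    if offset >= length:                      # offset beyond the end of the segment
        raise ValueError("invalid address")
    return base + offset

phys = translate(0b0001001011110000, segment_table)
print(phys, format(phys, "016b"))             # 8976  0010001100010000
```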
Working Set

▪ A working set is the set of pages in the main memory (RAM) that a process is
currently actively using.
▪ The working set represents the portion of a process's address space that is
frequently accessed during its execution.
▪ Working set model is based on the assumption of locality.
▪ This model uses a parameter, Δ, to define the working-set window.
▪ The idea is to examine the most recent page references.
Cont’d

▪ The set of pages in the most recent Δ page references is the working set (Figure 3.12).
▪ If a page is in active use, it will be in the working set.
▪ If it is no longer being used, it will drop from the working set Δ time units after its last
reference.
▪ Thus, the working set is an approximation of the program’s locality.
Cont’d

▪ The working set model is a memory management technique that determines how many
unique pages are in a locality.
▪ The purpose of the working set model is to decide the number of frames allocated to
each process.

Figure 3.12
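
A minimal sketch of computing the working set at a time t with window Δ; the reference string and Δ below are made up for illustration:

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the most recent `delta` references
    ending at time t (inclusive)."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

refs  = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4]   # illustrative reference string
delta = 10                                           # illustrative window size
print(working_set(refs, t=9, delta=delta))           # {1, 2, 5, 6, 7}
```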
Thrashing

▪ If the process does not have the number of frames it needs to support pages in active
use, it will quickly page-fault.
▪ At this point, it must replace some page.
▪ However, since all its pages are in active use, it must replace a page that will be needed
again right away.
▪ Consequently, it quickly faults again, and again, and again, replacing pages that it must
bring back in immediately.
Cont’d

▪ This high paging activity is called thrashing.


▪ A process is thrashing if it is spending more time paging than executing.
▪ This working-set strategy prevents thrashing while keeping the degree of
multiprogramming as high as possible. Thus, it optimizes CPU utilization.
▪ To prevent thrashing, we must provide a process with as many frames as it needs.
Reading Assignment

❑ Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with


3 page frames. Find number of page faults.

❑ Belady’s anomaly
❑ Clock Page Replacement Algorithm (CPR)
❑ Memory Management Unit(MMU)
❑ Different Types of Segment
