
UNIT IV (Third Part – 3/3)

Memory Management
(Page Replacement)
Page Replacement
• When a process generates a page fault, the memory
manager must locate the referenced page in secondary
storage, load it into a page frame in main memory and
update the corresponding page table entry
• Over-allocation of memory is prevented by modifying
the page-fault service routine to include page
replacement
• Modified (dirty) bit
– Set to 1 if the page has been modified; 0 otherwise
– Helps the system quickly determine which pages have
been modified
Page Replacement
• Optimal page replacement strategy (OPT or MIN)
– Obtains optimal performance: replaces the page that
will not be referenced again until furthest into the
future
• A page-replacement strategy is characterized by
– The heuristic it uses to select a page for replacement
– The overhead it incurs
Basic Page Replacement
• Find the location of the desired page on
disk
• Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page
replacement algorithm to select a victim
frame
• Bring the desired page into the (newly) free
frame; update the page and frame tables
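The steps above can be sketched in Python. All names here (`handle_page_fault`, `choose_victim_slot`, `read_from_disk`) are illustrative stand-ins, not a real kernel API: `choose_victim_slot` represents whichever replacement policy is in use, and `read_from_disk` represents the secondary-storage read.

```python
def handle_page_fault(page, frames, capacity, choose_victim_slot, read_from_disk):
    """Service a fault on `page`. `frames` maps frame number -> resident page."""
    if len(frames) < capacity:
        slot = len(frames)                  # a free frame exists: use it
    else:
        slot = choose_victim_slot(frames)   # no free frame: pick a victim
        # (if the victim's dirty bit were set, it would be written back first)
    read_from_disk(page)                    # locate the page on disk and load it
    frames[slot] = page                     # update the page and frame tables
    return slot
```

Any of the algorithms on the following slides can be plugged in as `choose_victim_slot`.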
Existing page replacement
algorithms
• First-In-First-Out (FIFO) Algorithm
• Optimal Algorithm
• Least Recently Used (LRU) Algorithm
• LRU Approximation Algorithms
o Second chance algorithm
• Counting Algorithms
First-In-First-Out (FIFO)
Algorithm
• When a page must be replaced, the oldest
page in memory is chosen
• It maintains a FIFO queue holding all pages
in memory
• Likely to replace heavily used pages
• Can be implemented with relatively low
overhead
• Impractical for most systems
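As a minimal sketch (not from the slides), FIFO replacement can be simulated to count the page faults a reference string produces:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults for reference string `refs` under FIFO."""
    frames = set()
    queue = deque()                  # resident pages in arrival order, oldest first
    faults = 0
    for page in refs:
        if page in frames:
            continue                 # hit: FIFO ignores references to resident pages
        faults += 1
        if len(frames) == n_frames:
            victim = queue.popleft() # evict the oldest page
            frames.remove(victim)
        frames.add(page)
        queue.append(page)
    return faults
```

For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 this yields 9 faults with 3 frames but 10 with 4, the counter-intuitive behaviour known as Belady's anomaly.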
FIFO Anomaly
• Belady’s (or FIFO) Anomaly
– Certain page-reference patterns actually cause
more page faults when the number of page frames
allocated to a process is increased
Optimal Algorithm
• Replace the page that will not be used for
the longest period of time
• This algorithm guarantees the lowest
possible page-fault rate for a fixed number
of frames.
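A sketch of OPT (an offline simulation, since it needs the future of the reference string):

```python
def opt_faults(refs, n_frames):
    """Count faults under OPT/MIN: on a fault, evict the resident
    page whose next use lies furthest in the future."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            def next_use(p):
                # distance to p's next reference; never used again = infinity
                future = refs[i + 1:]
                return future.index(p) if p in future else float('inf')
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults
```

On 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 frames this gives 7 faults, the minimum any policy can achieve; real systems can only approximate OPT because the future reference string is unknown.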
Least Recently Used (LRU)
Algorithm
• LRU page replacement
– Exploits temporal locality by replacing the page that
has spent the longest time in memory without being
referenced
– Can provide better performance than FIFO, but at the
cost of increased system overhead
– LRU can perform poorly if the least recently used page
is the next page to be referenced, as when a program
loops over slightly more pages than fit in its frames
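A compact LRU simulation (a sketch using an ordered map to track recency, not how real hardware implements it):

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Count faults under LRU: evict the page unreferenced the longest."""
    frames = OrderedDict()       # keys ordered least- to most-recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults
```

On 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 frames this gives 10 faults; true LRU needs the equivalent of a timestamp or stack update on every reference, which is the overhead the slide mentions.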
LRU Approximation Algorithms
- Second chance algorithm
• When a page has been selected for replacement, its
reference bit is inspected
• If the bit is 0 (not referenced), the page is replaced
• If the bit is 1 (referenced), the page is given a second
chance and the algorithm moves on to the next FIFO page
• When a page gets a second chance, its reference
bit is cleared and its arrival time is reset to the
current time.
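The rules above can be sketched as follows. One assumption to flag: this sketch sets the reference bit on load as well as on a hit, which is one common convention (texts differ on this detail).

```python
def second_chance_faults(refs, n_frames):
    """Second-chance replacement: FIFO, but a set reference bit earns a reprieve."""
    order = []       # resident pages in FIFO order, oldest first
    ref_bit = {}
    faults = 0
    for page in refs:
        if page in ref_bit:
            ref_bit[page] = 1          # hit: hardware would set the bit
            continue
        faults += 1
        if len(order) == n_frames:
            while True:
                head = order.pop(0)
                if ref_bit.pop(head) == 0:
                    break              # bit clear: evict this page
                ref_bit[head] = 0      # second chance: clear the bit and
                order.append(head)     # reset its position to the tail
        order.append(page)
        ref_bit[page] = 1              # assumed convention: bit set on load
    return faults
```

Moving a reprieved page to the tail of the queue is what "resets its arrival time": it will only be considered again after every other resident page.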
Counting Algorithms
• Keep a counter of the number of references
that have been made to each page
• LFU Algorithm: replaces the page with the
smallest count
• MFU Algorithm: based on the argument that
the page with the smallest count was
probably just brought in and has yet to be
used
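A minimal LFU sketch, assuming reference counts persist across evictions (one of several textbook variants; others reset or age the counts):

```python
from collections import Counter

def lfu_faults(refs, n_frames):
    """LFU: on a fault, evict the resident page with the smallest
    reference count (ties broken arbitrarily here)."""
    counts = Counter()    # assumption: counts survive eviction in this variant
    frames = set()
    faults = 0
    for page in refs:
        counts[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            victim = min(frames, key=lambda p: counts[p])
            frames.remove(victim)
        frames.add(page)
    return faults
```

An MFU variant would simply replace `min` with `max` in the victim selection.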
Working Set Model
• For a program to run efficiently
– The system must maintain that program’s
favored subset of pages in main memory
• Otherwise
– The system might experience excessive paging
activity, called thrashing, causing low processor
utilization as the program repeatedly requests
pages from secondary storage
Working Set Model
• The process’s working set of pages, W(t, w), is the set
of pages referenced by the process during the process-
time interval t – w to t.
• The size of the process’s working set increases
asymptotically to the process’s program size as its
working set window increases
• As a process transitions between working sets, the
system temporarily maintains in memory pages that
are no longer in the process’s current working set
– Goal of working set memory management is to reduce this
misallocation
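The definition of W(t, w) translates directly into code. The indexing convention below (that `refs[i]` is the page referenced at process time i) is an illustrative assumption, not from the slides:

```python
def working_set(refs, t, w):
    """W(t, w): the distinct pages referenced during the process-time
    interval t - w to t, i.e. the last w references up to time t."""
    return set(refs[max(0, t - w + 1):t + 1])
```

Widening the window w can only grow the set, which is the asymptotic-growth observation above: as w increases, W(t, w) approaches the set of all pages the program touches.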
Thrashing
• This high paging activity is called thrashing. A
process is thrashing if it spends more time
paging than executing.
• If a process does not have “enough” pages, the
page-fault rate is very high. This leads to
– low CPU utilization
– the operating system thinks it needs to increase
the degree of multiprogramming
– another process is added to the system
END