Virtual memory allows processes to execute without being completely loaded into physical memory. It abstracts main memory into an extremely large, uniform array of storage and lets processes share files and memory easily. With virtual memory, programs are no longer constrained by the amount of physical memory, more programs can run simultaneously, and less I/O is needed to load or swap programs. Pages are loaded into memory on demand (demand paging) so that unused pages are never read in. When a process accesses a page marked invalid, a page-fault trap occurs; if the reference is legal, the page is read from disk into a free frame and the instruction is restarted. When no frame is free, a page-replacement algorithm such as FIFO (replace the oldest page), Optimal (replace the page that will not be used for the longest time), or LRU (replace the page that has gone unused the longest) chooses a victim frame.
Introduction
Virtual Memory
Allows the execution of processes that are not completely in memory
Abstracts main memory into an extremely large, uniform array of storage
Allows processes to share files easily and to implement shared memory
Benefits:
A program would no longer be constrained by the amount of physical memory that is available
More programs could be run at the same time
Less I/O would be needed to load or swap each user program into memory
Virtual Memory
Involves the separation of logical memory as perceived by users from physical memory
Virtual address space
The logical (or virtual) view of how a process is stored in memory
An address space with a hole between the heap and the stack is known as a sparse address space
Allows files and memory to be shared by two or more processes through page sharing
System libraries can be shared by several processes
Enables processes to share memory
Allows pages to be shared during process creation
(Figure: a process's virtual address space, with data and heap growing upward from address 0 and the stack growing downward from max, leaving a hole between them)
Demand Paging
What is the problem with loading the entire program into memory?
Demand paging
Pages are loaded only when they are demanded during program execution
A paging system with swapping
The system uses a lazy swapper
Never swaps a page into memory unless that page will be needed
We use the term pager, rather than swapper
Avoids reading into memory pages that will not be used anyway
Decreases the swap time and the amount of physical memory needed
Concept of Demand Paging
Requires a mechanism to distinguish between the pages that are in memory and the pages that are on the disk
The valid-invalid bit scheme is used
Valid - the associated page is both legal and in memory
Invalid - the page either is not valid (not in the process's logical address space) or is valid but is currently on the disk
(Figure: page table when some pages are not in main memory)
Logical memory (pages 0-5): A, B, C, D, E, F
Page table (frame number, valid-invalid bit): 0 -> 4 v, 1 -> i, 2 -> 6 v, 3 -> i, 4 -> i, 5 -> 9 v
Physical memory: frame 4 holds A, frame 6 holds C, frame 9 holds F
Disk: holds the full set of pages A-F
Concept of Demand Paging
What if pages are marked invalid but the process never accesses those pages?
The process runs correctly; those pages are simply never brought in
If a process accesses memory-resident pages, execution proceeds normally
Page-fault trap
The process tries to access a page that was not brought into memory (access to a page marked invalid)
How to handle a page fault? (see the sketch below)
1. Check an internal table to determine whether the reference is a valid or an invalid memory reference
2. If invalid, terminate the process. If valid but not yet in memory, page it in:
   1. Find a free frame
   2. Read the page into the newly allocated frame
   3. Modify the internal table
   4. Restart the instruction
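As a rough illustration of these steps, here is a minimal sketch in Python; the names (PageTableEntry, backing_store, free_frames) are illustrative, not from the slides, and it assumes a free frame is always available (page replacement is covered later).

class PageFault(Exception):
    pass

class PageTableEntry:
    def __init__(self):
        self.valid = False      # valid-invalid bit
        self.frame = None       # frame number when resident

def access(page_table, backing_store, physical_memory, free_frames, page):
    entry = page_table[page]
    if entry.valid:                                   # page is legal and in memory
        return physical_memory[entry.frame]
    # page-fault trap: the page is on disk, or the reference is illegal
    if page not in backing_store:
        raise PageFault(f"illegal reference to page {page}")   # terminate the process
    frame = free_frames.pop()                         # 1. find a free frame
    physical_memory[frame] = backing_store[page]      # 2. read the page into the frame
    entry.frame, entry.valid = frame, True            # 3. modify the internal table
    return physical_memory[frame]                     # 4. restart the (simulated) access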
Pure Demand Paging
Never bring a page into memory until it is required
Start executing a process with no pages in memory
The process faults for its first page; the page is brought into memory
The process continues to execute
It faults as necessary until all the pages it needs are in memory; it then executes with no more page faults
Hardware support
Page table
Secondary memory: a high-speed disk
Holds those pages that are not present in main memory
Known as the swap device
The section of disk used: the swap space
An important requirement is the ability to restart any instruction after a page fault
Performance of Demand Paging
Effective access time for a demand-paged memory
If no page fault occurs:
effective access time = memory access time
If page faults can occur:
effective access time = (1 - p) x ma + p x page-fault time
p = probability of a page fault
ma = memory access time
page-fault time = time required to service a page fault
It is important to keep the page-fault rate low in a demand-paging system (see the helper and worked examples below)
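As a quick sanity check on such calculations, here is a minimal sketch; the function name and the 200 ns / 8 ms / 0.001 figures are illustrative assumptions, not from the slides.

def effective_access_time(ma, fault_time, p):
    """Effective access time = (1 - p) * ma + p * fault_time (same time units)."""
    return (1 - p) * ma + p * fault_time

# Example: 200 ns memory access, 8 ms (8,000,000 ns) page-fault service time,
# one fault per 1,000 accesses (p = 0.001) -> roughly 8.2 microseconds.
print(effective_access_time(200, 8_000_000, 0.001))   # ~8199.8 ns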
Example: Consider a demand-paging system with a paging disk that has an average access and transfer time of 20 ms. Addresses are translated through a page table in main memory, with an access time of 1 us per memory access, so each memory reference through the page table takes two accesses. To improve this, an associative memory (TLB) has been added that reduces the cost to one memory reference when the page-table entry is in the associative memory.
Assume 80% of accesses hit the associative memory; of the remaining 20%, one tenth (2% of all accesses) cause a page fault. What is the effective memory access time?
EAT = (0.80 x 1 us) + (0.18 x 2 us) + (0.02 x 20,002 us)
    = 0.8 us + 0.36 us + 400.04 us
    = 401.2 us
(The 20,002 us term is the 20 ms fault-service time plus the two 1-us memory accesses.)
Example: In a demand-paging system, it takes 100 ms to service a page fault and 300 ms to replace a dirty page. Memory access time is 1 ms. The probability of a page fault is P, and in case of a page fault the probability of the page being dirty is also P. The observed average access time is 3 ms. Calculate P.
Solution: EMAT = P(P x 300 + (1 - P) x 100) + (1 - P) x 1 = 3
which simplifies to 200P^2 + 99P - 2 = 0, giving P ≈ 0.0194
Example: Consider a page-fault service time of 10 ms in a computer whose average memory access time is 20 ns. If one page fault is generated for every 10^6 memory accesses, calculate the effective memory access time (EMAT).
Solution: EMAT = (1 - 10^-6) x 20 ns + 10^-6 x 10 ms
              ≈ 20 ns + 10 ns = 30 ns
Need for Page Replacement
Over-allocation of memory
Done to increase the degree of multiprogramming
On a page fault, the system may find no free frame in which to page in the desired page
Possible solutions:
Terminate the process
Swap out a process and free all its frames
Page replacement
Page Replacement
"If no frame is free, find one that is not currently being used and free it"
Page-fault service routine with page replacement
If no frames are free, two page transfers are needed (one page out, one page in)
This doubles the page-fault service time and increases the effective access time
Use of a modify bit (or dirty bit) associated with each page
When a page is selected for replacement:
If the bit is set, the page has been modified since it was read into memory, so it must be written back to the disk
If the bit is not set, the page has not been modified since it was read in, so its disk copy is still valid and no write-back is needed
Page and Frame Replacement Algorithms
Frame-allocation algorithm
Decides how many frames to allocate to each process
Page-replacement algorithm
Selects the frames that are to be replaced
Several algorithms exist
Evaluation is carried out by running an algorithm on a reference string and computing the number of page faults
We consider only page numbers rather than entire addresses
If we have a reference to a page p, then any immediately following references to page p will never cause a page fault
Page and Frame Replacement Algorithms
E.g., for a process we trace the following sequence of addresses:
0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101, 0610, 0102, 0103, 0104, 0101, 0609, 0102, 0105
At 100 bytes per page, this sequence reduces to the reference string
1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1
To determine the number of page faults for a particular reference string and page-replacement algorithm, we must also know the number of page frames available (see the snippet below)
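A minimal sketch of this reduction; the helper name is mine, and the 100-byte page size follows the example above.

def to_reference_string(addresses, page_size=100):
    pages = []
    for addr in addresses:
        page = addr // page_size
        if not pages or pages[-1] != page:   # drop immediately repeated pages
            pages.append(page)
    return pages

trace = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103, 104,
         101, 610, 102, 103, 104, 101, 609, 102, 105]
print(to_reference_string(trace))   # [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]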
FIFO Page Replacement Algorithm
Associates with each page the time when that page was brought into memory
"The oldest page is chosen for replacement"
We have
A reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Three frames, initially empty
Frame contents after each fault (the oldest page is replaced):
7 | 7 0 | 7 0 1 | 2 0 1 | 2 3 1 | 2 3 0 | 4 3 0 | 4 2 0 | 4 2 3 | 0 2 3 | 0 1 3 | 0 1 2 | 7 1 2 | 7 0 2 | 7 0 1
Total page faults = 15 (a simulator sketch follows)
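A minimal FIFO simulator sketch that reproduces the count above; the helper name is mine, not from the slides.

from collections import deque

def fifo_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                        # hit: no fault
        faults += 1
        if len(frames) == num_frames:       # no free frame: evict the oldest page
            frames.discard(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))   # 15, matching the table above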
Belady's Anomaly
The FIFO page-replacement algorithm is easy to understand
Its performance is not always good
A possible problem with FIFO page replacement:
As we increase the number of page frames, the number of page faults may unexpectedly increase: Belady's anomaly
(The classic example is the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, which causes 9 faults with three frames but 10 faults with four.)
Optimal Page Replacement
Has the lowest page-fault rate of all algorithms
Never suffers from Belady's anomaly
Such an algorithm does exist and has been called OPT or MIN
"Replace the page that will not be used for the longest period of time"
Three page frames (initially empty) and the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Frame contents after each fault (the page not needed for the longest time is replaced):
7 | 7 0 | 7 0 1 | 2 0 1 | 2 0 3 | 2 4 3 | 2 0 3 | 2 0 1 | 7 0 1
Total page faults = 9
Optimal Page Replacement
Guarantees the lowest possible page-fault rate for a fixed number of frames
Difficult to implement: requires future knowledge of the reference string
Used mainly for comparison studies (see the sketch below)
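A minimal OPT/MIN simulator sketch reproducing the count above; the helper name is mine, not from the slides.

def opt_faults(reference_string, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use lies farthest in the future
        # (a page never used again is evicted first).
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(ref, 3))   # 9, as in the table above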
LRU Page Replacement
The FIFO algorithm uses the time when a page was brought into memory
The OPT algorithm uses the time when a page is to be used
LRU uses the recent past as an approximation of the near future
Associates with each page the time of that page's last use
"Replace the page that has not been used for the longest period of time"
It is the optimal page-replacement algorithm looking backward in time
Frame contents after each fault (the least recently used page is replaced):
7 | 7 0 | 7 0 1 | 2 0 1 | 2 0 3 | 4 0 3 | 4 0 2 | 4 3 2 | 0 3 2 | 1 3 2 | 1 0 2 | 1 0 7
Total page faults = 12
LRU Page Replacement
The major problem is how to implement LRU replacement:
How do we determine an order for the frames defined by the time of last use?
Solutions (see the sketch below):
Counters
Associate a time-of-use field with each page-table entry
Add a logical clock or counter that is updated on every memory reference
Replace the page with the smallest time value
Stack
Keep a stack of page numbers
Whenever a page is referenced, it is removed from the stack and put on the top
LRU does not suffer from Belady's anomaly
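A minimal sketch of the stack idea using an ordered dictionary; this is an illustrative helper, not the hardware mechanism the slides describe.

from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    stack, faults = OrderedDict(), 0       # most recently used page sits at the end
    for page in reference_string:
        if page in stack:
            stack.move_to_end(page)        # referenced: move to the top of the stack
            continue
        faults += 1
        if len(stack) == num_frames:
            stack.popitem(last=False)      # evict the least recently used page
        stack[page] = True
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))   # 12, as in the table above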
LRU Page Replacement
OPT and LRU belong to the class of stack algorithms
Stack algorithms can never exhibit Belady's anomaly
Defining property:
The set of pages in memory for n frames is always a subset of the set of pages that would be in memory with n + 1 frames
LRU Approximation Page Replacement
Implementing true LRU page replacement requires substantial hardware support
For systems that do not provide such support:
Either other page-replacement algorithms must be used
Or the hardware provides help in the form of a reference bit
Set for a page by the hardware whenever the page is referenced
A reference bit is associated with each entry in the page table
By examining the reference bits we can determine which pages have been used and which have not
We do not know the order of use
Additional-Reference-Bits Algorithm
Additional ordering information can be gained by recording the reference bits at regular intervals
At each timer interrupt the OS:
Shifts the reference bit for each page into the high-order bit of its 8-bit byte
Shifts the other bits right by 1 bit
Discards the low-order bit
Each 8-bit byte contains the history of page use for the last eight time periods (see the sketch below)
The history values are not guaranteed to be unique
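A minimal sketch of the per-interval history update; page_history and reference_bit are illustrative names.

def update_history(page_history, reference_bit):
    """Shift the reference bit into the high-order bit; drop the low-order bit."""
    return ((page_history >> 1) | (reference_bit << 7)) & 0xFF

# A page used in each of the last eight intervals has history 0b11111111 (255);
# the page with the smallest history value is the LRU-approximation victim.
print(bin(update_history(0b01000100, 1)))   # 0b10100010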
Second-Chance Page Replacement
Derived from the FIFO page-replacement algorithm
Second-chance algorithm
When a page is selected for replacement, check its reference bit
If 0, replace this page
If 1, give the page a second chance and move on to select the next FIFO page
When a page gets a second chance:
Its reference bit is cleared and its arrival time is reset to the current time
Clock algorithm
Implemented using a circular queue (see the sketch below)
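A minimal sketch of the clock scan; the frame list, reference bits, and hand index are illustrative names.

def pick_victim(pages, ref_bits, hand):
    """Advance the clock hand until a page with reference bit 0 is found."""
    while True:
        if ref_bits[hand] == 0:            # bit 0: replace this page
            victim = hand
            hand = (hand + 1) % len(pages)
            return victim, hand
        ref_bits[hand] = 0                 # bit 1: give a second chance, clear the bit
        hand = (hand + 1) % len(pages)

pages    = ['A', 'B', 'C', 'D']
ref_bits = [1, 1, 0, 1]
victim, hand = pick_victim(pages, ref_bits, 0)
print(pages[victim], hand)   # C 3  (A and B lose their reference bits on the way)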
Enhanced Second-Chance Page Replacement
Considers the reference bit and the modify bit as an ordered pair
(0, 0) neither recently used nor modified - best page to replace
(0, 1) not recently used but modified - not quite as good, since the page must be written out before replacement
(1, 0) recently used but clean - probably will be used again soon
(1, 1) recently used and modified - probably will be used again soon, and the page will need to be written out to disk before it can be replaced
Requires up to three passes over the circular queue:
1. Cycle through the buffer looking for (0, 0). If one is found, use that page.
2. Cycle through the buffer looking for (0, 1), clearing the reference bit of each page bypassed.
3. If the above steps fail, all reference bits are now zero, so repeating steps 1 and 2 is guaranteed to find a page for replacement.
Counting-Based Page Replacement
Keep a counter of the number of references that have been made to each page
Least Frequently Used (LFU) page replacement
The page with the smallest count is replaced
Most Frequently Used (MFU) page replacement
The page with the largest count is replaced (on the argument that the page with the smallest count was probably just brought in and has yet to be used)
Allocation of Frames
How do we allocate the fixed amount of free memory among the various processes?
Allocate at least a minimum number of frames to each process
The fewer frames a process has, the higher its page-fault rate
Allocation algorithms (see the example below)
Equal allocation: each of n processes gets m/n of the m available frames
Proportional allocation: let s_i be the size of the virtual memory of process P_i and S = sum of all s_i; then the number of frames allocated to P_i is a_i = (s_i / S) x m
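A minimal sketch of proportional allocation; the 10-page and 127-page processes sharing 62 frames are an assumed example, and rounding is just one possible policy.

def proportional_allocation(sizes, m):
    S = sum(sizes)
    return [round(s / S * m) for s in sizes]   # a_i = (s_i / S) * m, rounded

# Two processes of 10 and 127 pages sharing 62 free frames:
print(proportional_allocation([10, 127], 62))  # [5, 57] with rounding; about 4 and 57 if truncated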
Global and Local Allocation
Page replacement
Local replacement
Requires that each process select only from its own set of allocated frames
The number of frames allocated to a process does not change
Might hinder a process by denying it access to less-used frames of other processes
Global replacement
Allows a process to select a replacement frame from the set of all frames
The number of frames allocated to a process can increase
A process cannot control its own page-fault rate
Generally results in greater system throughput
Thrashing
What is thrashing?
What happens if a process does not have "enough" frames?
High paging activity - thrashing
A process is thrashing if it is spending more time paging than executing
Why does thrashing occur?
The OS increases the degree of multiprogramming to increase CPU utilization
A global page-replacement algorithm is used
Processes start faulting for pages
Faulting processes must use the paging device to swap pages in and out
The ready queue empties
CPU utilization decreases, so the OS admits even more processes
Thrashing has occurred and system throughput plunges
The page-fault rate increases tremendously
Thrashing
How can we limit the effect of thrashing?
By using a local replacement algorithm (or priority replacement algorithm)
A thrashing process cannot steal frames from another process
But the average service time for a page fault will still increase
The effective access time will increase even for a process that is not thrashing
To prevent thrashing
We must provide a process with as many frames as it needs
How many frames does a process need?
Working-set strategy: based on the locality model of process execution
Working Set Model
Based on the assumption of locality
Examines the most recent Δ page references
Δ = working-set window
The set of pages in the most recent Δ page references is the working set
An approximation of the program's locality
Total demand for frames: D = Σ WSS_i (the sum of the working-set sizes)
If D > m (the number of available frames), thrashing will occur
The OS monitors the working set of each process
If the sum of the working-set sizes exceeds the number of available frames:
The OS selects a process to suspend
Its pages are written out (swapped) and its frames are reallocated to other processes
Example (Δ = 10): for the reference trace
... 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 1 3 2 3 4 4 4 3 4 4 4 ...
WS(t1) = {1, 2, 5, 6, 7} at an early point t1, and WS(t2) = {3, 4} at a later point t2 (see the sketch below)
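A minimal sketch of computing WS(t) over a window of Δ references; the names and the chosen time points are illustrative.

def working_set(reference_string, t, delta):
    """Pages referenced in the last `delta` references up to and including time t."""
    start = max(0, t - delta + 1)
    return set(reference_string[start:t + 1])

trace = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4, 4, 4,
         3, 4, 3, 4, 4, 4, 1, 3, 2, 3, 4, 4, 4, 3, 4, 4, 4]
print(working_set(trace, 9, 10))    # {1, 2, 5, 6, 7} - the slides' WS(t1)
print(working_set(trace, 25, 10))   # {3, 4}          - the slides' WS(t2)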
Working Set Model
Optimizes CPU utilization by preventing thrashing while keeping the degree of multiprogramming as high as possible
The difficulty is keeping track of the working set
The working-set window is a moving window
A page is in the working set if it is referenced anywhere within the working-set window
Keeping Track of the Working Set
Approximate with an interval timer plus reference bits
E.g., Δ = 10,000 references
The timer interrupts after every 5,000 references
Keep 2 history bits in memory for each page
Whenever the timer interrupts, copy each page's reference bit into its history bits and then clear all reference bits to 0
If one of the history bits (or the reference bit) is 1, the page is in the working set
Page-Fault Frequency
A more direct approach than the working-set strategy
Establish an "acceptable" page-fault frequency (PFF) rate
Uses a local replacement policy
If the actual rate is too low, the process loses frames
If the actual rate is too high, the process gains frames (see the sketch below)
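A minimal sketch of the PFF feedback loop; the function name and threshold values are purely illustrative assumptions.

def adjust_frames(allocated_frames, fault_rate, lower=0.01, upper=0.05):
    """Take a frame away if faulting too rarely; grant one if faulting too often."""
    if fault_rate < lower:
        return max(1, allocated_frames - 1)   # rate too low: the process loses a frame
    if fault_rate > upper:
        return allocated_frames + 1           # rate too high: the process gains a frame
    return allocated_frames                   # within the acceptable band

print(adjust_frames(10, 0.002))   # 9
print(adjust_frames(10, 0.08))    # 11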