Virtual Memory Class

This document discusses virtual memory and demand paging. It explains that virtual memory separates logical memory from physical memory to allow programs to access more memory than is physically available. When a process attempts to access a page that is not currently in memory, it causes a page fault. The operating system then selects a page using a page replacement algorithm, loads it into a free frame from disk, and resumes the process. Common page replacement algorithms like FIFO, LRU, and OPT are described along with their advantages and disadvantages.

Uploaded by Viswanathan S

Virtual Memory

prepared and instructed by


Shmuel Wimer
Eng. Faculty, Bar-Ilan University

January 2017 Virtual Memory 1


Virtual Memory (VM) Concept
So far, the entire logical address space has been placed in physical memory.
Requiring that executed instructions be in physical memory is necessary and reasonable.
It is also unfortunate, since it limits the size of a program to the size of physical memory.
In many cases, the entire program is not needed:
• Code handling unusual error conditions.
• Arrays, lists, and tables that are often allocated more memory than actually needed.



Even if the entire program is needed, it may not all be needed at the same time.
Executing a program that is only partially in memory would have many benefits:
• The program is no longer constrained by the available physical memory, simplifying programming.
• Since each program uses less physical memory, more programs can run concurrently, increasing CPU utilization and throughput.
• Less I/O is needed to load or swap user programs into memory, so each user program runs faster.



[Figure: VM that is larger than physical memory. Logical addresses map through the page table either to pages in main memory or to pages on the hard disk; page swapping moves pages between the two.]
Virtual Memory (VM) separates the logical memory as perceived by users from the physical memory.
The virtual address space of a process is the logical view of how the process is stored in memory.
Typically, a process begins at a certain logical address, say 0, and occupies contiguous memory.
Physical memory is organized in frames assigned to the process, which need not be contiguous (the MMU maps logical pages to physical page frames in memory).



The large blank space (hole) between the heap and the
stack is part of the virtual address space.
It will require actual physical pages only if the heap or
stack grows.

[Figure: Virtual address space.]


Virtual address spaces that include holes can be filled as the stack or heap grows, or by dynamically linked libraries during program execution.
VM allows files and memory to be shared by processes through page sharing.
System libraries can be shared by processes through mapping of a shared object into each virtual address space.
Each process considers the libraries part of its virtual address space, but their frames in physical memory are shared by all processes.
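As a rough single-process illustration of this file/memory sharing, Python's mmap module can map a file into the virtual address space, so reads and writes go through memory; real shared libraries are mapped this way by the dynamic loader, with the same physical frames backing every process's mapping. The temporary file here is a hypothetical stand-in for a shared object, not anything from the slides:

```python
import mmap
import os
import tempfile

# A temporary file stands in for a shared object (hypothetical; real
# shared libraries are mapped by the dynamic loader, not by user code).
fd, path = tempfile.mkstemp()
os.write(fd, b"shared page content")
os.close(fd)

with open(path, "r+b") as f:
    # Map the file into this process's virtual address space. Another
    # process mapping the same file would share the same physical frames.
    mem = mmap.mmap(f.fileno(), 0)
    mem[:6] = b"SHARED"     # write through the memory mapping
    mem.flush()
    mem.close()

with open(path, "rb") as f:
    data = f.read()         # the write is visible through the file
print(data)                 # b'SHARED page content'

os.remove(path)
```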



[Figure: Shared library using virtual memory. The logical (virtual) pages of two processes map to the same physical frames.]
Demand Paging

Instead of loading an entire program into physical memory, pages are loaded only as they are needed; this is called demand paging.
Pages that are never accessed are never loaded.
A demand-paging system is similar to a paging system with swapping, where processes reside on disk.
When a process is swapped in, the pager guesses which pages will be used before the process is swapped out again.
The pager brings only those pages into memory, decreasing both the swap time and the amount of physical memory needed.



HW support is needed to distinguish between the pages in memory and the pages on disk; a valid–invalid bit is used.
Valid means the page is both legal and in memory.
Invalid means the page is either not in the logical address space or is currently on disk.
The entry for a page not currently in memory is either marked invalid or contains the disk address of the page.



[Figure: Some pages are not in main memory.]
[Figure: Steps in handling a page fault.]
An attempt to access a page marked invalid causes a page fault.
The paging HW notices at translation time that the invalid bit is set, trapping to the OS on the failure to bring the page into memory.
The procedure for handling a page fault is:
1. Check an internal table (kept with the PCB) for this process to determine whether the reference was a valid or an invalid memory access.
2. If invalid, terminate the process. If valid but the page has not yet been brought in, page it in.



3. Find a free frame (taking one from the free-frame list).
4. Schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read completes, modify the internal table kept with the PCB and the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the trap. The process can now access the page.
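The steps above can be sketched as a toy simulation (the names and structure are illustrative, not from the slides): a page table with valid–invalid bits, a free-frame list, and a fault counter, assuming for simplicity that a free frame is always available.

```python
class DemandPager:
    """Toy demand-paging simulation: pages stay on 'disk' until first access."""
    def __init__(self, num_pages, num_frames):
        self.valid = [False] * num_pages        # valid-invalid bit per page
        self.frame_of = [None] * num_pages      # page table: page -> frame
        self.free_frames = list(range(num_frames))
        self.faults = 0

    def access(self, page):
        if self.valid[page]:
            return self.frame_of[page]          # page already in memory
        # Page fault: trap to the "OS"
        self.faults += 1
        if not self.free_frames:
            raise MemoryError("no free frame; page replacement needed")
        frame = self.free_frames.pop(0)
        # (a real OS would schedule a disk read into `frame` here)
        self.frame_of[page] = frame
        self.valid[page] = True                 # mark the entry valid
        return frame

pager = DemandPager(num_pages=8, num_frames=4)
for p in [0, 3, 0, 5, 3]:
    pager.access(p)
print(pager.faults)   # 3: pages 0, 3 and 5 are each loaded on demand once
```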



Performance of Demand Paging



3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the disk location of the page.
5. Issue a read from the disk to a free frame:
6. Wait in the device queue until the read request is serviced.
7. Wait for the device seek and/or latency time.
8. Begin the transfer of the page to a free frame.
• While waiting, allocate the CPU to some other user.
• Receive the completion interrupt from the I/O subsystem.
• Save the registers and process state for the other user.
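The standard way to quantify this service cost is the effective access time (EAT). The memory-access and fault-service times below are assumed for illustration only, since the slide's actual figures are not in the text:

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory-access time + p * page-fault service time.
    Assumed numbers: 200 ns memory access, 8 ms fault service."""
    return (1 - p) * mem_ns + p * fault_ns

# Even one fault per 1,000 accesses slows memory down by a factor of ~40
# (200 ns -> ~8200 ns), so the page-fault rate p must be kept very low.
print(effective_access_time(0.001))
```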



Page Replacement

[Figure: Need for page replacement.]
The page-fault service routine does the following:
1. Finds the location of the desired page on the disk.
2. Finds a free frame:
• If there is one, it is used.
• If not, the replacement algorithm selects a victim frame; the victim is written to disk, and the page and frame tables are updated.
3. Reads the desired page into the freed frame, updating the page and frame tables.
4. Continues the process from where the page fault occurred.
If no frames are free, two page transfers are required.



[Figure: Page replacement. The victim page in frame f is marked invalid in the page table and paged out; the page that caused the fault is paged into frame f and its entry is marked valid.]
Use of a dirty bit can save writing the victim back to disk; the same applies to read-only pages (e.g., binary code), which never need to be written back.
Page replacement enables an enormous VM for programmers on top of a smaller physical memory.
Without demand paging, all the pages of a process must be in physical memory. Demand paging raises two major problems:
Frame-allocation algorithm: how many frames to allocate to each process?
Page-replacement algorithm: when page replacement is required, which frames should be replaced?



[Figure: Page faults versus number of frames.]
FIFO Page Replacement
For the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
and 3 frames, FIFO yields 15 page faults.

FIFO is easy to implement but may suffer from poor performance.
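A minimal FIFO simulation (illustrative code, not from the slides) reproduces this count:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()            # oldest resident page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue            # hit: FIFO does not reorder on a hit
        faults += 1
        if len(frames) == nframes:
            frames.popleft()    # evict the oldest resident page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15
```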



To illustrate the problems with FIFO, consider the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.

Belady's anomaly: the page-fault rate may increase as the number of allocated frames increases!
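The anomaly can be checked with the same kind of FIFO simulation (illustrative code):

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()            # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, yet more faults!
```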
Optimal Page Replacement
Optimal replacement, called OPT or MIN, has the lowest page-fault rate and does not suffer from Belady's anomaly:
Replace the page that will not be used for the longest period of time.

Unfortunately, MIN requires future knowledge of the reference string, so it is used mainly for comparison studies.
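A sketch of OPT (illustrative; a real OS cannot implement this, since the loop scans the future of the reference string) gives 9 faults on the earlier reference string, versus 15 for FIFO:

```python
def opt_faults(refs, nframes):
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # Evict the resident page whose next use is farthest in the
            # future (pages never used again count as infinitely far).
            def next_use(q):
                try:
                    return refs.index(q, i + 1)
                except ValueError:
                    return float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9
```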
LRU Page Replacement
Replace the page that has not been used for the longest period of time, namely, the least recently used (LRU).
This is MIN looking backward in time.

LRU is popular and considered good, but its HW implementation is expensive.
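A naive LRU simulation (illustrative) on the same reference string as before gives 12 faults, between OPT's 9 and FIFO's 15:

```python
def lru_faults(refs, nframes):
    frames = []                  # ordered from LRU (front) to MRU (back)
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)  # hit: move the page to the MRU position
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)    # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12
```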
Two SW implementations are possible:
A time stamp associated with each page-table entry, updated at every page reference; the page with the smallest time stamp is replaced.
A stack: a referenced page is removed from the stack and put on top, so the MRU page is at the top and the LRU page at the bottom (implemented with a doubly linked list).
Both implementations require HW assistance, since an update occurs at every memory reference.
Using an interrupt on every reference to allow a SW update would slow memory references down by a factor of 10!
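The stack implementation can be sketched with Python's OrderedDict standing in for the doubly linked list (illustrative code, not from the slides); it produces the same 12 faults as the naive simulation:

```python
from collections import OrderedDict

class LRUStack:
    """Stack-style LRU: MRU at the right end, LRU at the left end.
    OrderedDict plays the role of the doubly linked list."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.stack = OrderedDict()
        self.faults = 0

    def access(self, page):
        if page in self.stack:
            self.stack.move_to_end(page)        # pull to the top (MRU)
        else:
            self.faults += 1
            if len(self.stack) == self.nframes:
                self.stack.popitem(last=False)  # evict the bottom (LRU)
            self.stack[page] = True

lru = LRUStack(3)
for p in [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]:
    lru.access(p)
print(lru.faults)   # 12
```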
LRU-Approximation



We can either swap out all pages with the smallest value or use the FIFO method to choose among them.
The number of bits of history included in the shift register can vary and is selected to make the updating as fast as possible.
In the extreme case, the number of bits can be reduced to zero, leaving only the reference bit.
Second-chance page-replacement algorithm: basic FIFO with inspection of the reference bit.
If the bit is 0, replacement takes place; otherwise, the page is given a second chance and the algorithm moves on to the next FIFO page.
January 2017 Virtual Memory 31
Giving a second chance clears the page's reference bit, and its arrival time is reset to the current time.
A page given a second chance will not be replaced until all other pages have been replaced or given second chances; a page used often enough will never be replaced.
Second chance uses a circular queue, where a pointer indicates the next page to be replaced.
The pointer advances until it finds a page with a 0 reference bit, clearing reference bits as it advances.
Once a victim is found, it is replaced, and the new page is inserted into the circular queue at that position.
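The circular-queue (clock) scheme can be sketched as follows (illustrative code; the short trace at the end is a made-up example, not from the slides):

```python
def clock_replace(refs, nframes):
    frames = [None] * nframes   # page resident in each frame
    refbit = [0] * nframes      # reference bit per frame
    hand = 0                    # circular-queue pointer
    faults = 0
    for page in refs:
        if page in frames:
            refbit[frames.index(page)] = 1   # hit: set the reference bit
            continue
        faults += 1
        # Advance the hand, clearing reference bits (second chances),
        # until a frame with a 0 reference bit (or an empty frame) is found.
        while frames[hand] is not None and refbit[hand] == 1:
            refbit[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand] = page
        refbit[hand] = 1
        hand = (hand + 1) % nframes
    return faults, frames

faults, frames = clock_replace([1, 2, 3, 1, 4], 3)
print(faults, frames)   # 4 [4, 2, 3]
```

Note that on the access to 4, all three reference bits are set, so the hand clears them all and comes back around to evict page 1, the oldest.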
[Figure: Second-chance page-replacement algorithm.]
The enhanced second-chance algorithm uses two bits: (reference, modify). Each page is in one of four classes:
(0,0) Neither recently used nor modified: the best page to replace.
(0,1) Not recently used but modified: the page must be written out before being replaced.
(1,0) Recently used but clean: probably will be used again soon.
(1,1) Recently used and modified: probably will be used again soon, and must be written out before being replaced.
The scheme is the same as second chance, replacing the first page encountered in the lowest nonempty class.
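Victim selection over the four classes can be sketched as follows (illustrative; the page list is assumed to already be in circular-scan order starting at the pointer, and the page names are made up):

```python
def enhanced_second_chance_victim(pages):
    """pages: list of (name, reference_bit, modify_bit) in scan order.
    The victim is the first page in the lowest nonempty class,
    ordered (0,0) < (0,1) < (1,0) < (1,1)."""
    # min() is stable, so among equal classes the first page in scan
    # order wins, matching "first page encountered".
    return min(pages, key=lambda p: (p[1], p[2]))[0]

pages = [("A", 1, 1), ("B", 0, 1), ("C", 1, 0), ("D", 0, 1)]
print(enhanced_second_chance_victim(pages))   # B: first page in class (0,1)
```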



Counting-Based Page Replacement
Least frequently used (LFU) replaces the page with the smallest reference count.
Problem: a page may be used heavily initially but never again; because of its large count it remains in memory even though it is no longer needed.
This is solved by a 1-bit right shift of the counts at regular intervals, forming an exponentially decaying average usage count.
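The decaying count can be sketched as follows (illustrative; the page names and counts are made up):

```python
def age(counts):
    """Periodic 1-bit right shift: usage counts decay exponentially."""
    return {page: c >> 1 for page, c in counts.items()}

counts = {"X": 8, "Y": 1}   # X was referenced heavily early on, Y once
for _ in range(3):          # three aging intervals with no new references
    counts = age(counts)
print(counts)   # {'X': 1, 'Y': 0}: X's early burst no longer dominates
```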
Most frequently used (MFU) assumes that the page with the smallest count was just brought in and has yet to be used.
Both are uncommon: they are expensive and approximate OPT badly.



Allocation of Frames
How should the fixed amount of free memory be allocated among processes?
The architecture (ISA) determines the minimum number of frames allocated to a process.
An ISA whose memory instructions use direct addressing requires at least two frames: one for the instruction and one for the memory reference.
An ISA with indirect addressing requires at least three frames, e.g., a load instruction on page 16 may refer to an address on page 0, which indirectly refers to page 23.



Global and Local Allocation

Page-replacement algorithms are classified into:
Global: selecting the replacement frame from the set of all frames, even frames currently allocated to other processes.
This allows a high-priority process to increase its number of frames at the expense of a low-priority process.
Local: selecting only from the process's own allocated frames.
The problem with global replacement is that the set of a process's pages in memory depends not only on its own paging behavior but also on that of other processes.



Thrashing
If the number of frames allocated to a process falls below the minimum required by the ISA, the process must be suspended.
Its remaining pages are paged out and all its allocated frames are freed, resulting in swap-in/swap-out intermediate-level CPU scheduling.
A process that does not have the number of frames needed to support its pages in active use will quickly page-fault, replacing pages that will be needed again very soon.
This results in extensive paging, called thrashing, where the process spends more time paging than executing.



One cause of thrashing is the OS monitoring CPU utilization: if utilization is too low, the OS increases the degree of multiprogramming.
Global page replacement replaces pages regardless of the process to which they belong.
A process entering a new execution phase needs more frames; it starts faulting and takes frames away from other processes.
As processes wait for the paging device, CPU utilization decreases, causing the CPU scheduler to increase the multiprogramming degree even further.
Eventually, no work gets done, because the processes spend all their time paging.



[Figure: Thrashing. CPU utilization versus degree of multiprogramming.]
