Virtual Memory Management
B. Ramamurthy
Chapter 8
Virtual memory
Consider a typical, large application:
- There are many components that are mutually exclusive. Example: a unique function selected depending on user choice.
- Error routines and exception handlers are very rarely used.
- Most programs exhibit a slowly changing locality of reference. There are two types of locality: spatial and temporal.
Characteristics of Paging and Segmentation
Memory references are dynamically translated into physical addresses at run time
- a process may be swapped in and out of main memory such that it occupies different regions
A process may be broken up into pieces (pages or segments) that do not need to be located contiguously in main memory
Hence: all pieces of a process do not need to be loaded in main memory during execution
- computation may proceed for some time if the next instruction to be fetched (or the next data to be accessed) is in a piece located in main memory
Process Execution
The OS brings into main memory only a few pieces of the program (including its starting point)
Each page/segment table entry has a present bit that is set only if the corresponding piece is in main memory
The resident set is the portion of the process that is in main memory
An interrupt (memory fault) is generated when the memory reference is on a piece not present in main memory
Process Execution (cont.)
The OS places the process in the Blocked state
The OS issues a disk I/O read request to bring the referenced piece into main memory
Another process is dispatched to run while the disk I/O takes place
An interrupt is issued when the disk I/O completes
- this causes the OS to place the affected process in the Ready state
Advantages of Partial Loading
More processes can be maintained in main memory
- only some of the pieces of each process are loaded
- with more processes in main memory, it is more likely that a process will be in the Ready state at any given time
A process can now execute even if it is larger than the main memory size
- it is even possible to use more bits for logical addresses than the bits needed for addressing the physical memory
Virtual Memory: as large as you wish!
- Ex: 16 bits are needed to address a physical memory of 64KB
- let's use a page size of 1KB, so that 10 bits are needed for offsets within a page
- for the page number part of a logical address we may use a number of bits larger than 6, say 22 (a modest value!!); the resulting sizes are worked out below
The memory referenced by a logical address is called virtual memory
- it is maintained on secondary memory (ex: disk)
- pieces are brought into main memory only when needed
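To make the numbers concrete, here is a minimal C sketch of this bit arithmetic; the constants mirror the slide's example, and everything else is illustrative:

    #include <stdio.h>

    int main(void) {
        const unsigned offset_bits = 10;  /* 1KB pages: 2^10 bytes per page        */
        const unsigned vpn_bits    = 22;  /* page-number bits in a logical address */
        const unsigned phys_bits   = 16;  /* 64KB of physical memory               */

        unsigned long long virt = 1ULL << (offset_bits + vpn_bits);  /* 4GB  */
        unsigned long long phys = 1ULL << phys_bits;                 /* 64KB */

        printf("virtual address space: %llu bytes\n", virt);
        printf("physical memory:       %llu bytes\n", phys);
        printf("virtual/physical ratio: %llu\n", virt / phys);       /* 65536 */
        return 0;
    }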
Virtual Memory (cont.)
- For better performance, the file system is often bypassed and virtual memory is stored in a special area of the disk called the swap space
- larger blocks are used, and file lookups and indirect allocation methods are not used
By contrast, physical memory is the memory referenced by a physical address
- it is located in DRAM
The translation from logical address to physical address is done by indexing the appropriate page/segment table, with the help of memory management hardware
Possibility of thrashing
To accommodate as many processes as possible, only a few pieces of each process are maintained in main memory
But main memory may be full: when the OS brings one piece in, it must swap one piece out
The OS must not swap out a piece of a process just before that piece is needed
If it does this too often, this leads to thrashing:
- the processor spends most of its time swapping pieces rather than executing user instructions
Locality
Temporal locality: addresses that are referenced at some time Ts will be accessed in the near future (Ts + delta_time) with high probability. Example: execution in a loop.
Spatial locality: items whose addresses are near one another tend to be referenced close together in time. Example: accessing array elements.
How can we exploit these characteristics of programs? Keep only the current locality in main memory; there is no need to keep the entire program in main memory. A loop example is sketched below.
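As a concrete (hypothetical) C illustration of both kinds of locality: the loop code is re-fetched on every iteration (temporal), and consecutive array elements occupy adjacent addresses (spatial):

    #include <stdio.h>

    int main(void) {
        static int a[1024];
        long sum = 0;
        /* Temporal locality: the same few loop instructions run repeatedly.
           Spatial locality: a[i] and a[i+1] lie at adjacent addresses,
           usually within the same page. */
        for (int i = 0; i < 1024; i++)
            a[i] = i;
        for (int i = 0; i < 1024; i++)
            sum += a[i];
        printf("sum = %ld\n", sum);   /* 523776 */
        return 0;
    }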
Locality and Virtual Memory
Principle of locality of references: memory references within a process tend to cluster
Hence: only a few pieces of a process will be needed over a short period of time
Possible to make intelligent guesses about which pieces will be needed in the future
This suggests that virtual memory may work efficiently (i.e., thrashing should not occur too often)
Space and Time
[Figure: the memory hierarchy (CPU cache, main memory, secondary storage). Storage capacity and access time increase down the hierarchy, while cost/byte decreases; the desirable direction is large, fast and cheap.]
Demand paging
Main memory (the physical address space) as well as each user address space (virtual address space) is logically partitioned into equal chunks known as pages. Main memory pages (sometimes known as frames) and virtual memory pages are of the same size.
A virtual address (VA) is viewed as a pair (virtual page number, offset within the page). Example: consider a virtual space of 16K, with a 2K page size, and an address 3045. What are the virtual page number and offset corresponding to this VA?
Virtual Page Number and Offset
3045 / 2048 = 1
3045 % 2048 = 3045 - 2048 = 997
VP# = 1
Offset within page = 997
Page size is always a power of 2. Why? (See the sketch below.)
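A small C sketch of this calculation. Because the page size is a power of 2, the divide and modulo reduce to a shift and a mask, which hardware can do trivially; that is the answer to the question above:

    #include <stdio.h>

    #define PAGE_SIZE   2048u            /* 2K pages            */
    #define OFFSET_BITS 11               /* log2(2048)          */
    #define OFFSET_MASK (PAGE_SIZE - 1)  /* low 11 bits all set */

    int main(void) {
        unsigned va = 3045;

        /* The arithmetic version of the slide's computation... */
        printf("VP# = %u, offset = %u\n", va / PAGE_SIZE, va % PAGE_SIZE);

        /* ...and the equivalent shift/mask that the hardware performs,
           valid only because PAGE_SIZE is a power of 2. */
        printf("VP# = %u, offset = %u\n", va >> OFFSET_BITS, va & OFFSET_MASK);
        return 0;
    }

Both lines print VP# = 1 and offset = 997, matching the division above.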
Page Size Criteria
Consider the binary value of the address 3045:
1011 1110 0101
For a 16K address space the address will be 14 bits. Rewrite:
00 1011 1110 0101
A 2K page gives an offset range of 0 - 2047 (11 bits), so the low 11 bits are the offset within the page and the top 3 bits are the page number:
Page# = 001, offset within page = 011 1110 0101
Demand paging (contd.)
There is only one physical address space, but as many virtual address spaces as there are processes in the system. At any time, physical memory may contain pages from many process address spaces.
Pages are brought into main memory when needed and "rolled out" according to a page replacement policy.
Consider an 8K main (physical) memory and three virtual address spaces of 2K, 3K and 4K each, with a page size of 1K. The status of the memory mapping at some time is as shown.
Demand Paging (contd.)
[Figure: an 8-frame main memory (frames 0-7, the Physical Address Space, PAS) holding pages from three Logical Address Spaces, LAS 0, LAS 1 and LAS 2; part of each LAS is executable code space.]
Issues in demand paging
How do we keep track of which logical page goes where in main memory? More specifically, what data structures are needed?
- A page table, one per logical address space.
How do we translate a logical address into a physical address, and when?
- An address translation algorithm is applied every time a memory reference is needed (sketched below).
How do we avoid repeated translations?
- After all, most programs exhibit good locality: cache recent translations.
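A minimal sketch in C of the page table data structure and translation algorithm; the entry layout, sizes and names here are illustrative assumptions, not from the slides:

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SIZE   1024u
    #define OFFSET_BITS 10
    #define NUM_PAGES   8            /* pages in this tiny logical address space */

    struct pte {                     /* one page table entry */
        bool     present;            /* is the page in main memory? */
        unsigned frame;              /* frame number, valid only if present */
    };

    /* Translate a virtual address; returns true on success, false on a page fault. */
    static bool translate(const struct pte table[], unsigned va, unsigned *pa) {
        unsigned vpn    = va >> OFFSET_BITS;
        unsigned offset = va & (PAGE_SIZE - 1);
        if (vpn >= NUM_PAGES || !table[vpn].present)
            return false;            /* page fault: the OS must load the page */
        *pa = (table[vpn].frame << OFFSET_BITS) | offset;
        return true;
    }

    int main(void) {
        struct pte table[NUM_PAGES] = { [1] = { true, 5 } };  /* page 1 -> frame 5 */
        unsigned pa;
        if (translate(table, 1500, &pa))          /* page 1, offset 476 */
            printf("VA 1500 -> PA %u\n", pa);     /* 5*1024 + 476 = 5596 */
        else
            printf("page fault\n");
        return 0;
    }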
Issues in demand paging (contd.)
What if main memory is full and your process demands a new page? What is the policy for page replacement? LRU, MRU, FIFO, random?
Do we need to write back every page that leaves main memory? No, only the ones that have been modified. How do we keep track of this and other memory management information? In the page table, as special bits.
Support Needed for Virtual Memory
Memory management hardware must support paging and/or segmentation
The OS must be able to manage the movement of pages and/or segments between secondary memory and main memory
We will first discuss the hardware aspects, then the algorithms used by the OS
Paging
Each page table entry contains a present bit to indicate whether the page is in main memory or not.
- If it is in main memory, the entry contains the frame number of the corresponding page in main memory
- If it is not in main memory, the entry may contain the address of that page on disk, or the page number may be used to index another table (often in the PCB) to obtain the address of that page on disk
Typically, each process has its own page table
Paging
A modified bit indicates if the page has been altered since it was last loaded into main memory
- If no change has been made, the page does not have to be written to the disk when it needs to be swapped out
Other control bits may be present if protection is managed at the page level
- a read-only/read-write bit
- a protection level bit: kernel page or user page (more bits are used when the processor supports more than 2 protection levels); one possible bit packing is sketched below
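One plausible packing of these control bits into a 32-bit page table entry, as a C sketch; real layouts are processor-specific, and the bit positions here are assumptions:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 32-bit PTE layout: bit 0 = present, bit 1 = modified,
       bit 2 = read-only, bit 3 = kernel page, bits 12..31 = frame number. */
    #define PTE_PRESENT  (1u << 0)
    #define PTE_MODIFIED (1u << 1)
    #define PTE_RDONLY   (1u << 2)
    #define PTE_KERNEL   (1u << 3)
    #define PTE_FRAME(e) ((e) >> 12)

    /* A page needs writing back to disk on eviction only if it was modified. */
    static int needs_writeback(uint32_t pte) {
        return (pte & PTE_PRESENT) && (pte & PTE_MODIFIED);
    }

    int main(void) {
        uint32_t pte = (7u << 12) | PTE_PRESENT | PTE_MODIFIED;  /* frame 7, dirty */
        printf("frame %u, write back on eviction: %s\n",
               PTE_FRAME(pte), needs_writeback(pte) ? "yes" : "no");
        return 0;
    }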
Page Table Structure
Page tables are variable in length (depending on process size)
- so they must be kept in main memory instead of registers
A single register holds the starting physical address of the page table of the currently running process
Address Translation in a Paging System
Sharing Pages
If we share the same code among different users, it is sufficient to keep only one copy in main memory
Shared code must be reentrant (i.e., non-self-modifying) so that 2 or more processes can execute the same code
If we use paging, each sharing process will have a page table whose entries point to the same frames: only one copy is in main memory
But each user needs to have its own private data pages
Sharing Pages: a text editor
Translation Lookaside Buffer
Because the page table is in main memory, each virtual memory reference causes at least two physical memory accesses
- one to fetch the page table entry
- one to fetch the data
To overcome this problem, a special cache is set up for page table entries
- called the TLB (Translation Lookaside Buffer)
- contains the page table entries that have been most recently used
- works similarly to a main memory cache
Translation Lookaside Buffer
Given a logical address, the processor examines the TLB
If the page table entry is present (a hit), the frame number is retrieved and the real (physical) address is formed
If the page table entry is not found in the TLB (a miss), the page number is used to index the process page table
- if the present bit is set, then the corresponding frame is accessed
- if not, a page fault is issued to bring the referenced page into main memory
The TLB is updated to include the new page entry (the lookup logic is sketched below)
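A C sketch of this lookup sequence; a real TLB checks all entries in parallel in hardware, so the loop below only models the logic, and all names and sizes are illustrative:

    #include <stdbool.h>
    #include <stdio.h>

    #define TLB_ENTRIES 4

    struct tlb_entry {
        bool     valid;
        unsigned vpn;     /* tag: virtual page number */
        unsigned frame;   /* cached translation       */
    };

    /* Model of the TLB probe; hardware compares all entries simultaneously. */
    static bool tlb_lookup(const struct tlb_entry tlb[], unsigned vpn, unsigned *frame) {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *frame = tlb[i].frame;   /* hit */
                return true;
            }
        }
        return false;                    /* miss: walk the page table instead */
    }

    int main(void) {
        struct tlb_entry tlb[TLB_ENTRIES] = { { true, 3, 9 } };
        unsigned frame;
        if (tlb_lookup(tlb, 3, &frame))
            printf("TLB hit: page 3 -> frame %u\n", frame);
        else
            printf("TLB miss: index the page table\n");
        return 0;
    }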
Use of a Translation Lookaside Buffer
TLB: further comments
The TLB uses associative mapping hardware to interrogate all TLB entries simultaneously to find a match on the page number
The TLB must be flushed each time a new process enters the Running state
The CPU uses two levels of cache on each virtual memory reference
- first the TLB, to convert the logical address to the physical address
- once the physical address is formed, the CPU then looks in the cache for the referenced word
Page Tables and Virtual Memory
Most computer systems support a very large virtual address space
- 32 to 64 bits are used for logical addresses
- If (only) 32 bits are used with 4KB pages, a page table may have 2^20 entries (the arithmetic is checked below)
The entire page table may take up too much main memory. Hence, page tables are often themselves stored in virtual memory and subject to paging
- When a process is running, part of its page table must be in main memory (including the page table entry of the currently executing page)
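The arithmetic behind the 2^20 figure, as a one-line C check; the 4-byte entry size is an assumption:

    #include <stdio.h>

    int main(void) {
        unsigned long long entries = (1ULL << 32) / (1ULL << 12);  /* 4GB / 4KB pages */
        printf("%llu entries, ~%llu MB per page table at 4 bytes/entry\n",
               entries, (entries * 4) >> 20);   /* 1048576 entries, ~4 MB */
        return 0;
    }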
Inverted Page Table
Another solution (PowerPC, IBM RISC System/6000) to the problem of maintaining large page tables is to use an Inverted Page Table (IPT)
We generally have only one IPT for the whole system
There is only one IPT entry per physical frame (rather than one per virtual page)
- this greatly reduces the amount of memory needed for page tables
The 1st entry of the IPT is for frame #1 ... the nth entry of the IPT is for frame #n, and each of these entries contains the virtual page number
Thus this table is inverted
Inverted Page Table
The process ID together with the virtual page number can be used to search the IPT to obtain the frame #
For better performance, hashing is used to obtain a hash table entry which points to an IPT entry
- a page fault occurs if no match is found
- chaining is used to manage hashing overflow (a chained lookup is sketched below)
(In the figure, d = offset within page.)
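A C sketch of this search, assuming (as in the figure) that the pair (process ID, virtual page number) is hashed to pick a chain of IPT entries; the hash function and table sizes are made up for illustration:

    #include <stdio.h>

    #define NUM_FRAMES 8
    #define HASH_SIZE  8

    struct ipt_entry {            /* one entry per physical frame */
        int pid;                  /* owning process               */
        unsigned vpn;             /* virtual page held here       */
        int next;                 /* next frame on the same hash chain, -1 ends it */
    };

    static struct ipt_entry ipt[NUM_FRAMES];
    static int hash_table[HASH_SIZE];       /* hash value -> first frame on chain */

    static unsigned hash(int pid, unsigned vpn) {
        return (pid * 31u + vpn) % HASH_SIZE;   /* toy hash function */
    }

    /* Returns the frame holding (pid, vpn), or -1, meaning a page fault. */
    static int ipt_lookup(int pid, unsigned vpn) {
        for (int f = hash_table[hash(pid, vpn)]; f != -1; f = ipt[f].next)
            if (ipt[f].pid == pid && ipt[f].vpn == vpn)
                return f;
        return -1;
    }

    int main(void) {
        for (int i = 0; i < HASH_SIZE; i++) hash_table[i] = -1;
        /* Install: process 2, page 5 lives in frame 3. */
        ipt[3] = (struct ipt_entry){ 2, 5, hash_table[hash(2, 5)] };
        hash_table[hash(2, 5)] = 3;

        printf("frame = %d\n", ipt_lookup(2, 5));   /*  3 */
        printf("frame = %d\n", ipt_lookup(2, 6));   /* -1: page fault */
        return 0;
    }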
The Page Size Issue
Page size is defined by the hardware and is always a power of 2, for more efficient logical-to-physical address translation. But exactly which size to use is a difficult question:
- A large page size is good, since with a small page size more pages are required per process
  - more pages per process means larger page tables, and hence a large portion of the page tables resides in virtual memory
- A small page size is good to minimize internal fragmentation
- A large page size is good since disks are designed to efficiently transfer large blocks of data
- Larger page sizes mean fewer pages in main memory; this increases the TLB hit ratio
The Page Size Issue
With a very small page size, each page matches the code that is actually used: faults are low
An increased page size causes each page to contain more code that is not used; page faults rise
Page faults decrease again as we approach point P, where the size of a page equals the size of the entire process
The Page Size Issue
The page fault rate is also determined by the number of frames allocated per process
Page faults drop to a reasonable value when W frames are allocated
They drop to 0 when the number N of frames is such that the process is entirely in memory
The Page Size Issue
Page sizes from 1KB to 4KB are most commonly used
But the issue is nontrivial, so some processors now support multiple page sizes. Ex:
- Pentium supports 2 sizes: 4KB or 4MB
- R4000 supports 7 sizes: 4KB to 16MB
Operating System Software
Memory management software depends on whether the hardware supports paging or segmentation or both
Pure segmentation systems are rare. Segments are usually paged -- the memory management issues are then those of paging
We shall thus concentrate on issues associated with paging
To achieve good performance we need a low page fault rate
Fetch Policy
Determines when a page should be brought into main memory. Two common policies:
- Demand paging only brings pages into main memory when a reference is made to a location on the page (i.e., paging on demand only)
  - there are many page faults when a process first starts, but the rate should decrease as more pages are brought in
- Prepaging brings in more pages than needed
  - locality of reference suggests that it is more efficient to bring in pages that reside contiguously on the disk
  - its efficiency has not been definitely established: the extra pages brought in are "often" not referenced
Placement policy
Determines where in real memory a process piece resides
For pure segmentation systems:
- first-fit, next-fit, ... are possible choices (a real issue)
For paging (and paged segmentation):
- the hardware decides where to place the page: the chosen frame location is irrelevant, since all memory frames are equivalent (not an issue)
Replacement Policy
Deals with the selection of a page in main memory to be replaced when a new page is brought in
This occurs whenever main memory is full (no free frame available)
It occurs often, since the OS tries to bring into main memory as many processes as it can to increase the multiprogramming level
Replacement Policy
Not all pages in main memory can be selected for replacement
Some frames are locked (cannot be paged out):
- much of the kernel is held in locked frames, as well as key control structures and I/O buffers
The OS might decide that the set of pages considered for replacement should be:
- limited to those of the process that has suffered the page fault
- the set of all pages in unlocked frames
Replacement Policy
The decision on the set of pages to be considered for replacement is related to the resident set management strategy:
- how many page frames are to be allocated to each process? We will discuss this later
Whatever the set of pages considered for replacement, the replacement policy deals with algorithms that choose the page within that set
Basic algorithms for the replacement policy
The Optimal policy selects for replacement the page for which the time to the next reference is the longest
- produces the fewest number of page faults
- impossible to implement (we would need to know the future), but serves as a standard against which to compare the other algorithms we shall study:
  - Least recently used (LRU)
  - First-in, first-out (FIFO)
  - Clock
The LRU Policy
Replaces the page that has not been referenced for the longest time
- By the principle of locality, this should be the page least likely to be referenced in the near future
- performs nearly as well as the optimal policy
Example: a process of 5 pages with an OS that fixes the resident set size to 3 (a simulation sketch follows)
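A toy C simulation of LRU with a resident set size of 3, in the spirit of the slide's example; the reference string is made up:

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int frame[FRAMES], last_use[FRAMES];  /* resident pages and last-use times */
        int n = 0, faults = 0;
        int refs[] = { 2, 3, 2, 1, 5, 2, 4, 5, 3, 2 };  /* made-up reference string */
        int nrefs = (int)(sizeof refs / sizeof refs[0]);

        for (int t = 0; t < nrefs; t++) {
            int p = refs[t], hit = -1;
            for (int i = 0; i < n; i++)
                if (frame[i] == p) hit = i;
            if (hit >= 0) {               /* hit: just refresh the timestamp */
                last_use[hit] = t;
                continue;
            }
            faults++;
            if (n < FRAMES) {             /* a free frame is still available */
                frame[n] = p; last_use[n] = t; n++;
            } else {                      /* evict the least recently used page */
                int lru = 0;
                for (int i = 1; i < FRAMES; i++)
                    if (last_use[i] < last_use[lru]) lru = i;
                frame[lru] = p; last_use[lru] = t;
            }
        }
        printf("page faults: %d\n", faults);  /* 7, including the 3 initial fills */
        return 0;
    }

Note that this count includes the initial faults that fill the empty frames; the next slide explains why those are usually excluded when comparing algorithms.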
Note on counting page faults
When main memory is empty, each new page we bring in is the result of a page fault
For the purpose of comparing the different algorithms, we do not count these initial page faults
- because their number is the same for all algorithms
But, in contrast to what is shown in the figures, these initial references really do produce page faults
Implementation of the LRU Policy
Each page could be tagged (in the page table entry) with the time of its last memory reference.
The LRU page is the one with the smallest time value (this must be searched for at each page fault)
This would require expensive hardware and a great deal of overhead.
Consequently, very few computer systems provide sufficient hardware support for a true LRU replacement policy
Other algorithms are used instead
The FIFO Policy
Treats the page frames allocated to a process as a circular buffer
- When the buffer is full, the oldest page is replaced. Hence: first-in, first-out
- This is not necessarily the same as the LRU page
- A frequently used page is often among the oldest, so it will be repeatedly paged out by FIFO
- Simple to implement
- requires only a pointer that circles through the page frames of the process (see the sketch below)
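A C sketch of FIFO's simplicity: a single pointer that circles through the process's frames, as the slide describes (the reference string is the same made-up one as in the LRU sketch):

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int frame[FRAMES] = { -1, -1, -1 };   /* -1 marks an empty frame */
        int next = 0, faults = 0;             /* 'next' circles through the buffer */
        int refs[] = { 2, 3, 2, 1, 5, 2, 4, 5, 3, 2 };
        int nrefs = (int)(sizeof refs / sizeof refs[0]);

        for (int t = 0; t < nrefs; t++) {
            int hit = 0;
            for (int i = 0; i < FRAMES; i++)
                if (frame[i] == refs[t]) hit = 1;
            if (hit) continue;                /* no bookkeeping at all on a hit */
            frame[next] = refs[t];            /* replace the oldest resident page */
            next = (next + 1) % FRAMES;
            faults++;
        }
        printf("page faults: %d\n", faults);  /* 7 with this reference string */
        return 0;
    }

Compare with the LRU sketch earlier: on a hit, FIFO does no bookkeeping at all, which is what makes it so cheap.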
