Virtual to physical address translation
Virtual memory with paging
One page table per process.
A page table entry includes: present bit, frame number, modify bit, and flags for protection and sharing.
Page tables can be huge: one entry per page of the process, and large processes may be larger than main memory.
The page table itself is therefore placed in virtual memory.
Program execution
For a program to execute, the following must be present in main memory:
- the relevant portion of the page table
- the starting instructions of the process
- the relevant data of the process
The page table descriptor is usually kept in a processor register; it holds a pointer to the page table.
General address translation
[Figure: general address translation — the CPU issues a virtual address; the MMU (with cache) translates it to a physical address used to access global memory and return the data.]
Memory management unit
Translates virtual addresses using:
- page tables
- a translation lookaside buffer (TLB)
Page tables: one for kernel addresses, one or more for user-space processes.
A page table entry (PTE) for a user-space process is typically 32 bits and holds the page frame number plus protection, valid, modified, and referenced bits.
Address translation
A virtual address is a logical address: virtual page number + offset.
Translation:
- find the PTE for the virtual page number
- extract the frame number and add the offset
On failure, the MMU raises an exception (page fault):
- bounds error: outside the address range
- validation error: non-resident page
- protection error: access not permitted
Address translation example
[Figure: a 32-bit virtual address is split into a 20-bit virtual page number and a 12-bit page offset. The current page table register locates the process page table; the PTE (control bits M, R plus the frame number) supplies frame X in DRAM, and adding the offset forms the real address.]
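The translation in the figure can be sketched in a few lines. This is a minimal single-level model; the page-table contents are made-up values, not from the slides.

```python
# Single-level address translation: 20-bit VPN + 12-bit offset (4 KB pages).
PAGE_OFFSET_BITS = 12
OFFSET_MASK = (1 << PAGE_OFFSET_BITS) - 1

# Hypothetical page table: virtual page number -> frame number.
page_table = {0x00000: 0x3A, 0x00001: 0x7F}

def translate(vaddr: int) -> int:
    vpn = vaddr >> PAGE_OFFSET_BITS      # top 20 bits: virtual page number
    offset = vaddr & OFFSET_MASK         # low 12 bits: offset in page
    if vpn not in page_table:
        raise KeyError("page fault: non-resident page")  # MMU exception
    frame = page_table[vpn]
    return (frame << PAGE_OFFSET_BITS) | offset          # frame + offset

print(hex(translate(0x00001ABC)))   # frame 0x7F -> 0x7fabc
```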
Two-level page table
Example with two-level page table
Consider a process as described here:
- logical address space is 4 GB (2^32 bytes)
- size of a page is 4 KB (2^12 bytes)
- there are ___ pages in the process
This implies we need ___ page table entries in the process page table.
If each page table entry occupies 4 bytes, then we need a ___-byte page table.
The page table will occupy ___ pages.
The root table will consist of ___ entries, one for each page that holds part of the process page table.
The root table will occupy 2^12 bytes; 4 KB of space will be kept in main memory permanently.
A page access could require two disk accesses.
Example with two-level page table
Consider a process as described here:
- logical address space is 4 GB (2^32 bytes)
- size of a page is 4 KB (2^12 bytes)
- there are 2^20 pages in the process (2^32 / 2^12)
This implies we need 2^20 page table entries in the process page table, one entry per page.
If each page table entry occupies 4 bytes, then we need a 2^22-byte (4 MB) page table.
The page table will occupy 2^22 / 2^12, i.e. 2^10, pages.
The root table will consist of 2^10 entries, one for each page that holds part of the process page table.
The root table will occupy 2^12 bytes; 4 KB of space will be kept in main memory permanently.
A page access could require two disk accesses.
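The arithmetic in this example can be checked directly:

```python
# Two-level page table sizing from the example above.
addr_space = 2**32          # 4 GB logical address space
page_size  = 2**12          # 4 KB pages
pte_size   = 4              # bytes per page table entry

num_pages        = addr_space // page_size        # 2**20 pages in the process
page_table_bytes = num_pages * pte_size           # 2**22 bytes = 4 MB
pt_pages         = page_table_bytes // page_size  # 2**10 pages of PTEs
root_bytes       = pt_pages * pte_size            # 2**12 bytes = 4 KB root table

print(num_pages, page_table_bytes, pt_pages, root_bytes)
```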
The root table is always in main memory; the pages of the process page table are brought into main memory as needed.
Inverted page table
The page table can get very large; an inverted page table is another solution.
An inverted page table has an entry for every frame in main memory and hence is of a fixed size.
This is why it is called an inverted page table: it is indexed by frame, not by page.
A hash function is used to map the page number (and process number) to the frame number.
A PTE holds: page number, process id, valid bit, modify bit, and chain pointer.
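A minimal sketch of the hashed lookup with chain pointers, assuming a toy 8-frame memory; the hash function and entry layout are illustrative, not from any particular architecture.

```python
# Inverted page table sketch: one entry per physical frame, hashed on
# (pid, vpn), with chain pointers to resolve collisions.
NUM_FRAMES = 8

table = [None] * NUM_FRAMES    # frame -> (pid, vpn, next_frame_in_chain)
anchor = [None] * NUM_FRAMES   # hash bucket -> first frame in chain

def h(pid, vpn):
    return (pid * 31 + vpn) % NUM_FRAMES   # toy hash function

def insert(pid, vpn, frame):
    b = h(pid, vpn)
    table[frame] = (pid, vpn, anchor[b])   # chain the old head behind us
    anchor[b] = frame

def lookup(pid, vpn):
    frame = anchor[h(pid, vpn)]
    while frame is not None:
        epid, evpn, nxt = table[frame]
        if (epid, evpn) == (pid, vpn):
            return frame                   # hit: this frame holds the page
        frame = nxt                        # follow the synonym chain
    return None                            # miss: page fault

insert(pid=1, vpn=5, frame=3)
insert(pid=2, vpn=5, frame=6)
print(lookup(1, 5), lookup(2, 5), lookup(1, 6))   # 3 6 None
```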
Inverted page table
[Figure: the paging hardware takes the virtual address (page # + offset) and searches the inverted page table for the entry (V, M, PID, page #) whose <PID, page #> matches; that entry's frame # plus the offset form the physical address of the page frame in memory.]
Inverted page table with hashing
[Figure: the paging hardware hashes <PID, page #> into a hash table that points into the inverted page table; the synonym chain is searched until the entry matching the page # and PID is found, and its frame # plus the offset form the physical address of the page frame in memory.]
Hashing techniques
[Figure: hashing function X mod 8; (b) chained rehashing to resolve collisions.]
Memory management concerns
Problem: paging requires at least two memory accesses per memory reference:
- one to fetch the page table entry
- one to fetch the data
Solution: the translation lookaside buffer (TLB), a high-speed hardware associative cache of page table entries that caches the address translations themselves.
TLB details
An associative cache of address translations.
Entries may contain a tag identifying the process as well as the virtual address.
Why is this important? (Without the process tag, the TLB would have to be flushed on every context switch.)
The MMU typically manages the TLB.
More TLB details
Contains the page table entries that have been most recently used, and functions the same way as a memory cache.
Given a virtual address, the processor examines the TLB:
- if present (TLB hit), the frame number is retrieved and the real address is formed; no extra memory access
- if not found (TLB miss), the page number is used to index the process page table; this costs a memory access, and the TLB is updated to include the new PTE
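The hit/miss flow above can be sketched with a small dictionary standing in for the associative TLB; the page-table contents are illustrative.

```python
# TLB-then-page-table lookup flow, with hit/miss accounting.
page_table = {7: 42, 8: 43}     # VPN -> frame, assumed resident
tlb = {}                        # cached translations
stats = {"hit": 0, "miss": 0}

def translate(vpn):
    if vpn in tlb:                      # TLB hit: no page-table access
        stats["hit"] += 1
        return tlb[vpn]
    stats["miss"] += 1                  # TLB miss: walk the page table
    frame = page_table[vpn]             # costs one extra memory access
    tlb[vpn] = frame                    # update TLB with the new PTE
    return frame

for vpn in [7, 7, 8, 7]:
    translate(vpn)
print(stats)   # {'hit': 2, 'miss': 2}
```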
Address translation with TLB
[Figure: the CPU sends the virtual address to the MMU, which consults the TLB first and falls back to the page tables (located via the page table pointer); the resulting physical address goes to the cache and memory.]
Associative cache
[Figure: (a) direct page lookup via the page table; (b) associative page lookup in the TLB.]
Typical TLB use
TLB use with memory cache
Access time example with TLB
- TLB hit: 10 ns (TLB) + 100 ns (data) = 110 ns
- TLB miss: 10 ns (TLB) + 100 ns (page table) + 100 ns (data) = 210 ns
- Average access time = 0.90 × 110 ns + 0.10 × 210 ns = 120 ns
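The effective-access-time formula from the slide, computed directly:

```python
# Effective access time with a TLB (numbers from the slide).
tlb_time, mem_time, hit_rate = 10, 100, 0.90   # ns, ns, TLB hit rate

hit_cost  = tlb_time + mem_time                # 110 ns: TLB + data
miss_cost = tlb_time + mem_time + mem_time     # 210 ns: TLB + PT walk + data
avg = hit_rate * hit_cost + (1 - hit_rate) * miss_cost
print(round(avg))   # 120
```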
Typical TLB parameters
- Block size: 4 to 8 bytes (1 page table entry)
- Hit time: 2.5 to 5 nsec (1 clock cycle)
- Miss penalty: 50 to 150 nsec
- TLB size: 32 bytes to 8 KB
- Desired hit rate: 98% to 99.9%
HW/SW decisions: page size
Smaller page size:
- less internal fragmentation
- but more pages required per process
- more pages per process means larger page tables
- larger page tables means a larger portion of the page tables resides in virtual memory
Secondary memory is designed to efficiently transfer large blocks of data, so for I/O a larger page size is better.
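This trade-off is easy to see numerically; the 16 MB process size below is an assumed number for illustration.

```python
# Smaller pages -> less internal fragmentation but a larger page table.
PROC_SIZE = 16 * 2**20                   # hypothetical 16 MB process
PTE_SIZE = 4                             # 4-byte page table entries

def table_stats(page_size):
    pages = PROC_SIZE // page_size       # pages per process
    avg_frag = page_size // 2            # ~half the last page is wasted
    pt_bytes = pages * PTE_SIZE          # page table size in bytes
    return pages, avg_frag, pt_bytes

for p in (2**10, 2**12, 2**14):          # 1 KB, 4 KB, 16 KB pages
    print(p, table_stats(p))
```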
Page size
A smaller page size means a larger number of pages will be found in main memory.
As execution proceeds, the pages in memory will all contain portions of the process near its most recent references, so the page fault rate stays low.
Increasing the page size causes each page to contain locations further from any recent reference, causing the page fault rate to rise.
Page size (2)
Page fault rate
Page size revisited
Multiple page sizes are supported by many architectures, providing the flexibility needed to use a TLB effectively:
- large pages can be used for program instructions
- small pages can be used for thread stacks
Most operating systems support only one page size:
- makes the replacement policy simpler
- makes resident set management easier (how many pages per process, etc.)
Virtual memory with segmentation
Advantages of using segmentation:
- simplifies handling of growing data structures: the OS can shrink or enlarge a segment as required
- allows parts of the process to be altered and recompiled independently, without recompiling the entire process
- lends itself to sharing data among processes
- lends itself to protection
Segment tables
A segment table entry includes:
- present bit (is the segment in main memory?)
- starting address (base address)
- length of the segment
- modify bit (has the segment since been modified?)
- protection bits
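Segmented translation is just base + offset with a length check; the segment-table values below are made up for illustration.

```python
# Segmented address translation: base + offset, with bounds checking.
segments = {                      # seg# -> (present, base, length)
    0: (True, 0x1000, 0x400),
    1: (True, 0x8000, 0x100),
}

def seg_translate(seg, offset):
    present, base, length = segments[seg]
    if not present:
        raise RuntimeError("segment fault: segment not resident")
    if offset >= length:
        raise RuntimeError("protection error: offset beyond segment length")
    return base + offset          # physical address = base + offset

print(hex(seg_translate(0, 0x3FF)))   # 0x13ff
```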
Segment table entries
Address translation in segmentation
Combined paging and segmentation
Each segment is broken into fixed-size pages.
Paging is transparent to the programmer and eliminates external fragmentation.
Segmentation is visible to the programmer and allows for growing data structures, modularity, and support for sharing and protection.
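In the combined scheme, the segment table entry points to a per-segment page table, and the segment offset is then paged as usual. A minimal sketch with made-up table contents:

```python
# Combined segmentation + paging: segment -> page table -> frame.
PAGE_BITS = 12                              # 4 KB pages
OFFSET_MASK = (1 << PAGE_BITS) - 1

seg_table = {0: {0: 0x10, 1: 0x11}}         # seg# -> (VPN -> frame)

def translate(seg, seg_offset):
    page_table = seg_table[seg]             # segment entry names a page table
    vpn = seg_offset >> PAGE_BITS           # page within the segment
    off = seg_offset & OFFSET_MASK          # offset within the page
    return (page_table[vpn] << PAGE_BITS) | off

print(hex(translate(0, 0x1234)))   # page 1 -> frame 0x11 -> 0x11234
```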