Memory Management
• Subdividing memory to accommodate
multiple processes
• Memory needs to be allocated to ensure
a reasonable supply of ready processes
to consume available processor time
Memory Management
Requirements
• Relocation
– Programmer does not know where the
program will be placed in memory when it
is executed
– While the program is executing, it may be
swapped to disk and returned to main
memory at a different location (relocated)
– Memory references in the code must be translated to actual physical memory addresses
Memory Management
Requirements
• Protection
– Processes should not be able to reference memory
locations in another process without permission
– Impossible to check absolute addresses at compile
time
– Must be checked at run time
– Memory protection requirement must be satisfied
by the processor (hardware) rather than the
operating system (software)
• Operating system cannot anticipate all of the memory
references a program will make
Memory Management
Requirements
• Sharing
– Allow several processes to access the same
portion of memory
– Better to allow each process to access the same copy of the program rather than have its own separate copy
Memory Management
Requirements
• Logical Organization
– Programs are written in modules
– Modules can be written and compiled
independently
– Different degrees of protection given to
modules (read-only, execute-only)
– Share modules among processes
Memory Management
Requirements
• Physical Organization
– Memory available for a program plus its
data may be insufficient
• Overlaying allows various modules to be
assigned the same region of memory
– Programmer does not know how much
space will be available
Fixed Partitioning
• Equal-size partitions
– Any process whose size is less than or equal
to the partition size can be loaded into an
available partition
– If all partitions are full, the operating
system can swap a process out of a partition
– A program may not fit in a partition. The
programmer must design the program with
overlays
Fixed Partitioning
• Main memory use is inefficient. Any program, no matter how small, occupies an entire partition; the unused space within a partition is called internal fragmentation.
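As a concrete illustration (the figures are invented): with equal partitions of 1 MB, loading a 300 KB process still ties up a whole partition, so 1024 KB - 300 KB = 724 KB of that partition is wasted as internal fragmentation.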
Placement Algorithm with
Partitions
• Equal-size partitions
– Because all partitions are of equal size, it
does not matter which partition is used
• Unequal-size partitions
– Can assign each process to the smallest
partition within which it will fit
– Queue for each partition
– Processes are assigned in such a way as to
minimize wasted memory within a partition
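A minimal sketch of the smallest-partition-that-fits rule, assuming an illustrative table of unequal partition sizes (the sizes and processes below are invented, not from the slides):

    #include <stdio.h>

    /* Illustrative unequal partition sizes in KB, sorted ascending. */
    static const int partition_kb[] = { 128, 256, 512, 1024, 2048 };
    #define NPART (sizeof(partition_kb) / sizeof(partition_kb[0]))

    /* Return the index of the smallest partition that can hold the process,
       or -1 if the process is larger than every partition. */
    int smallest_fit(int process_kb)
    {
        for (size_t i = 0; i < NPART; i++)
            if (process_kb <= partition_kb[i])
                return (int)i;
        return -1;
    }

    int main(void)
    {
        int sizes[] = { 100, 500, 1500, 4000 };
        for (int i = 0; i < 4; i++) {
            int p = smallest_fit(sizes[i]);
            if (p >= 0)
                printf("%4d KB process -> %4d KB partition (queue %d)\n",
                       sizes[i], partition_kb[p], p);
            else
                printf("%4d KB process does not fit; overlays needed\n",
                       sizes[i]);
        }
        return 0;
    }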
Dynamic Partitioning
• Partitions are of variable length and
number
• Process is allocated exactly as much
memory as required
• Eventually get holes in the memory.
This is called external fragmentation
• Must use compaction to shift processes
so they are contiguous and all free
memory is in one block
Dynamic Partitioning
Placement Algorithm
• Operating system must decide which
free block to allocate to a process
• Best-fit algorithm
– Chooses the block that is closest in size to the request
– Worst performer overall
– Because the smallest suitable block is chosen, the leftover fragment is as small as possible and usually too small to be useful
– Memory compaction must therefore be done more often
Dynamic Partitioning
Placement Algorithm
• First-fit algorithm
– Scans memory from the beginning and chooses the first available block that is large enough
– Fastest
– May leave many processes loaded at the front end of memory that must be searched over when trying to find a free block
Dynamic Partitioning
Placement Algorithm
• Next-fit
– Scans memory starting from the location of the last placement
– More often allocates a block at the end of memory, where the largest free block is usually found
– The largest block of memory is quickly broken up into smaller blocks
– Compaction is required to obtain a large block at the end of memory
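A compact sketch of the three placement rules just described, applied to an array of free-block sizes (the block sizes are invented for illustration); each function returns the index of the chosen block, or -1 if nothing fits:

    #include <stdio.h>

    #define NBLOCKS 6
    /* Illustrative free-block sizes in KB. */
    static int free_kb[NBLOCKS] = { 200, 600, 100, 900, 300, 500 };

    /* First-fit: first block from the start that is large enough. */
    int first_fit(int request)
    {
        for (int i = 0; i < NBLOCKS; i++)
            if (free_kb[i] >= request)
                return i;
        return -1;
    }

    /* Best-fit: block whose size is closest to (but not below) the request. */
    int best_fit(int request)
    {
        int best = -1;
        for (int i = 0; i < NBLOCKS; i++)
            if (free_kb[i] >= request &&
                (best < 0 || free_kb[i] < free_kb[best]))
                best = i;
        return best;
    }

    /* Next-fit: like first-fit, but the scan resumes where the last one ended. */
    int next_fit(int request)
    {
        static int last = 0;                  /* remembered scan position */
        for (int n = 0; n < NBLOCKS; n++) {
            int i = (last + n) % NBLOCKS;
            if (free_kb[i] >= request) {
                last = i;
                return i;
            }
        }
        return -1;
    }

    int main(void)
    {
        int req = 350;
        printf("first-fit -> block %d\n", first_fit(req));
        printf("best-fit  -> block %d\n", best_fit(req));
        printf("next-fit  -> block %d\n", next_fit(req));
        return 0;
    }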
Relocation
• When program loaded into memory the actual
(absolute) memory locations are determined
• A process may occupy different partitions
which means different absolute memory
locations during execution (from swapping)
• Compaction will also cause a program to
occupy a different partition which means
different absolute memory locations
Addresses
• Logical
– Reference to a memory location independent of the
current assignment of data to memory
– Translation must be made to the physical address
• Relative
– Address expressed as a location relative to some
known point
• Physical
– The absolute address or actual location in main
memory
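A sketch of the run-time translation this implies, using base and bounds registers; the register names and values are assumptions for illustration, not a specific machine's layout:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative relocation registers loaded when the process is dispatched. */
    static uint32_t base_reg   = 0x40000;   /* start of the partition */
    static uint32_t bounds_reg = 0x10000;   /* size of the partition  */

    /* Translate a relative address; *ok is cleared on a protection fault
       (relative address outside the process's partition). */
    uint32_t translate(uint32_t relative, int *ok)
    {
        if (relative >= bounds_reg) {        /* protection check at run time */
            *ok = 0;
            return 0;
        }
        *ok = 1;
        return base_reg + relative;          /* physical = base + relative */
    }

    int main(void)
    {
        int ok;
        uint32_t phys = translate(0x1234, &ok);
        printf(ok ? "physical = 0x%X\n" : "protection fault\n", (unsigned)phys);
        return 0;
    }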
Paging
• Partition memory into small equal fixed-size
chunks and divide each process into the same
size chunks
• The chunks of a process are called pages and
chunks of memory are called frames
• Operating system maintains a page table for
each process
– Contains the frame location for each page in
the process
– A memory address consists of a page number and an offset within the page
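A sketch of how a paged logical address splits into page number and offset and is translated through the page table; the 4 KB page size and table contents are assumptions for illustration:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_BITS 12                      /* assume 4 KB pages */
    #define PAGE_SIZE (1u << PAGE_BITS)

    /* Illustrative page table: page_table[page] = frame number. */
    static uint32_t page_table[] = { 5, 9, 7, 2 };

    int main(void)
    {
        uint32_t logical  = 0x2ABC;                        /* example address */
        uint32_t page     = logical >> PAGE_BITS;          /* upper bits      */
        uint32_t offset   = logical & (PAGE_SIZE - 1);     /* lower bits      */
        uint32_t frame    = page_table[page];
        uint32_t physical = (frame << PAGE_BITS) | offset;

        printf("page %u, offset 0x%X -> frame %u, physical 0x%X\n",
               (unsigned)page, (unsigned)offset,
               (unsigned)frame, (unsigned)physical);
        return 0;
    }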
Assignment of Process Pages to Free Frames
Page Tables for Example
Segmentation
• Segments of a program do not all have to be the same length
• There is a maximum segment length
• Addressing consists of two parts: a segment number and an offset
• Since segments are not of equal length, segmentation is similar to dynamic partitioning
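A sketch of the two-part segmented address being translated and bounds-checked against the segment length (the segment table contents are invented for illustration):

    #include <stdio.h>
    #include <stdint.h>

    struct seg_entry {
        uint32_t base;     /* start of segment in main memory */
        uint32_t length;   /* segment length in bytes         */
    };

    /* Illustrative per-process segment table. */
    static struct seg_entry seg_table[] = {
        { 0x10000, 0x3000 },
        { 0x50000, 0x0800 },
    };

    int main(void)
    {
        uint32_t seg = 1, offset = 0x0400;     /* two-part address */

        if (offset >= seg_table[seg].length) { /* length (protection) check */
            printf("addressing fault: offset beyond segment length\n");
            return 1;
        }
        printf("physical = 0x%X\n", (unsigned)(seg_table[seg].base + offset));
        return 0;
    }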
Virtual Memory
Hardware and Control
Structures
• Memory references are dynamically translated
into physical addresses at run time
– A process may be swapped in and out of main
memory such that it occupies different regions
• A process may be broken up into pieces that do not need to be located contiguously in main memory
• All pieces of a process do not need to be
loaded in main memory during execution
Execution of a Program
• Operating system brings into main
memory a few pieces of the program
• Resident set - portion of process that is
in main memory
• An interrupt is generated when an
address is needed that is not in main
memory
• Operating system places the process in the Blocked state
Execution of a Program
• Piece of process that contains the logical
address is brought into main memory
– Operating system issues a disk I/O Read
request
– Another process is dispatched to run while
the disk I/O takes place
– An interrupt is issued when the disk I/O completes, causing the operating system to place the affected process in the Ready state
Advantages of
Breaking up a Process
• More processes may be maintained in
main memory
– Only load in some of the pieces of each
process
– With so many processes in main memory, it
is very likely a process will be in the Ready
state at any particular time
• A process may be larger than all of main
memory
Types of Memory
• Real memory
– Main memory
• Virtual memory
– Memory on disk
– Allows for effective multiprogramming and
relieves the user of tight constraints of main
memory
Thrashing
• Swapping out a piece of a process just
before that piece is needed
• The processor spends most of its time
swapping pieces rather than executing
user instructions
Principle of Locality
• Program and data references within a
process tend to cluster
• Only a few pieces of a process will be
needed over a short period of time
• Possible to make intelligent guesses
about which pieces will be needed in the
future
• This suggests that virtual memory may
work efficiently
Support Needed for
Virtual Memory
• Hardware must support paging and
segmentation
• Operating system must be able to manage the movement of pages and/or segments between secondary memory and main memory
Paging
• Each process has its own page table
• Each page table entry contains the frame
number of the corresponding page in
main memory
• A bit is needed to indicate whether the
page is in main memory or not
Page Tables
• The entire page table may take up too
much main memory
• Page tables are also stored in virtual
memory
• When a process is running, part of its
page table is in main memory
Translation Lookaside Buffer
• Each virtual memory reference can
cause two physical memory accesses
– One to fetch the appropriate page table entry
– One to fetch the data
• To overcome this problem a high-speed
cache is set up for page table entries
– Called a Translation Lookaside Buffer
(TLB)
Translation Lookaside Buffer
• Contains page table entries that have
been most recently used
Translation Lookaside Buffer
• Given a virtual address, processor
examines the TLB
• If page table entry is present (TLB hit),
the frame number is retrieved and the
real address is formed
• If page table entry is not found in the
TLB (TLB miss), the page number is
used to index the process page table
Translation Lookaside Buffer
• On a TLB miss, the processor first checks whether the page is already in main memory
– If it is not in main memory, a page fault is issued
• The TLB is updated to include the new page table entry
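A sketch of the lookup order just described: consult the TLB, then the page table's present bit, and raise a page fault if the page is not resident. The structure layouts, sizes, and the always-replace-slot-0 TLB update are simplifying assumptions:

    #include <stdio.h>
    #include <stdint.h>

    #define TLB_SIZE 4

    struct tlb_entry { int valid; uint32_t page, frame; };
    struct pte       { int present; uint32_t frame; };

    static struct tlb_entry tlb[TLB_SIZE];        /* small, fully searched TLB */
    static struct pte page_table[8] = {           /* toy per-process table     */
        { 1, 3 }, { 0, 0 }, { 1, 6 }, { 1, 1 },
    };

    /* Return 1 and the frame for a page if resident; 0 means page fault. */
    int lookup_frame(uint32_t page, uint32_t *frame)
    {
        for (int i = 0; i < TLB_SIZE; i++)        /* TLB hit? */
            if (tlb[i].valid && tlb[i].page == page) {
                *frame = tlb[i].frame;
                return 1;
            }
        if (!page_table[page].present)            /* TLB miss: check present bit */
            return 0;                             /* page fault: OS must load it */
        *frame = page_table[page].frame;
        /* Update the TLB (slot 0 used here purely for simplicity). */
        tlb[0] = (struct tlb_entry){ 1, page, *frame };
        return 1;
    }

    int main(void)
    {
        uint32_t frame;
        if (lookup_frame(2, &frame))
            printf("page 2 -> frame %u\n", (unsigned)frame);
        if (!lookup_frame(1, &frame))
            printf("page 1 -> page fault\n");
        return 0;
    }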
Page Size
• The smaller the page size, the less internal fragmentation
• The smaller the page size, the more pages are required per process
• More pages per process means larger page tables
• Larger page tables mean that a larger portion of the page tables must be kept in virtual memory
• Secondary memory is designed to efficiently transfer large blocks of data, so a large page size is better
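To put rough numbers on the trade-off (all figures here are illustrative assumptions): with a 32-bit, 4 GB logical address space, 4 KB pages, and 4-byte page table entries, a fully mapped process needs 4 GB / 4 KB = 1,048,576 entries, or about 4 MB of page table; with 64 KB pages only 65,536 entries (256 KB of page table) are needed, but each page can then waste up to 64 KB to internal fragmentation instead of up to 4 KB.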
Page Size
• With a small page size, a large number of pages will be found in main memory
• As execution proceeds, the pages in memory all come to contain portions of the process near recent references, so the page fault rate stays low
• Increasing the page size causes each page to contain locations further from any recent reference, so the page fault rate rises
Segmentation
• Segments may be of unequal, dynamic size
• Simplifies handling of growing data
structures
• Allows programs to be altered and
recompiled independently
• Lends itself to sharing data among
processes
• Lends itself to protection
Segment Tables
• Each entry contains the starting address of the corresponding segment in main memory
• Each entry contains the length of the
segment
• A bit is needed to determine if segment
is already in main memory
• Another bit is needed to determine if the
segment has been modified since it was
loaded in main memory
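One way these bullets might map onto a C structure; the field names and widths are illustrative, not a real hardware format:

    #include <stdint.h>

    /* One entry per segment of the process (a sketch; real formats vary). */
    struct segment_entry {
        unsigned int present  : 1;  /* is the segment in main memory?         */
        unsigned int modified : 1;  /* written since it was loaded?           */
        uint32_t     base;          /* starting address in main memory        */
        uint32_t     length;        /* segment length, used for bounds checks */
    };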
Combined Paging and
Segmentation
• Paging is transparent to the programmer
• Segmentation is visible to the
programmer
• Each segment is broken into fixed-size
pages
Fetch Policy
• Fetch Policy
– Determines when a page should be brought
into memory
– Demand paging only brings pages into main
memory when a reference is made to a
location on the page
• Many page faults when process first started
– Prepaging brings in more pages than needed
• More efficient to bring in pages that reside
contiguously on the disk
Placement Policy
• Determines where in real memory a
process piece is to reside
• Important in a segmentation system
• Paging or combined paging with
segmentation hardware performs address
translation
Replacement Policy
• Which page is replaced?
– The page removed should be the page least likely to be referenced in the near future
– Most policies predict future behavior on the basis of past behavior
Replacement Policy
• Frame Locking
– If a frame is locked, it may not be replaced
– Examples: the kernel of the operating system, key control structures, I/O buffers
– Associate a lock bit with each frame
Basic Replacement
Algorithms
• Optimal policy
– Selects for replacement that page for which
the time to the next reference is the longest
– Impossible to have perfect knowledge of
future events
Basic Replacement
Algorithms
• Least Recently Used (LRU)
– Replaces the page that has not been
referenced for the longest time
– By the principle of locality, this should be
the page least likely to be referenced in the
near future
– Each page could be tagged with the time of
last reference. This would require a great
deal of overhead.
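A small sketch of that time-tagging idea; a real memory-management unit cannot afford this bookkeeping on every reference, which is the overhead the last bullet refers to (the frame count and reference pattern are invented):

    #include <stdio.h>
    #include <stdint.h>

    #define NFRAMES 3

    static uint32_t frame_page[NFRAMES] = { 10, 11, 12 }; /* resident pages */
    static uint64_t last_used[NFRAMES];                   /* reference time */
    static uint64_t now;                                   /* logical clock  */

    /* Record a reference to the page held in a frame. */
    void touch(int frame) { last_used[frame] = ++now; }

    /* Choose the frame whose page was referenced longest ago. */
    int lru_victim(void)
    {
        int victim = 0;
        for (int i = 1; i < NFRAMES; i++)
            if (last_used[i] < last_used[victim])
                victim = i;
        return victim;
    }

    int main(void)
    {
        touch(0); touch(2); touch(0); touch(1);   /* frame 2 is now the LRU */
        int v = lru_victim();
        printf("replace page %u in frame %d\n", (unsigned)frame_page[v], v);
        return 0;
    }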
Basic Replacement
Algorithms
• First-in, first-out (FIFO)
– Treats page frames allocated to a process as
a circular buffer
– Pages are removed in round-robin style
– Simplest replacement policy to implement
– Page that has been in memory the longest is
replaced
– These pages may be needed again very soon
Basic Replacement
Algorithms
• Clock Policy
– Additional bit called a use bit
– When a page is first loaded in memory, the use bit
is set to 1
– When the page is referenced, the use bit is set to 1
– When it is time to replace a page, the first frame
encountered with the use bit set to 0 is replaced.
– During the search for replacement, each use bit set
to 1 is changed to 0
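A sketch of the clock policy over a circular set of frames, using the use bit exactly as described above (the frame count and initial use bits are invented):

    #include <stdio.h>

    #define NFRAMES 4

    static int use_bit[NFRAMES] = { 1, 1, 0, 1 };  /* set on load and on reference */
    static int hand = 0;                            /* circular "clock hand"        */

    /* Select the frame whose page will be replaced. */
    int clock_victim(void)
    {
        for (;;) {
            if (use_bit[hand] == 0) {               /* first frame with use bit 0 */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;        /* leave hand past the victim */
                return victim;
            }
            use_bit[hand] = 0;                      /* clear bit: a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void)
    {
        printf("replace the page in frame %d\n", clock_victim());
        return 0;
    }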
Comparison of Placement
Algorithms
Resident Set Size
• Fixed-allocation
– Gives a process a fixed number of frames
within which to execute
– When a page fault occurs, one of the pages
of that process must be replaced
• Variable-allocation
– Number of frames allocated to a process
varies over the lifetime of the process
Fixed Allocation, Local Scope
• Decide ahead of time the amount of
allocation to give a process
• If allocation is too small, there will be a
high page fault rate
• If allocation is too large there will be too
few programs in main memory
Variable Allocation,
Global Scope
• Easiest to implement
• Adopted by many operating systems
• Operating system keeps list of free
frames
• Free frame is added to resident set of
process when a page fault occurs
• If no free frame, replaces one from
another process
Variable Allocation,
Local Scope
• When new process added, allocate
number of page frames based on
application type, program request, or
other criteria
• When page fault occurs, select page
from among the resident set of the
process that suffers the fault
• Reevaluate allocation from time to time
Load Control
• Determines the number of processes that
will be resident in main memory
• With too few processes, there will be many occasions when all resident processes are blocked, and much time will be spent swapping
• Too many processes will lead to thrashing
Process Suspension
• Lowest priority process
• Faulting process
– This process does not have its working set
in main memory so it will be blocked
anyway
• Last process activated
– This process is least likely to have its
working set resident
Process Suspension
• Process with smallest resident set
– This process requires the least future effort
to reload
• Largest process
– Obtains the most free frames
• Process with the largest remaining
execution window
THE END