UNIT III
MEMORY MANAGEMENT
BACKGROUND
Memory consists of a large array of words or bytes, each with its own address.
The CPU fetches instructions from memory according to the value of the program counter.
A program must be brought (from disk) into memory and placed within a process for it to run.
Main memory and registers are the only storage the CPU can access directly.
A pair of base and limit registers defines the logical address space.
ADDRESS BINDING
LOGICAL VS. PHYSICAL ADDRESS SPACE
DYNAMIC RELOCATION USING A RELOCATION REGISTER
DYNAMIC LOADING
DYNAMIC LINKING
OVERLAYS
Keep in memory only those instructions and data that are needed at any given time.
Overlays are needed when a process is larger than the amount of memory allocated to it.
Consider, for example, a program with the following components:
Pass 1              70 KB
Pass 2              80 KB
Symbol table        20 KB
Common routines     30 KB
To load everything at once we would require 200 KB of memory. If only 150 KB
is available, we cannot run the process.
We therefore define two overlays (their sizes are worked out below):
Overlay A: Pass 1
Overlay B: Pass 2
Finish pass 1, then jump to the overlay driver, which reads overlay B into memory,
overwriting overlay A, and then transfers control to pass 2.
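In the classic version of this example each overlay also carries the symbol table and the common routines; under that assumption the sizes work out as:
\[
\text{Overlay A} = 70 + 20 + 30 = 120\ \text{KB}, \qquad \text{Overlay B} = 80 + 20 + 30 = 130\ \text{KB},
\]
both of which fit within the available 150 KB, leaving room for the overlay driver itself.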
SWAPPING
A process can be swapped temporarily out of memory to a backing store, and then
brought back into memory for continued execution.
Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images.
Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and
executed.
Modified versions of swapping are found on many systems (e.g., UNIX, Linux,
and Windows).
SWAPPING
Major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped.
The context-switch time in such a swapping system is fairly high.
Let us assume that the user process is of size 1 MB and the backing store is a
standard hard disk with a transfer rate of 5 MB per second.
Transfer time = 1 MB / (5 MB per second) = 1/5 second = 200 milliseconds.
Adding an average latency of 8 milliseconds, a single swap takes 208 milliseconds.
For the swap out and the swap in together, the total swap time is 416 milliseconds.
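A small sketch of this arithmetic (the helper name and its parameters are illustrative, not from the text):

    #include <stdio.h>

    /* One-way swap time in milliseconds: transfer time plus average disk latency. */
    static double swap_time_ms(double size_mb, double rate_mb_per_s, double latency_ms)
    {
        return (size_mb / rate_mb_per_s) * 1000.0 + latency_ms;
    }

    int main(void)
    {
        double one_way = swap_time_ms(1.0, 5.0, 8.0);              /* 208 ms, as above */
        printf("swap out + swap in = %.0f ms\n", 2.0 * one_way);   /* 416 ms */
        return 0;
    }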
SWAPPING
For a much larger process, say one of 123 MB, the transfer time alone would be
123 MB / (5 MB per second) = 24.6 seconds.
CONTIGUOUS MEMORY ALLOCATION
Memory Protection:
CONTIGUOUS MEMORY ALLOCATION
Single-partition allocation:
A relocation register together with a limit register is used to protect user processes
from each other, and from changing operating-system code and data.
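A minimal sketch of the check the MMU performs under this scheme; the register values are chosen purely for illustration:

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned relocation = 14000;   /* smallest physical address of the partition */
    static unsigned limit      = 12000;   /* size of the process' logical address space */

    /* Map a logical address to a physical address, trapping on an out-of-range access. */
    static unsigned map(unsigned logical)
    {
        if (logical >= limit) {
            fprintf(stderr, "trap: addressing error\n");   /* the OS would trap here */
            exit(EXIT_FAILURE);
        }
        return relocation + logical;                       /* dynamic relocation */
    }

    int main(void)
    {
        printf("logical 346 -> physical %u\n", map(346));  /* prints 14346 */
        return 0;
    }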
CONTIGUOUS MEMORY ALLOCATION
Memory Allocation:
Multiple-partition allocation:
(Figure: a series of memory snapshots, each with the OS in low memory; user processes such as process 8 and process 10 are allocated and later released.)
PAGING
Divide physical memory into fixed-sized blocks called frames (the size is a power of 2,
typically between 512 bytes and 8,192 bytes).
Divide logical memory into blocks of the same size, called pages.
To run a program of size n pages, we need to find n free frames and load the program.
Internal fragmentation may still occur, because the last frame allocated to a process
may not be completely full.
PAGING - BASIC METHOD
The address generated by the CPU is divided into:
Page number (p) – used as an index into a page table, which contains the
base address of each page in physical memory.
Page offset (d) – combined with the base address to define the physical
memory address that is sent to the memory unit.
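A minimal sketch of this split in C, assuming for illustration the tiny 4-byte pages and the page-to-frame mapping of the paging example below; because the page size is a power of 2, p and d are obtained with a shift and a mask:

    #include <stdio.h>

    #define PAGE_SIZE   4u    /* assumed page size (power of 2) */
    #define OFFSET_BITS 2     /* log2(PAGE_SIZE) */

    /* Hypothetical page table: index = page number, value = frame number. */
    static const unsigned page_table[] = { 5, 6, 1, 2 };

    int main(void)
    {
        unsigned logical  = 13;                            /* example logical address */
        unsigned p        = logical >> OFFSET_BITS;        /* page number */
        unsigned d        = logical & (PAGE_SIZE - 1);     /* page offset */
        unsigned physical = (page_table[p] << OFFSET_BITS) | d;
        printf("p=%u d=%u -> physical=%u\n", p, d, physical);  /* p=3 d=1 -> 9 */
        return 0;
    }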
PAGING MODEL OF LOGICAL AND PHYSICAL MEMORY
PAGING EXAMPLE
In this example the page size is 4 bytes and the page table maps page 0 to frame 5,
page 1 to frame 6, page 2 to frame 1, and page 3 to frame 2. The physical base
address of each page is therefore frame number * page size:
Page 0: 5 * 4 = 20
Page 1: 6 * 4 = 24
Page 2: 1 * 4 = 4
Page 3: 2 * 4 = 8
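As one worked translation under this mapping (logical address 13 is chosen purely for illustration):
\[
p = \left\lfloor 13 / 4 \right\rfloor = 3, \qquad d = 13 \bmod 4 = 1, \qquad \text{physical address} = 2 \times 4 + 1 = 9.
\]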
FREE FRAMES
(Figure: the free-frame list before allocation and after allocation.)
HARDWARE SUPPORT
IMPLEMENTATION OF PAGE TABLE
In this scheme every data/instruction access requires two memory accesses: one
for the page table and one for the data/instruction itself.
The two-memory-access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers (TLBs).
Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID
uniquely identifies each process and is used to provide address-space protection for
that process.
TRANSLATION LOOK-ASIDE BUFFERS (TLBS)
Each entry in the TLB consists of two parts:
Key
Value
The requested page number is compared with all keys simultaneously; if the item is
found, the corresponding value field is returned.
If the page number is not in the TLB (known as a TLB miss), a memory reference to
the page table must be made.
When the frame number is obtained, we can use it to access memory.
If the TLB is already full, the operating system must select an entry for replacement.
Replacement policies range from least recently used (LRU) to random.
Some TLBs allow entries to be wired down, meaning they cannot be removed from the
TLB. Entries for kernel code are typically wired down.
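A minimal sketch of this lookup flow; the table sizes and mapping are made up, a sequential loop stands in for the parallel hardware comparison, and random replacement is used when the TLB is full:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 4
    #define NUM_PAGES   8

    struct tlb_entry { unsigned page; unsigned frame; bool valid; };

    static struct tlb_entry tlb[TLB_ENTRIES];
    static unsigned page_table[NUM_PAGES] = { 5, 6, 1, 2, 7, 3, 0, 4 };  /* made-up mapping */

    /* Translate a page number to a frame number, consulting the TLB first. */
    static unsigned lookup(unsigned page)
    {
        for (int i = 0; i < TLB_ENTRIES; i++)        /* TLB hit: key found */
            if (tlb[i].valid && tlb[i].page == page)
                return tlb[i].frame;

        unsigned frame = page_table[page];           /* TLB miss: extra memory reference */

        int victim = -1;
        for (int i = 0; i < TLB_ENTRIES; i++)        /* prefer a free slot */
            if (!tlb[i].valid) { victim = i; break; }
        if (victim < 0)
            victim = rand() % TLB_ENTRIES;           /* otherwise replace at random */

        tlb[victim] = (struct tlb_entry){ page, frame, true };
        return frame;
    }

    int main(void)
    {
        unsigned pages[] = { 1, 1, 3, 1, 6 };
        for (int i = 0; i < 5; i++)
            printf("page %u -> frame %u\n", pages[i], lookup(pages[i]));
        return 0;
    }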
ASSOCIATIVE MEMORY
Associative memory – parallel search
Page # Frame #
EFFECTIVE ACCESS TIME (EAT)
EFFECTIVE ACCESS TIME (EAT)
An 80 percent hit ratio means that we find the desired page number in the TLB 80
percent of the time (80/100 = 0.80).
If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory,
then a mapped-memory access takes 120 nanoseconds when the page number is in the TLB.
If we fail to find the page number in the TLB (20 nanoseconds), then we must first
access memory for the page-table entry and frame number (100 nanoseconds) and then
access the desired byte in memory (100 nanoseconds), for a total of 220 nanoseconds.
Effective memory access time = 0.80 * 120 + 0.20 * 220 = 140 nanoseconds.
For a 98 percent hit ratio, effective access time = 0.98 * 120 + 0.02 * 220
= 122 nanoseconds.
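Written generally, with hit ratio \(\alpha\), TLB search time \(t_{\mathrm{TLB}}\), and memory access time \(t_{\mathrm{mem}}\), the same reasoning gives:
\[
\mathrm{EAT} = \alpha\,(t_{\mathrm{TLB}} + t_{\mathrm{mem}}) + (1 - \alpha)\,(t_{\mathrm{TLB}} + 2\,t_{\mathrm{mem}}),
\]
which reproduces 0.80 * 120 + 0.20 * 220 = 140 ns and 0.98 * 120 + 0.02 * 220 = 122 ns for the two hit ratios above.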
MEMORY PROTECTION
A valid–invalid bit is attached to each entry in the page table:
“valid” indicates that the associated page is in the process’ logical address
space, and is thus a legal page.
“invalid” indicates that the page is not in the process’ logical address space.
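A minimal sketch of how this check could look in software; the page-table contents, sizes, and trap behaviour are made up for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    struct pte { unsigned frame; bool valid; };   /* simplified page-table entry */

    /* Hypothetical 8-entry page table: only the first six pages are legal. */
    static struct pte page_table[8] = {
        {2, true}, {3, true}, {4, true}, {7, true}, {8, true}, {9, true},
        {0, false}, {0, false}
    };

    static unsigned translate(unsigned page, unsigned offset, unsigned page_size)
    {
        if (page >= 8 || !page_table[page].valid) {
            fprintf(stderr, "trap: invalid page %u\n", page);  /* the OS would trap here */
            exit(EXIT_FAILURE);
        }
        return page_table[page].frame * page_size + offset;
    }

    int main(void)
    {
        printf("physical = %u\n", translate(1, 10, 1024));  /* legal page: 3082 */
        printf("physical = %u\n", translate(7, 0, 1024));   /* invalid page: traps */
        return 0;
    }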
VALID (V) OR INVALID (I) BIT IN A PAGE TABLE
PAGING – TO BE CONTINUED…