Chapter 5 Memory Management

The document discusses memory management techniques including paging, segmentation, dynamic loading, dynamic linking, overlays, logical versus physical address spaces, and swapping. It describes how processes are loaded into memory from disk, and techniques used to allow processes to be larger than physical memory like overlays which swap parts of a process in and out of memory. It also discusses how virtual addresses are translated to physical addresses using memory management hardware.

Uploaded by

Nasis Dereje

Chapter Five

Memory Management

Part One
Main Memory Management

Operating Systems (SEng 2043) WKU


Objective

At the end of this session students will be able to:


Provide a detailed description of various ways of organizing memory
hardware.
Discuss various memory-management techniques, including paging and
segmentation.
Provide a detailed description of the Intel Pentium, which supports both pure
segmentation and segmentation with paging.
Memory Management
Memory is central to the operation of a modern computer system. It is a large array of words or bytes, each with its
own address.
A program resides on a disk as a binary executable file.
 The program must be brought into memory and placed within a process for it to be executed. Depending on the memory management in use, the process may be moved between disk and memory during its execution.
The collection of processes on the disk that are waiting to be brought into memory for execution forms the input queue, i.e. we select one of the processes in the input queue and load that process into memory.
We can provide protection by using two registers, usually a base and a limit, as shown in the fig below.
the base register holds the smallest legal physical memory address;
the limit register specifies the size of the range.
 For example, if the base register holds 300040 and the limit register is 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).
Contd.
The binding of instructions and data to memory addresses can be done at any
step along the way:
Compile time: If it is known at compile time where the process will reside in
memory, then absolute code can be generated.
Load time: If it is not known at compile time where the process will reside in
memory, then the compiler must generate re-locatable code.
Execution time: If the process can be moved during its execution from one
memory segment to another, then binding must be delayed until run time.
 Dynamic Loading (a memory-management method): better memory-space utilization can be achieved with dynamic loading. With this method, a routine is not loaded until it is called.
 All routines are kept on disk in a re-locatable load format. The main program is loaded into memory and is executed.
The advantage of dynamic loading is that an unused routine is never loaded.
Contd.

Dynamic Linking in memory management


Most operating systems support only static linking, in which system language
libraries are treated like any other object module and are combined by the loader into the
binary program image.
The concept of dynamic linking is similar to that of dynamic loading.
Rather than loading being postponed until execution time, linking is postponed.
 This feature is usually used with system libraries, such as language subroutine
libraries. With dynamic linking, a stub is included in the image for each library-
routine reference.
This stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present.
Dynamic Linking cond…
 The entire program and data of a process must be in physical memory for the process to
execute.

 The size of a process is limited to the size of physical memory.


 So that a process can be larger than the amount of memory allocated to it, a
technique called overlays is sometimes used.
 The idea of overlays is to keep in memory only those instructions and data that are
needed at any given time.
 When other instructions are needed, they are loaded into space that was occupied
previously by instructions that are no longer needed, i.e. swapping technique is
used.
Let us see an example of overlays on the next slide:
Overlay cond …
Example: consider a two-pass assembler.
 During pass 1, it constructs a symbol table; then, during pass 2, it generates machine-language code.
 We may be able to partition such an assembler into pass 1 code, pass 2 code, the symbol table, and common support routines used by both pass 1 and pass 2.
Let us consider:
Pass 1: 70KB
Pass 2: 80KB
Symbol table: 20KB
Common routines: 30KB
To load everything at once, we would require 200KB of memory; if only 150KB is available, we cannot run our process.
But pass 1 and pass 2 do not need to be in memory at the same time, so we need to have different overlays:
Overlay A is the symbol table, common routines, and pass 1, i.e. 120KB, and
Overlay B is the symbol table, common routines, and pass 2, i.e. 130KB.
We add an overlay driver (10KB) and start with overlay A in memory.
 When we finish pass 1, we jump to the overlay driver, which reads overlay B into memory, overwriting overlay A, and then transfers control to pass 2.
 Overlay A needs only 120KB, whereas overlay B needs 130KB.
 As in dynamic loading, overlays do not require any special support from the operating system.
Logical versus Physical Address Space
 An address generated by the CPU is commonly referred to as a logical address,
whereas an address seen by the memory unit is commonly referred to as a physical
address.
 The compile-time and load-time address-binding schemes result in an environment
where the logical and physical addresses are the same.
 The execution-time address-binding scheme results in an environment where the
logical and physical addresses differ.
 In this case, we usually refer to the logical address as a virtual address. The set of
all logical addresses generated by a program is referred to as a logical address
space;

Cond…
 The run-time mapping from virtual to physical addresses is done by the memory
management unit (MMU), which is a hardware device.
 The base register is called a relocation register.
 The value in the relocation register is added to every address generated by a user
process at the time it is sent to memory.
For example, if the base is at 13000, then an attempt by the user to address location 0 is dynamically relocated to location 13000; an access to location 347 is mapped to location 13347.
The MS-DOS operating system running on the Intel 80x86 family of processors uses four relocation registers when loading and running processes.
 The user program never sees the real physical addresses; it deals only with logical addresses. The memory-mapping hardware converts logical addresses into physical addresses: logical addresses (in the range 0 to max) map to physical addresses (in the range R + 0 to R + max for a base value R). The user generates only logical addresses.
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management, because the user cannot directly see physical addresses.
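The relocation mapping can be sketched as follows; this is a minimal single-register model (the limit value is an illustrative assumption, not from the slides):

```python
# Sketch of dynamic relocation by a simplified MMU: the relocation
# (base) register value is added to every logical address on its way
# to memory; the limit register bounds the logical address space.
RELOCATION = 13000   # base value from the slide example
LIMIT = 120900       # assumed limit, for illustration only

def translate(logical):
    """Map a logical address to a physical address, trapping if out of range."""
    if not 0 <= logical < LIMIT:
        raise MemoryError("addressing error: logical address out of range")
    return logical + RELOCATION

print(translate(0))    # -> 13000
print(translate(347))  # -> 13347
```

The user process only ever works with the logical values 0 and 347; the physical values 13000 and 13347 exist solely on the memory bus.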
Swapping
 A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
 Assume a multiprogramming environment with a round-robin CPU-scheduling algorithm.
 When a quantum expires, the memory manager will start to swap out the process that just finished, and to swap in another process into the memory space that has been freed (fig below). When each process finishes its quantum, it will be swapped with another process.
Swapping requires a backing store. The backing store is commonly a fast disk.
The context-switch time in such a swapping system is fairly high.

Fig Swapping of two processes using a disk as a backing store


Contiguous Allocation
 The main memory must accommodate both the operating system and the various user
processes.
The memory is usually divided into two partitions, one for the resident operating system and one for the user processes. Commonly, the operating system is placed in low memory.
 Single-Partition Allocation: The operating system resides in low memory, and the user processes execute in high memory. Operating-system code and data are protected from changes by the user processes.
 We also need to protect the user processes from one another; we can provide this protection by using relocation and limit registers.
 The relocation register contains the value of the smallest physical address; the limit register contains the
range of logical addresses (for example, relocation = 100,040 and limit = 74,600).
Cond.
Multiple-Partition Allocation
 One of the simplest schemes for memory allocation is to divide memory into a number of fixed-
sized partitions.
 Each partition may contain exactly one process.
 Thus, the degree of multiprogramming is bound by the number of partitions.
 When a partition is free, a process is selected from the input queue and is loaded into the free
partition.
 When the process terminates, the partition becomes available for another process.
 Initially, all memory is available for user processes, and is considered as one large
block, of available memory, a hole.
 When a process arrives and needs memory, we search for a hole large enough for this
process.
Cond…
Satisfying a memory request from the set of free holes is a particular instance of the general dynamic storage-allocation problem, which is how to satisfy a request of size n from a list of free holes.
There are many solutions to this problem.
 The set of holes is searched to determine which hole is best to allocate, first-fit,
best-fit, and worst-fit are the most common strategies used to select a free hole
from the set of available holes.
Cond…
 First-fit: Allocate the first hole that is big enough. Searching can start
either at the beginning of the set of holes or where the previous first-fit
search ended. We can stop searching as soon as we find a free hole that is
large enough.
 Best-fit: Allocate the smallest hole that is big enough. We must search
the entire list, unless the list is kept ordered by size. This strategy
produces the smallest leftover hole.
 Worst-fit: Allocate the largest hole. Again, we must search the entire list
unless it is sorted by size. This strategy produces the largest leftover hole.
External and Internal Fragmentation
As processes are loaded and removed from memory, the free memory
space is broken into little pieces.
External fragmentation exists when enough total memory space exists to
satisfy a request, but it is not contiguous; storage is fragmented into a
large number of small holes.
Internal fragmentation - memory that is internal to a partition, but is not
being used.
Paging: External fragmentation is avoided by using paging
 Paging is a form of dynamic relocation. Every logical address is bound by
the paging hardware to some physical address.
 When we use a paging scheme, we have no external fragmentation: any
free frame can be allocated to a process that needs it.
 Since process size is independent of page size, we can have internal
fragmentation: the last frame allocated may not be completely full.
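The internal fragmentation in the last frame is simple arithmetic; a small sketch (the 4096-byte page size is an assumption for illustration, not from the slides):

```python
# Sketch: internal fragmentation in paging. With a fixed page size, the
# last page of a process is rarely full; the unused remainder of that
# frame is internal fragmentation. Process sizes here are made up.
PAGE_SIZE = 4096  # bytes; an assumed page size for illustration

def internal_fragmentation(process_size):
    """Unused bytes in the last allocated frame."""
    remainder = process_size % PAGE_SIZE
    return 0 if remainder == 0 else PAGE_SIZE - remainder

print(internal_fragmentation(10000))  # 3 frames allocated, 2288 bytes wasted
print(internal_fragmentation(8192))   # exactly 2 frames, 0 bytes wasted
```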
Segmentation
 A user program can be subdivided using segmentation, in which the program and its
associated data are divided into a number of segments.
 It is not required that all segments of all programs be of the same length, although there is a
maximum segment length.
 In segmentation, a program may occupy more than one partition, and these partitions need not be
contiguous.
 Segmentation eliminates internal fragmentation but, like dynamic partitioning, it suffers from
external fragmentation.
 However, because a process is broken up into a number of smaller pieces, the external
fragmentation should be less.
 Whereas paging is invisible to the programmer, segmentation is usually visible and is provided
as a convenience for organizing programs and data.
Summary
Memory-management algorithms for multiprogrammed operating systems range from the
simple single-user-system approach to paged segmentation. The most important determinant of
the method used in a particular system is the hardware provided: every memory address
generated by the CPU must be checked for legality and possibly mapped to a physical address,
and the checking cannot be implemented in software. Hence, we are constrained by the hardware
available.
The various memory-management algorithms (contiguous allocation, paging, segmentation, and
combinations of paging and segmentation) differ in many aspects. In comparing different
memory-management strategies, we consider hardware support, performance, fragmentation,
relocation, swapping, sharing, and protection.
Chapter Five
Memory Management

Part Two: Virtual Memory Management


The objectives of this topic are:
To describe the benefits of a virtual memory
system.
To explain the concepts of demand paging, page-
replacement algorithms, and allocation of page
frames.
To discuss the principle of the working-set model.
Virtual memory (Benefit)
 Virtual memory is a technique that allows the execution of processes that may not be completely
in memory.
 The main visible advantage of this scheme is that programs can be larger than
physical memory.
 Virtual memory is the separation of user logical memory from physical memory.
 This separation allows an extremely large virtual memory to be provided for
programmers when only a smaller physical memory is available, as detailed in the fig below.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a
segmentation system: demand segmentation can also be used to provide virtual memory.
Fig Diagram showing virtual memory that is larger than physical memory.
Cond.
 The following are situations in which the entire program is not required to be loaded fully:
1. User-written error-handling routines are used only when an error occurs in the data or
computation.
2. Certain options and features of a program may be used rarely.
3. Many tables are assigned a fixed amount of address space even though only a small
amount of the table is actually used.
 The ability to execute a program that is only partially in memory would confer many
benefits:
1. Fewer I/O operations would be needed to load or swap each user program into memory.
2. A program would no longer be constrained by the amount of physical memory that is
available.
3. Each user program could take less physical memory, so more programs could be run at the
same time.
Demand Paging
 Demand paging is similar to a paging system with swapping.
 When we want to execute a process, we swap the pages that are needed into
memory, rather than swapping the entire process into memory, as
highlighted in the fig on the right.
Advantages of Demand Paging:
 Large virtual memory.
 More efficient use of memory.
 Unconstrained multiprogramming i.e. there is no limit on
degree of multiprogramming.
Disadvantages of Demand Paging:
 Number of tables and amount of processor overhead for
handling page interrupts are greater than in the case of the
simple paged management techniques.
 Due to the lack of explicit constraints on a job's address space.
Page Replacement Algorithm
 A page replacement algorithm is a technique used by the operating system to decide which memory
pages to swap out (write to disk) when a page of memory needs to be allocated.
 Page replacement happens whenever a page fault occurs and a free frame cannot be used to satisfy
the allocation, either because no free frames are available or because the number of free frames is
lower than required.
 There are many different page replacement algorithms.
 We evaluate an algorithm by running it on a particular string of memory references, called a
reference string, and computing the number of page faults.
 Reference Strings are generated artificially or by tracing a given system and recording
the address of each memory reference.
 For a given page size we need to consider only the page number, not the entire address.
Example: consider the address sequence
0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103,
0104, 0101, 0610, 0102, 0103, 0104, 0104, 0101, 0609, 0102, 0105,
which, for a page size of 100, reduces to the reference string 1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1.
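The reduction to page numbers can be sketched directly; consecutive references to the same page are collapsed, since they cannot cause a fault:

```python
# Sketch: reduce an address trace to a reference string. With a page
# size of 100, the page number is address // 100; repeated references
# to the same page are collapsed.
trace = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103,
         104, 101, 610, 102, 103, 104, 104, 101, 609, 102, 105]

reference_string = []
for addr in trace:
    page = addr // 100
    if not reference_string or reference_string[-1] != page:
        reference_string.append(page)

print(reference_string)  # [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
```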
First In First Out (FIFO) algorithm
 FIFO: the Oldest page in main memory is the one which will be selected for replacement.
 Easy to implement, keep a list, replace pages from the tail and add new pages at the head.
Optimal Page algorithm
 An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms.
An optimal page-replacement algorithm exists, and has been called OPT or MIN.
 Replace the page that will not be used for the longest period of time . Use the time when a page is to
be used.
Least Recently Used (LRU) algorithm
 The page which has not been used for the longest time in main memory is the one which will be
selected for replacement.
 Easy to implement: keep a list, and replace pages by looking back into time.
Page Buffering algorithm (Page Replacement Algorithm)
 To get a process started quickly, keep a pool of free frames.
 On a page fault, select a page to be replaced.
 Write the new page into a frame from the free pool, mark the page table, and restart the process.
 Then write the dirty page out to disk and place the frame holding the replaced page in the
free pool.
Least frequently Used (LFU) algorithm (Page Replacement Algorithm)
 Page with the smallest count is the one which will be selected for replacement.
 This algorithm suffers from the situation in which a page is used heavily during the
initial phase of a process, but then is never used again.
Most Frequently Used (MFU) algorithm (Page Replacement Algorithm)
 This algorithm is based on the argument that the page with the smallest count was
probably just brought in and has yet to be used.
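To make the comparison between these algorithms concrete, here is a small simulation counting page faults for FIFO and LRU on a reference string like the slide's example; the choice of 2 frames is an assumption for illustration:

```python
# Sketch: count page faults for FIFO and LRU page replacement.
from collections import OrderedDict

def fifo_faults(refs, frames):
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)          # evict the oldest page
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0  # insertion order = recency order
    for page in refs:
        if page in memory:
            memory.move_to_end(page)   # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = True
    return faults

refs = [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
print(fifo_faults(refs, 2))  # 4 faults: FIFO evicts page 1 even though it is hot
print(lru_faults(refs, 2))   # 3 faults: LRU keeps the recently used page 1
```

On this string LRU beats FIFO because FIFO evicts the oldest page regardless of how recently it was used.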
Summary
It is desirable to be able to execute a process whose logical address space is larger than the
available physical address space. Virtual memory is a technique that enables us to map a large logical
address space onto a smaller physical memory. Virtual memory allows us to run extremely large
processes and to raise the degree of multiprogramming, increasing CPU utilization. Virtual
memory also enables us to use an efficient type of process creation known as copy-on-write, wherein
parent and child processes share actual pages of memory.
Virtual memory is commonly implemented by demand paging. Pure demand paging never
brings in a page until that page is referenced. The first reference causes a page fault to the
operating system. If total memory requirements exceed the capacity of physical memory, then it
may be necessary to replace pages from memory to free frames for new pages. Various page
replacement algorithms are used. In addition to a page replacement algorithm, a frame allocation
policy is needed. Allocation can be fixed, suggesting local page replacement, or dynamic,
suggesting global replacement.

End of this Chapter
