
Operating Systems (BCS303) III-ISE, MODULE-IV

CHAPTER 8
MEMORY MANAGEMENT STRATEGIES

➢ Background

✓ Memory management is concerned with managing the primary memory.


✓ Memory consists of an array of bytes or words, each with its own address.
✓ We can ignore how a program generates a memory address; we are interested only in
the sequence of memory addresses generated by the running program.

• Basic Hardware

✓ Main memory and the registers in the processor are the only storage that the
CPU can access directly. Hence a program and its data must be brought from disk
into main memory for the CPU to access them.
✓ Registers can be accessed in one CPU clock cycle, but a main memory access can
take many CPU clock cycles.
✓ A fast memory called cache is placed between main memory and the CPU registers.
✓ For correct operation, we must protect the operating system from access
by user processes and also protect user processes from one another. This
protection must be provided by the hardware. It can be implemented in several
ways; one possible implementation is,
o We first need to make sure that each process has a separate memory space.
o To do this, we need the ability to determine the range of legal addresses that
the process may access and to ensure that the process can access only these
legal addresses.
o We can provide this protection by using two registers, a base and a limit, as
illustrated in below figure.


o The base register holds the smallest legal physical memory address; the
limit register specifies the size of the range. For example, if the base
register holds 300040 and the limit register is 120900, then the program can
legally access all addresses from 300040 through 420939 (inclusive).
✓ Protection of memory space is accomplished by having the CPU hardware
compare every address generated in user mode with the registers.
✓ Any attempt by a program executing in user mode to access operating-system
memory or other users' memory results in a trap to the operating system, which
treats the attempt as a fatal error as shown in below figure.
✓ This prevents a user program from (accidentally or deliberately) modifying the
code or data structures of either the operating system or other users.
✓ The base and limit registers can be loaded only by special privileged instructions, which can be
executed only in kernel mode, and since only the operating system executes in
kernel mode, only the operating system can load the base and limit registers.
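✓ A minimal C sketch of this hardware check (register names and values are
illustrative, taken from the example above; in a real system this comparison is
done by hardware, not software):

#include <stdint.h>
#include <stdio.h>

static uint32_t base_reg  = 300040;  /* smallest legal physical address */
static uint32_t limit_reg = 120900;  /* size of the legal address range */

/* Returns 1 if a user-mode access to addr is legal, 0 if it would trap. */
int is_legal_access(uint32_t addr)
{
    return addr >= base_reg && addr < base_reg + limit_reg;
}

int main(void)
{
    printf("%d\n", is_legal_access(300040)); /* 1: first legal address */
    printf("%d\n", is_legal_access(420939)); /* 1: last legal address  */
    printf("%d\n", is_legal_access(420940)); /* 0: traps to the OS     */
    return 0;
}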

• Address Binding

✓ Programs are stored on the secondary storage disks as binary executable files.
✓ When a program is to be executed, it is brought into main memory
and placed within a process.
✓ The collection of processes on the disk waiting to be brought into main memory forms
the input queue.
✓ One of the processes in the input queue is selected and
loaded into main memory.
✓ During execution, the process fetches instructions and data from main memory. When the
process terminates, its memory space is freed.
✓ During execution the process will go through several steps, as shown in below
figure, and in each step the address is represented in different ways.
✓ In source program the address is symbolic. The compiler binds the symbolic
address to re-locatable address. The loader will in turn bind this re-locatable
address to absolute address.


✓ Binding of instructions and data to memory addresses can be done at any step
along the way:
o Compile time: If we know at compile time where the process resides in
memory, then absolute code can be generated. For example, if we know that
a user process will reside starting at location R, then the generated compiler
code will start at that location and extend up from there. If, at some later time,
the starting location changes, then it will be necessary to recompile this code.
o Load time: If it is not known at compile time where the process will reside in
memory, then the compiler must generate relocatable code. In this case, final
binding is delayed until load time.
o Execution time: If the process can be moved during its execution from one
memory segment to another, then binding must be delayed until run time. Special
hardware is required for this. Most general-purpose operating systems use
this method.

• Logical versus physical address

✓ The address generated by the CPU is called logical address or virtual address.
✓ The address seen by the memory unit i.e., the one loaded in to the memory
register is called the physical address.
✓ The compile-time and load-time address-binding methods generate identical
logical and physical addresses. The execution-time address-binding scheme results in
differing logical and physical addresses.
✓ The set of all logical addresses generated by a program is the logical address
space. The set of physical addresses corresponding to these logical addresses is the
physical address space.


✓ The mapping of virtual address to physical address during run time is done by the
hardware device called Memory Management Unit (MMU).
✓ The base register is now called re-location register.
✓ Value in the re-location register is added to every address generated by the user
process at the time it is sent to memory as shown in below figure.
✓ For example, if the base is at 14000, then an attempt by the user to address
location 0 is dynamically relocated to location 14000; an access to location 346 is
mapped to location 14346. The user program never sees the real physical
addresses.
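✓ The MMU relocation step can be sketched in C as follows (a simple illustration
using the base value 14000 from the example; the function name is ours):

#include <stdint.h>
#include <stdio.h>

static uint32_t relocation_reg = 14000; /* loaded only by the OS */

/* Every logical address generated by the user process is relocated. */
uint32_t mmu_translate(uint32_t logical)
{
    return logical + relocation_reg; /* physical address sent to memory */
}

int main(void)
{
    printf("%u\n", mmu_translate(0));   /* prints 14000 */
    printf("%u\n", mmu_translate(346)); /* prints 14346 */
    return 0;
}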

• Dynamic Loading
✓ For a process to be executed, it must be loaded into physical memory. The
size of a process is thus limited to the size of physical memory. Dynamic loading
is used to obtain better memory utilization.
✓ In dynamic loading the routine or procedure will not be loaded until it is called.
✓ Whenever a routine is called, the calling routine first checks whether the called
routine is already loaded. If it is not, the loader is called to load the
desired routine into memory and update the program's address table to
reflect the change; control is then passed to the newly loaded routine.
✓ The advantages are,
o Better memory utilization.
o An unused routine is never loaded.
o No special operating system support is needed.
o Useful when large amounts of code are needed to handle infrequently
occurring cases, such as error routines.

• Dynamic linking and Shared libraries


✓ Some operating systems support only static linking.


✓ In dynamic linking, only the main program is loaded into memory. If the main
program requires a routine from a library, the routine is loaded and the link is established
at the time of reference; linking is postponed until execution time.
✓ With dynamic linking, a “stub” is included in the image for each library-referenced
routine. A “stub” is a small piece of code which indicates how to locate the
appropriate memory-resident library routine, or how to load the library if the
routine is not already present.
✓ When a “stub” is executed, it checks whether the routine is present in memory.
If not, it loads the routine into memory.
✓ This feature can be used to update libraries i.e., library is replaced by a new
version and all the programs can make use of this library.
✓ More than one version of the library can be loaded in memory at a time and each
program uses its version of the library. Only the program that is compiled with the
new version is affected by the changes incorporated in it. Other programs linked
before new version was installed will continue using older library. This system is
called “shared libraries”.

➢ Swapping

✓ A process can be swapped temporarily out of memory to a backing store and
then brought back into memory to continue execution. This is
called swapping. Ex: In a multiprogramming environment with round-robin CPU
scheduling, whenever the time quantum expires, the process that has just finished
is swapped out and a new process is swapped into memory for execution, as shown in
below figure.

✓ A variant of this swapping policy is used with priority-based scheduling. When a low-priority
process is executing and a high-priority process arrives, the low-priority process is swapped
out and the high-priority process is allowed to execute. This is also called roll out,
roll in.


✓ Normally the process which is swapped out will be swapped back into the same
memory space that it occupied previously; this restriction depends on the method of address binding.
✓ The system maintains a ready queue consisting of all processes whose memory
images are on the backing store or in memory and are ready to run.
✓ The context-switch time in a swapping system is fairly high. For example, assume that the user
process is 10 MB in size and the backing store is a standard hard disk with a transfer
rate of 40 MB per second. The actual transfer of the 10-MB process to or from main
memory takes,
10 MB / 40 MB per second
= 1/4 second
= 250 milliseconds
✓ Assuming an average latency of 8 milliseconds, the swap time is 258 milliseconds.
Since we must both swap out and swap in, the total swap time is about 516
milliseconds.
✓ Swapping is constrained by other factors,
o To swap a process, it should be completely idle.
o If a process is waiting for an I/O operation, then the process cannot be swapped.

➢ Contiguous Memory Allocation


✓ The main memory must accommodate both the operating system and the various user
processes. One common method to allocate main memory in the most efficient way
is contiguous memory allocation.
✓ The memory is divided into two partitions, one for the resident operating system
and one for the user processes.

• Memory mapping and protection

✓ Relocation registers are used to protect user processes from each other, and to
prevent them from changing operating-system code and data.
✓ The relocation register contains the value of the smallest physical address and the
limit register contains the range of logical addresses.
✓ With relocation and limit registers, each logical address must be less than the limit
register.
✓ The MMU maps the logical address dynamically by adding the value in the
relocation register. This mapped address is sent to main memory as shown in
below figure.
✓ The relocation-register scheme provides an effective way to allow the operating
system's size to change dynamically.

[Figure: Hardware support for relocation and limit registers. The CPU's logical address is
compared with the limit register; if it is smaller, the relocation register value is added to it
to form the physical address sent to memory; otherwise a trap (addressing error) is raised.]

• Memory Allocation

✓ One of the simplest methods for memory allocation is to divide memory into
several fixed-sized partitions. Each partition may contain exactly one process. The degree
of multi-programming depends on the number of partitions.
✓ In the multiple-partition method, when a partition is free, a process is selected from
the input queue and loaded into the free partition. When the process
terminates, the partition becomes available for another process.
✓ The OS keeps a table indicating which parts of memory are free and which are occupied.
✓ Initially, all memory is available for user processes and is considered one large
block of available memory called a hole.
✓ When a process requests memory, the OS searches the set of holes for one large enough for
the process. If the hole is too large, it is split into two: one part is allocated to the
requesting process and the other is returned to the set of holes.
✓ The set of holes is searched to determine which hole is best to allocate.
✓ The dynamic storage-allocation problem concerns how to satisfy
a request of size n from a list of free holes. There are three strategies/solutions to
select a free hole,
o First fit: Allocates first hole that is big enough. This algorithm scans
memory from the beginning and selects the first available block that is
large enough to hold the process.
o Best fit: It chooses the hole that is closest in size to the request, i.e., it allocates
the smallest hole that is big enough to hold the process.
o Worst fit: It allocates the largest hole to the process request. It searches
for the largest hole in the entire list.
✓ First fit and best fit are the most popular algorithms for dynamic memory
allocation. All these algorithms suffer from fragmentation.
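✓ The three strategies can be sketched in C as follows (an illustrative sketch over a
fixed array of hole sizes; a real allocator would keep a linked list of holes and
split the chosen hole):

#include <stdio.h>

/* Returns the index of the chosen hole, or -1 if no hole is big enough.
   strategy: 0 = first fit, 1 = best fit, 2 = worst fit.               */
int choose_hole(const int holes[], int n, int request, int strategy)
{
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < request) continue;        /* too small            */
        if (strategy == 0) return i;             /* first fit: stop here */
        if (chosen < 0 ||
            (strategy == 1 && holes[i] < holes[chosen]) ||  /* best fit  */
            (strategy == 2 && holes[i] > holes[chosen]))    /* worst fit */
            chosen = i;
    }
    return chosen;
}

int main(void)
{
    int holes[] = {100, 500, 200, 300, 600};  /* sample hole sizes */
    printf("first fit: hole %d\n", choose_hole(holes, 5, 212, 0)); /* 1 (500) */
    printf("best fit:  hole %d\n", choose_hole(holes, 5, 212, 1)); /* 3 (300) */
    printf("worst fit: hole %d\n", choose_hole(holes, 5, 212, 2)); /* 4 (600) */
    return 0;
}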

• Fragmentation

✓ External Fragmentation exists when there is enough total memory space to
satisfy a request, but it is not contiguous: storage is fragmented into a large
number of small holes.
✓ External fragmentation may be either a minor or a major problem.
✓ Statistical analysis of first fit reveals that, even with some optimization, given N
allocated blocks, another 0.5N blocks will be lost to fragmentation. That is, one-
third of memory may be unusable. This property is known as the 50-percent rule.
✓ Memory fragmentation can be internal as well as external. Consider a multiple-
partition allocation scheme with a hole of 18,464 bytes. Suppose that the next


process requests 18,462 bytes. If we allocate exactly the requested block, we are
left with a hole of 2 bytes.
✓ The overhead to keep track of this hole will be substantially larger than the hole
itself.
✓ The general approach to avoid this problem is to break the physical memory into
fixed-sized blocks and allocate memory in units based on block size. With this
approach, the memory allocated to a process may be slightly larger than the
requested memory.
✓ The difference between these two numbers is internal fragmentation that is
internal to a partition.
✓ One solution to overcome external fragmentation is compaction. The goal is to
move all the free memory together to form one large block. Compaction is possible
only if relocation is dynamic and done at execution time.
✓ Another solution to the external fragmentation problem is to permit the logical
address space of a process to be non-contiguous.

➢ Paging

✓ Paging is a memory management scheme that permits the physical address space of
a process to be non-contiguous. Support for paging is handled by hardware.
✓ Paging avoids the considerable problem of fitting the varying sized memory chunks
on to the backing store.

• Basic Method

✓ Physical memory is broken in to fixed sized blocks called frames (f) and Logical
memory is broken in to blocks of same size called pages (p).
✓ When a process is to be executed its pages are loaded in to available frames from
the backing store. The backing store is also divided in to fixed-sized blocks of
same size as memory frames.
✓ The below figure shows the paging hardware.


✓ Logical address generated by the CPU is divided in to two parts: page number
(p) and page offset (d).
✓ The page number (p) is used as an index into the page table. The page table contains the
base address of each page in physical memory. This base address is combined
with the page offset to define the physical memory address that is sent to the memory unit.
The paging model of memory is shown in below figure.

✓ The page size is defined by the hardware. The size is a power of 2, varying
between 512 bytes and 16 MB per page.


✓ If the size of the logical address space is 2^m and the page size is 2^n addressing
units, then the high-order m-n bits of a logical address designate the page number
and the n low-order bits designate the page offset. Thus a logical address is as follows,
page number    page offset
     p              d
    m-n             n

✓ Ex: To show how to map logical memory into physical memory, consider a page
size of 4 bytes and a physical memory of 32 bytes (8 frames) as shown in below
figure.
a. Logical address 0 is page 0 and offset 0 and Page 0 is in frame 5. The logical
address 0 maps to physical address [(5*4) + 0]=20.
b. Logical address 3 is page 0 and offset 3 and Page 0 is in frame 5. The logical
address 3 maps to physical address [(5*4) + 3]= 23.
c. Logical address 4 is page 1 and offset 0 and page 1 is mapped to frame 6. So
logical address 4 maps to physical address [(6*4) + 0]=24.
d. Logical address 13 is page 3 and offset 1 and page 3 is mapped to frame 2.
So logical address 13 maps to physical address [(2*4) + 1]=9.
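✓ The four mappings above can be verified with a small C sketch (the page table
below matches the worked example; the frame for page 2 is taken as 1, an
assumption for completeness, since the text does not reference page 2):

#include <stdio.h>

#define PAGE_SIZE 4                  /* bytes per page and per frame */

int page_table[4] = {5, 6, 1, 2};    /* page number -> frame number  */

int translate(int logical)
{
    int p = logical / PAGE_SIZE;     /* page number (high-order part) */
    int d = logical % PAGE_SIZE;     /* page offset (low-order part)  */
    return page_table[p] * PAGE_SIZE + d;
}

int main(void)
{
    printf("%d\n", translate(0));    /* 20 */
    printf("%d\n", translate(3));    /* 23 */
    printf("%d\n", translate(4));    /* 24 */
    printf("%d\n", translate(13));   /* 9  */
    return 0;
}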

✓ With paging, we have no external fragmentation: any free frame can be
allocated to a process that needs it. However, we may have some internal
fragmentation.
✓ If the memory requirements of a process do not happen to coincide with page
boundaries, the last frame allocated may not be completely full.


✓ For example, if page size is 2,048 bytes, a process of 72,766 bytes will need 35
pages plus 1,086 bytes. It will be allocated 36 frames, resulting in internal
fragmentation of 2,048 - 1,086= 962 bytes.
✓ When a process arrives in the system to be executed, its size expressed in pages is
examined. Each page of the process needs one frame. Thus, if the process requires
n pages, at least n frames must be available in memory. If n frames are available,
they are allocated to this arriving process.
✓ The first page of the process is loaded in to one of the allocated frames, and the
frame number is put in the page table for this process. The next page is loaded
into another frame and its frame number is put into the page table and so on, as
shown in below figure. (a) before allocation, (b) after allocation.

• Hardware Support

✓ The hardware implementation of the page table can be done in several ways. The
simplest method is that the page table is implemented as a set of dedicated registers.
✓ The use of registers for the page table is satisfactory if the page table is reasonably
small (for example, 256 entries). But most computers allow the page table to be very
large (for example, 1 million entries), and for these machines the use of fast registers
to implement the page table is not feasible.
✓ So the page table is kept in the main memory and a page table base register (PTBR)
points to the page table and page table length register (PTLR) indicates size of page
table. Here two memory accesses are needed to access a byte and thus memory access
is slowed by a factor of 2.
✓ The standard solution is to use a special, fast lookup hardware cache called the translation
look-aside buffer (TLB). The TLB is associative, high-speed memory. Each entry in
the TLB consists of two parts, a key and a value. When the associative memory is presented
with an item, it is compared with all keys simultaneously; if found, the corresponding value
field is returned. Searching is fast, but the hardware is expensive.


✓ TLB is used with the page table as follows,


o TLB contains only few page table entries.
o When a logical address is generated by the CPU, its page number is presented to
TLB. If the page number is found its frame number is immediately available and
is used to access the actual memory. If the page number is not in the TLB (TLB
miss) the memory reference to the page table must be made.
o When the frame number is obtained we can use it to access the memory as shown
in below figure. The page number and frame number are added to the TLB, so
that they will be found quickly on the next reference.
o If the TLB is full of entries, the OS must select one entry for replacement.
o Some TLBs allow entries to be wired down meaning that they cannot be removed
from the TLB.

✓ Some TLBs store Address Space Identifiers (ASIDs) in each TLB entry. An ASID
uniquely identifies each process and is used to provide address-space protection for
that process.
✓ The percentage of time that a page number is found in the TLB is called hit ratio.
✓ For example, an 80-percent hit ratio means that we find the desired page number in
the TLB 80 percent of the time. If it takes 20 nanoseconds to search the TLB and 100
nanoseconds to access memory, then a mapped-memory access takes 120
nanoseconds when the page number is in the TLB. If we fail to find the page
number in the TLB (20 nanoseconds), then we must first access memory for the page
table and frame number (100 nanoseconds) and then access the desired byte in
memory (100 nanoseconds), for a total of 220 nanoseconds. Thus the effective access
time is,
Effective Access Time (EAT) = 0.80 x 120 + 0.20 x 220
= 140 nanoseconds.


In this example, we suffer a 40-percent slowdown in memory-access time (from 100


to 140 nanoseconds).
✓ For a 98-percent hit ratio we have
Effective Access Time (EAT) = 0.98 x 120 + 0.02 x 220
= 122 nanoseconds.
✓ This increased hit rate produces only a 22 percent slowdown in access time.
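✓ The effective-access-time computation can be sketched in C (an illustrative
function of our own naming; the figures match the two cases worked above):

#include <stdio.h>

double eat(double hit_ratio, double tlb_ns, double mem_ns)
{
    double hit_time  = tlb_ns + mem_ns;       /* TLB hit: one memory access     */
    double miss_time = tlb_ns + 2.0 * mem_ns; /* miss: page table + actual byte */
    return hit_ratio * hit_time + (1.0 - hit_ratio) * miss_time;
}

int main(void)
{
    printf("%.0f ns\n", eat(0.80, 20, 100)); /* prints 140 ns */
    printf("%.0f ns\n", eat(0.98, 20, 100)); /* prints 122 ns */
    return 0;
}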

• Protection

✓ Memory protection in a paged environment is accomplished by protection bits
associated with each frame. These bits are kept in the page table.
✓ One bit can define a page to be read-write or read-only.
✓ One more bit is attached to each entry in the page table, a valid-invalid bit.
✓ Valid indicates that the associated page is in the process’s logical address space and
is thus a legal (valid) page.
✓ If the bit is invalid, it indicates the page is not in the process’s logical address space
and is illegal. Illegal addresses are trapped by using the valid-invalid bit.
✓ The OS sets this bit for each page to allow or disallow accesses to that page.
✓ For example, in a system with a 14-bit address space (0 to 16383), we have a
program that should use only addresses 0 to 10468. Given a page size of 2 KB, we
have the situation shown in below figure. Addresses in pages 0, 1, 2, 3, 4, and 5 are
mapped normally through the page table. For any attempt to generate an address in page
6 or 7, we find that the valid-invalid bit is set to invalid, and the computer will trap
to the operating system (invalid page reference).


• Shared Pages

✓ An advantage of paging is the possibility of sharing common code. This


consideration is particularly important in a time-sharing environment.
✓ Consider a system that supports 40 users, each of whom executes a text editor. If the
text editor consists of 150 KB of code and 50 KB of data space, we need 8,000 KB to
support 40 users. If the code is reentrant code (or pure code) it can be shared, as
shown in below figure. Here a three-page editor (each page 50 KB in size) is
being shared among three processes. Each process has its own data page.
✓ Reentrant code is non-self-modifying code (Read only). It never changes during
execution. Thus, two or more processes can execute the same code at the same time.
✓ Each process has its own copy of registers and data storage to hold the data for the
process's execution.
✓ Thus, to support 40 users, we need only one copy of the editor (150 KB), plus 40
copies of the 50 KB of data space per user. The total space required is now 2150 KB
instead of 8,000 KB. (i.e., 150+40*50= 2150 KB).


➢ Structure of the Page Table

• Hierarchical paging

✓ Recent computer systems support a large logical address space, from 2^32 to 2^64,
and thus the page table itself becomes large. So it is very difficult to allocate contiguous
main memory for the page table. One simple solution to this problem is to divide the
page table into smaller pieces.
✓ One way is to use a two-level paging algorithm, in which the page table itself is
also paged, as shown in below figure.

✓ Ex: In a 32-bit machine with a page size of 4 KB, a logical address is divided
into a page number consisting of 20 bits and a page offset of 12 bits. Since the
page table is paged, the page number is further divided into a 10-bit page number
and a 10-bit offset. So the logical address is,
page number    page offset
  P1    P2         d
  10    10        12
✓ P1 is an index into the outer page table and P2 is the displacement within the
page of the outer page table. The address-translation method for this
architecture is shown in below figure. Because address translation works from
the outer page table inward, this scheme is also known as a forward-mapped
page table.
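✓ Extracting P1, P2, and d from a 32-bit logical address is a matter of shifts and
masks, as in this small C sketch (the sample address is arbitrary):

#include <stdint.h>
#include <stdio.h>

void split(uint32_t logical, uint32_t *p1, uint32_t *p2, uint32_t *d)
{
    *d  = logical & 0xFFF;         /* low 12 bits: page offset        */
    *p2 = (logical >> 12) & 0x3FF; /* next 10 bits: inner page number */
    *p1 = (logical >> 22) & 0x3FF; /* high 10 bits: outer page number */
}

int main(void)
{
    uint32_t p1, p2, d;
    split(0x00403005u, &p1, &p2, &d);
    printf("P1=%u P2=%u d=%u\n", p1, p2, d); /* prints P1=1 P2=3 d=5 */
    return 0;
}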


✓ For a system with a 64-bit logical address space, a two-level paging scheme is
no longer appropriate. Suppose the page size in such a system is 4 KB; the
page table then consists of up to 2^52 entries. If we use a two-level paging scheme,
the inner page tables can be one page long, containing 2^10 4-byte entries.
The addresses look like this,
outer page    inner page    offset
   P1             P2           d
   42             10          12

✓ The outer page table consists of 2^42 entries, or 2^44 bytes. The obvious way to avoid
such a large table is to divide the outer page table into smaller pieces.
✓ We can avoid such a large table by using a three-level paging scheme.

2nd outer page    outer page    inner page    offset
     P1               P2            P3           d
     32               10            10          12

✓ The outer page table is still 2^34 bytes in size. The next step would be a four-
level paging scheme.

• Hashed page table

✓ A hashed page table is a common approach for handling address spaces larger than 32 bits.
The virtual page number is used as the hash value. Each entry in the hash table contains a
linked list of elements that hash to the same location.
✓ Each element in the hash table contains the following three fields,
o Virtual page number
o Mapped page frame value
o Pointer to the next element in the linked list
✓ The algorithm works as follows,
o Virtual page number is taken from virtual address and is hashed in to the
hash table.


o Virtual page number is compared with field 1 in the first element in the
linked list.
o If there is a match, the corresponding page frame (field 2) is used to form
the desired physical address. If there is no match, subsequent entries in the
linked list are searched for a matching virtual page number. This scheme
is shown in below figure.
o Clustered page tables are similar to hashed page tables, except that each
entry in the hash table refers to several pages.
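✓ A lookup in a hashed page table can be sketched in C as follows (a minimal
illustration with chaining; the table size is a sample value, and the entry
layout follows the three fields listed above):

#include <stdint.h>
#include <stdlib.h>

#define TABLE_SIZE 1024

struct entry {
    uint64_t vpn;       /* field 1: virtual page number       */
    uint64_t frame;     /* field 2: mapped page frame value   */
    struct entry *next; /* field 3: next element in the chain */
};

static struct entry *hash_table[TABLE_SIZE];

void map_page(uint64_t vpn, uint64_t frame)
{
    struct entry *e = malloc(sizeof *e);
    e->vpn = vpn;
    e->frame = frame;
    e->next = hash_table[vpn % TABLE_SIZE];   /* push onto the chain */
    hash_table[vpn % TABLE_SIZE] = e;
}

/* Returns the frame for vpn, or -1 if unmapped (the page-fault path). */
long lookup(uint64_t vpn)
{
    for (struct entry *e = hash_table[vpn % TABLE_SIZE]; e; e = e->next)
        if (e->vpn == vpn)  /* compare with field 1 */
            return (long)e->frame;
    return -1;
}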

• Inverted Page Tables

✓ Page tables may consume a large amount of physical memory just to keep track
of how the rest of physical memory is being used.
✓ To solve this problem, we can use an inverted page table, which has one entry for
each real page (or frame) of memory. Each entry consists of the virtual
address of the page stored in that real memory location, with information about
the process that owns the page.
✓ Thus, only one page table is in the system, and it has only one entry for each
page of physical memory.

✓ The below figure shows the operation of an inverted page table.


✓ Each inverted page table entry is a pair <process-id, page-number>, where the
process-id assumes the role of the address-space identifier. When a
memory reference occurs, part of the virtual address, consisting of <process-
id, page-number>, is presented to the memory subsystem.
✓ The inverted page table is searched for a match. If a match is found at entry i,
then the physical address <i, offset> is generated. If no match is found then an
illegal address access has been attempted.
✓ This scheme decreases the amount of memory needed to store each page
table, but increases the amount of time needed to search the table when a page
reference occurs.
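✓ The search can be sketched in C as follows (an illustrative linear scan with
made-up table contents; real implementations hash the <process-id, page-number>
pair to limit the search):

#include <stdio.h>

#define NFRAMES 8

struct ipt_entry { int pid; int page; };

/* One entry per physical frame: frame i holds the given process page. */
struct ipt_entry ipt[NFRAMES] = {
    {1, 0}, {1, 1}, {2, 0}, {2, 3}, {1, 2}, {3, 0}, {3, 1}, {2, 1},
};

/* Returns the matching frame i, or -1 for an illegal access. */
int ipt_search(int pid, int page)
{
    for (int i = 0; i < NFRAMES; i++)
        if (ipt[i].pid == pid && ipt[i].page == page)
            return i; /* physical address is <i, offset> */
    return -1;
}

int main(void)
{
    printf("%d\n", ipt_search(2, 3)); /* 3  */
    printf("%d\n", ipt_search(4, 0)); /* -1: illegal access */
    return 0;
}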

➢ Segmentation

• Basic method

✓ Users prefer to view memory as a collection of variable-sized segments, with no


ordering among segments as shown in below figure.


✓ Segmentation is a memory-management scheme that supports the user view of


memory.
✓ A logical address space is a collection of segments. Each segment has a name and a
length. An address specifies both the segment name and the offset within the
segment.
✓ The segments are numbered and are referred to by a segment number. So a logical
address consists of <segment number, offset>.

• Hardware

✓ The segment table maps the two-dimensional user-defined addresses into one-dimensional
physical addresses.
✓ Each entry in the segment table has a segment base and segment limit.
✓ The segment base contains the starting physical address where the segment
resides and limit specifies the length of the segment.
✓ The use of segment table is shown in the below figure.


✓ Logical address consists of two parts, segment number s and an offset d.


✓ The segment number is used as an index into the segment table. The offset must be
between 0 and the segment limit; if it is not, an error is reported to the OS.
✓ If legal the offset is added to the base to generate the actual physical address.
✓ The segment table is an array of base-limit register pairs.

✓ For example, consider the below figure. We have five segments numbered from
0 through 4. Segment 2 is 400 bytes long and begins at location 4300. Thus, a
reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A
reference to byte 852 of segment 3 is mapped to 3200 (the base of segment 3) + 852
= 4052. A reference to byte 1222 of segment 0 would result in a trap to the
operating system, as this segment is only 1,000 bytes long.
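✓ The translation can be sketched in C as follows (base/limit values for segments
2 and 3 and the length of segment 0 are taken from the example; the remaining
values are assumed for completeness):

#include <stdio.h>

struct seg { int base; int limit; };

struct seg seg_table[5] = {
    {1400, 1000}, /* segment 0: 1,000 bytes long (base assumed) */
    {6300,  400}, /* segment 1 (assumed)                        */
    {4300,  400}, /* segment 2: 400 bytes at 4300               */
    {3200, 1100}, /* segment 3: base 3200 (limit assumed)       */
    {4700, 1000}, /* segment 4 (assumed)                        */
};

/* Returns the physical address, or -1 to signal a trap to the OS. */
int translate(int s, int d)
{
    if (d < 0 || d >= seg_table[s].limit)
        return -1;                   /* offset beyond limit: trap */
    return seg_table[s].base + d;
}

int main(void)
{
    printf("%d\n", translate(2, 53));   /* 4353     */
    printf("%d\n", translate(3, 852));  /* 4052     */
    printf("%d\n", translate(0, 1222)); /* -1: trap */
    return 0;
}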

CHAPTER 9 VIRTUAL MEMORY MANAGEMENT


✓ Virtual memory is a technique that allows the execution of processes that are not
completely in memory.
✓ The main advantage of this scheme is that programs can be larger than physical memory.
✓ Virtual memory also allows processes to share files easily and to implement shared
memory. It also provides an efficient mechanism for process creation.
✓ But virtual memory is not easy to implement.

➢ Background


✓ An examination of real programs shows us that, in many cases, the entire program
does not need to be in physical memory to execute.
✓ Even in those cases where the entire program is needed, it may not all be needed at
the same time.
✓ The ability to execute a program that is only partially in memory would confer many
benefits,
o A program will not be limited by the amount of physical memory that is
available.
o More than one program can run at the same time which can increase the
throughput and CPU utilization.
o Less I/O is needed to swap or load user programs into memory, so
each user program could run faster.
✓ Virtual memory involves the separation of the user's logical memory from physical
memory. This separation allows an extremely large virtual memory to be provided
for programmers even when only a smaller physical memory is available, as shown in below figure.

✓ The virtual address space of a process refers to the logical (or virtual) view of how a
process is stored in memory.
✓ In the below figure, we allow the heap to grow upward in memory, as it is used for dynamic
memory allocation, and the stack to grow downward in memory through successive
function calls.
✓ The large blank space (or hole) between the heap and the stack is part of the virtual
address space but will require actual physical pages only if the heap or stack grows.
✓ Virtual address spaces that include holes are known as sparse address spaces.


✓ Virtual memory allows files and memory to be shared by two or more
processes through page sharing. This leads to the following benefits,
o System libraries can be shared by several processes through mapping of the
shared object into a virtual address space as shown in below figure.
o Virtual memory allows one process to create a region of memory that it can
share with another process as shown in below figure.
o Virtual memory can allow pages to be shared during process creation with the
fork() system call, thus speeding up process creation.

➢ Demand Paging

✓ Virtual memory is implemented using demand paging.

✓ A demand-paging system is similar to a paging system with swapping, as shown in below
figure, where processes reside in secondary memory.
✓ When we want to execute a process, we swap it into memory. Rather than swapping
the entire process into memory, we use a lazy swapper, which never swaps a page
into memory unless that page will be needed.
✓ A swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process. We thus use the term pager, rather than swapper, in
connection with demand paging.


• Basic concepts

✓ We need some form of hardware support to distinguish between the pages that are
in memory and the pages that are on the disk.
✓ The valid-invalid bit scheme can provide this. If the bit is valid, then the page is both
legal and in memory. If the bit is invalid, then either the page is not valid or it is valid
but currently on the disk.
✓ The page-table entry for a page that is brought into memory is set as valid, but the
page-table entry for a page that is not currently in memory is either simply marked
invalid or contains the address of the page on disk, as shown in below figure.
✓ Access to a page which is marked as invalid causes a page fault trap.


✓ The steps for handling a page fault are straightforward and are shown in below figure.

1. We check the internal table of the process to determine whether the reference
made is valid or invalid.


2. If invalid, terminate the process. If valid, then the page is not yet loaded and we
now page it in.
3. We find a free frame.
4. We schedule disk operation to read the desired page in to newly allocated frame.
5. When disk read is complete, we modify the internal table kept with the process to
indicate that the page is now in memory.
6. We restart the instruction which was interrupted by the trap. The process can now
access the page.
✓ In the extreme case, we can start executing a process with no pages in memory. When
the OS sets the instruction pointer to the first instruction of the process, which is on a
non-memory-resident page, the process immediately faults. After this page is brought into
memory, the process continues to execute, faulting as necessary until every page that it
needs is in memory. This scheme is known as pure demand paging: never bring a page into
memory until it is required.
✓ The hardware support for demand paging is the same as for paging and swapping.
o Page table: It has the ability to mark an entry invalid through a valid-invalid bit.
o Secondary memory: This holds the pages that are not present in main memory. It
is usually a high-speed disk. It is known as the swap device, and the section of disk used
for this purpose is known as swap space.

• Performance of demand paging

✓ Demand paging can have a significant effect on the performance of a computer
system.
✓ Let us compute the effective access time for a demand-paged memory.
✓ The memory-access time, denoted ma, ranges from 10 to 200 nanoseconds. As long
as we have no page faults, the effective access time is equal to the memory access
time.
✓ If a page fault occurs, we must first read the relevant page from disk and then access
the desired word.
✓ Let p be the probability of a page fault (0 <= p <= 1). The effective access time is then,

Effective access time = (1 - p) * ma + p * page fault time.

✓ To compute the effective access time, we must know how much time is needed to
service a page fault. A page fault causes the following sequence to occur,

1. Trap to the OS.


2. Save the user registers and process state.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the location of the page on
disk.
5. Issue a read from disk to a free frame.
a. Wait in a queue for this device until the read request is serviced.
b. Wait for the device seek and/or latency time.
c. Begin the transfer of the page to a free frame.


6. While waiting, allocate the CPU to some other user.


7. Receive an interrupt from the disk I/O subsystem.
8. Save the registers and process state for the other user.
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show that the desired page is now in
memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, then resume the
interrupted instruction.

✓ The three major components of the page-fault service time,


1. Service the page-fault interrupts.
2. Read in the page.
3. Restart the process.

✓ With an average page-fault service time of 8 milliseconds and a memory-access time
of 200 nanoseconds, the effective access time in nanoseconds is,
effective access time = (1 - p) * 200 + p * 8,000,000
                      = 200 + 7,999,800 * p.
✓ The effective access time is directly proportional to the page-fault rate.
✓ If one access out of 1,000 causes a page fault, the effective access time is 8.2
microseconds. The computer will be slowed down by a factor of 40 because of
demand paging. If we want performance degradation to be less than 10 percent, we
need,
220 > 200 + 7,999,800 * p
20 > 7,999,800 * p
p < 0.0000025
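✓ A small C sketch of this calculation (constants taken from the example above):

#include <stdio.h>

int main(void)
{
    double ma    = 200.0;       /* memory-access time, ns      */
    double fault = 8000000.0;   /* page-fault service time, ns */

    double p   = 1.0 / 1000.0;  /* one fault per 1,000 accesses */
    double eat = (1.0 - p) * ma + p * fault;
    printf("EAT = %.1f ns\n", eat);         /* about 8199.8 ns, i.e. 8.2 us */

    /* Bound on p for less than 10 percent degradation */
    printf("p < %.7f\n", 20.0 / 7999800.0); /* about 0.0000025 */
    return 0;
}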

➢ Copy-on-write

✓ The copy-on-write technique allows both the parent and the child processes to share the
same pages. These pages are marked as copy-on-write pages, i.e., if either process
writes to a shared page, a copy of the shared page is created.
✓ Copy-on-write is illustrated in below figure 1 and figure 2, which shows the contents
of the physical memory before and after process 1 modifies page C.


figure-1

figure-2

✓ For example: If a child process tries to modify a page containing portions of the stack, the
OS recognizes it as a copy-on-write page and creates a copy of this page, mapping
it into the address space of the child process. So the child process will modify its
copied page and not the page belonging to the parent.
✓ The new pages are obtained from a pool of free pages. Operating systems allocate
these pages using a technique known as zero-fill-on-demand. Zero-fill-on-demand
pages are zeroed out before being allocated, thus erasing their previous
contents.

➢ Page Replacement

✓ If the total memory requirement exceeds physical memory, the page replacement policy
deals with replacing (removing) pages from memory to free frames for bringing in the
new pages.
✓ While a user process is executing, a page fault occurs. The operating system
determines where the desired page is residing on the disk, but then finds that there are
no free frames on the free-frame list, as shown in below figure.


✓ The OS has several options: it could terminate the user process, or it could instead swap
out a process, freeing all its frames and thus reducing the level of multiprogramming.

• Basic Page Replacement

✓ If no frame is free, we find one that is not currently being used and free it. We
can free a frame by writing its contents to swap space and changing the page table
to indicate that the page is no longer in memory, as shown in below figure. We
can now use the freed frame to hold the page for which the process faulted.
The page-fault service routine is modified to include page replacement,
1. Find the location of the desired page on the disk.
2. Find a free frame
a. If there is a free frame, use it.
b. Otherwise, use a replacement algorithm to select a victim.
c. Write the victim page to the disk; change the page and frame tables
accordingly.
3. Read the desired page into the free frame and change the page and frame
tables.
4. Restart the user process.


✓ The page that is swapped out of physical memory is called the victim page.
✓ If no frames are free, two page transfers (one out and one in) are required.
This doubles the page-fault service time and increases the effective access
time.
✓ This overhead can be reduced by using a modify (dirty) bit. Each page or frame
may have a modify (dirty) bit associated with it. The modify bit for a page is set by
the hardware whenever any word or byte in the page is written into, indicating
that the page has been modified.
✓ When we select a page for replacement, we check its modify bit. If the bit is
set, the page has been modified since it was read in from the disk, and the page
must be written back to the disk.
✓ If the bit is not set, the page has not been modified since it was read into
memory. Therefore, we can avoid writing the memory page to the disk, as it is
already there.
✓ We must solve two major problems to implement demand paging: we must
develop a frame-allocation algorithm and a page-replacement algorithm. If we
have multiple processes in memory, we must decide how many frames to allocate
to each process, and, when page replacement is needed, we must select the frames
that are to be replaced.
✓ There are many different page-replacement algorithms. We want the one with the
lowest page-fault rate.
✓ An algorithm is evaluated by running it on a particular string of
memory references, called a reference string, and computing the number of page
faults.

• FIFO page replacement algorithm

✓ This is the simplest page replacement algorithm. A FIFO replacement algorithm
associates with each page the time when that page was brought into memory.
✓ When a page is to be replaced, the oldest one is selected.


✓ We replace the page at the head of the queue. When a page is brought into
memory, we insert it at the tail of the queue.
✓ For example, consider the following reference string with 3 frames, initially
empty.

✓ The first three references (7, 0, 1) cause page faults and are brought into the
empty frames.
✓ The next reference 2 replaces page 7 because the page 7 was brought in first.
✓ Since 0 is the next reference and 0 is already in memory, we have no page fault for
this reference.
✓ The next reference, 3, replaces page 0; but then the next reference, to 0, causes a page
fault. Page 1 is then replaced by page 0.
✓ This continues until the end of the string, as shown in below figure; there are 15
faults altogether.

✓ For some page replacement algorithms, the page-fault rate may increase as the
number of allocated frames increases. This is called Belady's Anomaly. The FIFO
replacement algorithm may face this problem.
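✓ FIFO replacement is easy to simulate; the following C sketch counts the faults,
assuming the reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 from the
example above (the circular buffer replaces pages in the order they were brought
in, which is exactly FIFO):

#include <stdio.h>

int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16], head = 0, used = 0, faults = 0; /* assumes nframes <= 16 */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes)
            frames[used++] = refs[i];   /* fill a free frame */
        else {
            frames[head] = refs[i];     /* evict the oldest  */
            head = (head + 1) % nframes;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("%d faults\n", fifo_faults(refs, 20, 3)); /* prints 15 faults */
    return 0;
}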

• Optimal page replacement algorithm

✓ The optimal page replacement algorithm is mainly intended to solve the problem of
Belady's Anomaly.
✓ The optimal page replacement algorithm has the lowest page-fault rate of all
algorithms.
✓ An optimal page replacement algorithm exists and has been called OPT.
Its working is simple: "Replace the page that will not be used for the longest
period of time."
✓ Example: consider the following reference string


✓ The first three references cause faults that fill the three empty frames.
✓ The reference to page 2 replaces page 7, because page 7 will not be used until
reference 18.
✓ Page 0 will be used at reference 5, and page 1 at reference 14.
✓ With only 9 page faults, optimal replacement is much better than FIFO, which
had 15 faults.
✓ This algorithm is difficult to implement because it requires future knowledge of
the reference string.

• Least Recently Used (LRU) page replacement algorithm

✓ If the optimal algorithm is not feasible, an approximation to the optimal algorithm
is possible.
✓ The main difference between OPT and FIFO is that
FIFO uses the time when a page was brought in, whereas OPT uses the time
when a page is to be used.
✓ The LRU algorithm replaces the page that has not been used for the longest period
of time.
✓ LRU associates with each page the time of that page's last use.
✓ This strategy is the optimal page replacement algorithm looking backward in time,
rather than forward.
✓ Ex: consider the following reference string

✓ The first 5 faults are similar to optimal replacement.


✓ When the reference to page 4 occurs, LRU sees that, of the three pages in frames,
page 2 was used least recently. The most recently used page is page 0, and just before
that, page 3 was used.
✓ The LRU policy is often used as a page replacement algorithm and is considered to
be good. The main problem is how to implement LRU; it requires substantial
additional hardware assistance.
✓ Two implementations are possible:

o Counters: In this approach, we associate with each page-table entry a time-of-use field
and add to the CPU a logical clock or counter. The clock is incremented on
every memory reference. Whenever a reference to a page is made, the contents of
the clock register are copied to the time-of-use field in the page-table entry for
that page. In this way, we always have the time of the last reference to each page, and we
replace the page with the smallest time value. The times must also be maintained
when page tables are changed.

o Stack: Another approach to implementing LRU replacement is to keep a stack
of page numbers: whenever a page is referenced, it is removed from the stack and
put on top. In this way, the top of the stack is always the most
recently used page and the bottom is the least recently used page. Since
entries are removed from the middle of the stack, it is best implemented as a
doubly linked list with head and tail pointers. Neither optimal replacement nor LRU
replacement suffers from Belady's Anomaly; both are called stack
algorithms.
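✓ The stack scheme can be sketched in C as follows (an array kept in recency order
stands in for the doubly linked list; the top of the stack is index 0, and the
reference string is the same one used in the FIFO example, which yields 12 faults):

#include <stdio.h>

int lru_faults(const int *refs, int n, int nframes)
{
    int stack[16], used = 0, faults = 0;  /* assumes nframes <= 16 */
    for (int i = 0; i < n; i++) {
        int pos = -1;
        for (int j = 0; j < used; j++)
            if (stack[j] == refs[i]) { pos = j; break; }
        if (pos < 0) {                    /* page fault               */
            faults++;
            if (used < nframes) used++;   /* take a free frame        */
            pos = used - 1;               /* bottom of stack = victim */
        }
        for (int j = pos; j > 0; j--)     /* move the page to the top */
            stack[j] = stack[j - 1];
        stack[0] = refs[i];
    }
    return faults;
}

int main(void)
{
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("%d faults\n", lru_faults(refs, 20, 3)); /* prints 12 faults */
    return 0;
}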

• LRU-Approximation page replacement algorithms

✓ A true LRU page replacement algorithm must update the page-removal status
information after every page reference; if this updating is done by software, the cost
increases.
✓ A full hardware LRU mechanism tends to degrade execution performance and at the
same time substantially increases cost. For this reason, simple and
efficient algorithms that approximate LRU have been developed. With
hardware support, a reference bit is used: a reference bit is associated with each
page and is automatically set to 1 by the hardware whenever the
page is referenced. The single reference bit per page can be used to approximate
LRU removal.


✓ The page-removal software periodically resets the reference bits to 0, while the
execution of the user's job causes some reference bits to be set to 1.
✓ If a reference bit is 0, then the page has not been referenced since the last time
the reference bits were reset to 0.

▪ Additional-Reference-Bits Algorithm

✓ We can gain additional ordering information by recording the reference bits at
regular intervals. We can keep an 8-bit byte for each page in a table in
memory.
✓ At regular intervals (say, every 100 milliseconds), a timer interrupt transfers
control to the operating system. The operating system shifts the reference bit
for each page into the high-order bit of its 8-bit byte, shifting the other bits
right by 1 bit and discarding the low-order bit. These 8-bit shift registers
contain the history of page use for the last eight time periods.
✓ For example, if the shift register contains 00000000, then the page has not been
used for eight time periods; a page
that is used at least once in each period has a shift register value of 11111111.
A page with a history register value of 11000100 has been used more recently
than one with a value of 01110111.
✓ If we interpret these 8-bit bytes as unsigned integers, the page with the lowest
number is the LRU page, and it can be replaced. Notice that the numbers are
not guaranteed to be unique, however. We can either replace (swap out) all
pages with the smallest value or use the FIFO method to choose among them.
✓ The number of bits of history included in the shift register can be varied, of
course, and is selected (depending on the hardware available) to make the
updating as fast as possible. In the extreme case, the number can be reduced to
zero, leaving only the reference bit itself. This algorithm is called the Second-
Chance Algorithm.

▪ Second-Chance Algorithm

✓ The basic algorithm of second-chance replacement is a FIFO replacement
algorithm. When a page has been selected, however, we inspect its reference
bit. If the value is 0, we proceed to replace this page; but if the reference bit
is set to 1, we give the page a second chance, clear its reference bit, and move on
to select the next FIFO page.
✓ One common way to implement the second-chance algorithm is as a circular
queue of pages (the clock algorithm), with a pointer indicating which page is to be
replaced next.
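✓ A minimal C sketch of the clock version of second chance (pages and reference
bits are sample values; the hand advances exactly as described above):

#include <stdio.h>

#define NFRAMES 4

int pages[NFRAMES]   = {3, 7, 2, 9}; /* sample resident pages   */
int ref_bit[NFRAMES] = {1, 0, 1, 1}; /* sample reference bits   */
int hand = 0;                        /* clock hand (FIFO order) */

/* Returns the frame whose page should be replaced. */
int second_chance_victim(void)
{
    for (;;) {
        if (ref_bit[hand] == 0) {    /* no second chance left    */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;           /* used recently: clear bit */
        hand = (hand + 1) % NFRAMES; /* and advance the hand     */
    }
}

int main(void)
{
    int v = second_chance_victim();
    printf("replace frame %d (page %d)\n", v, pages[v]); /* frame 1, page 7 */
    return 0;
}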


• Count Based Page Replacement

✓ There are many other algorithms that can be used for page replacement; we can
keep a counter of the number of references that have been made to each page.

o LFU (least frequently used)

✓ This causes the page with the smallest count to be replaced. The reason for
this selection is that an actively used page should have a large reference
count.
✓ This algorithm suffers in the situation where a page is used heavily
during the initial phase of a process but is never used again. Since it was
used heavily, it has a large count and remains in memory even though it is
no longer needed.
o Most Frequently Used (MFU)

✓ This is based on the principle that the page with the smallest count was
probably just brought in and has yet to be used.

• Allocation of Frames


✓ The allocation policy in a virtual memory system controls the operating system's decision
regarding the amount of real memory to be allocated to each active process.
✓ In a paging system, if more real pages are allocated, the page-fault
frequency is reduced and turnaround time and throughput improve.
✓ If too few pages are allocated to a process, its page-fault frequency and turnaround
time may deteriorate to unacceptable levels.
✓ The minimum number of frames per process is defined by the architecture, while the
maximum is defined by the amount of available physical memory. One simple scheme
is to split m frames equally among n processes; this is called equal allocation.
✓ With multiple processes competing for frames, we can classify page replacement
into two broad categories
o Local Replacement: requires that each process select frames from only its
own set of allocated frames.
o Global Replacement: allows a process to select a frame from the set of all
frames; even if that frame is currently allocated to some other process, one
process can take a frame from another.
✓ With local replacement, the number of frames allocated to a process does not change,
but with global replacement a process may gain or lose frames. Global
replacement generally results in greater system throughput.

Other considerations
There are many other considerations in the selection of a replacement algorithm and allocation
policy.

1) Prepaging: This is an attempt to prevent the high level of initial paging. The strategy is to
bring into memory at one time all the pages that will be needed.
2) TLB Reach: The TLB reach refers to the amount of memory accessible from the TLB
and is simply the number of entries multiplied by the page size.
3) Page Size: the following parameters are considered
a) Page size is always a power of 2 (commonly from 512 bytes to 16 KB).
b) Internal fragmentation is reduced by a small page size.
c) A large page size reduces the number of pages needed.
4) Inverted Page Table: This reduces the amount of primary memory that is needed to
track virtual-to-physical address translations.
5) Program Structure: Careful selection of data structures can increase locality and
hence lower the page-fault rate and the number of pages in the working set.
6) Real-time Processing: Real-time systems almost never have virtual memory. Virtual
memory is the antithesis of real-time computing, because it can introduce unexpected
long-term delays in the execution of a process.

➢ Thrashing
➔ If the number of frames allocated to a low-priority process falls below the minimum
number required by the computer architecture, then we must suspend that process's execution.
➔ A process is thrashing if it is spending more time paging than executing.


➔ If a process does not have enough frames, it will quickly page-fault.
It must then replace some page that is not currently in use; consequently, it
quickly faults again and again. The process continues to fault, replacing pages
which it then faults on and brings back in. This high paging activity is called thrashing:
the phenomenon of excessively moving pages back and forth between memory and
secondary storage.

Cause of Thrashing
• Thrashing results in severe performance problems.
• The operating system monitors CPU utilization; if it is low, we increase the degree
of multiprogramming by introducing a new process to the system.
• A global page replacement algorithm replaces pages with no regard to the
process to which they belong.
The figure shows thrashing:
➔ As the degree of multiprogramming increases, CPU utilization also increases,
though more slowly, until a maximum is reached. If the degree of
multiprogramming is increased further, thrashing sets in and CPU utilization
drops sharply.

➔ At this point, to increase CPU utilization and stop thrashing, we must
decrease the degree of multiprogramming. We can limit the effects of
thrashing by using a local replacement algorithm. To prevent thrashing,
we must provide a process with as many frames as it needs.

Locality of Reference:
• As a process executes, it moves from locality to locality.
• A locality is a set of pages that are actively used together.
• A program may consist of several different localities, which may overlap.
• Locality is caused by loops in code that tend to reference arrays and other data structures
by indices.
The ordered list of page numbers accessed by a program is called the reference string.
Locality is of two types,
1) spatial locality
2) temporal locality

Working set model

The working-set model algorithm uses the current memory requirements to determine the number
of page frames to allocate to the process. An informal definition is


“the collection of pages that a process is working with, and which must be resident if the process
is to avoid thrashing.”
The idea is to use the recent needs of a process to predict its future needs.
The working set is an approximation of the program's locality.
Ex: given a sequence of memory references, if the working-set window size is set to 10 memory
references, then the working set at time t1 is {1,2,5,6,7} and at t2 it has changed to {3,4}.
• At any given time, all pages referenced by a process in its last 4 seconds of execution are
considered to comprise its working set.
• A process will never execute until its working set is resident in main memory.
• Pages outside the working set can be discarded at any moment.
Working sets are not enough by themselves, and we must also introduce a balance set,
a) If the sum of the working sets of all runnable processes is greater than the size of
memory, then refuse (suspend) some process for a while.
b) Divide the runnable processes into two groups, active and inactive. The collection of active
processes is called the balance set. When a process is made active, its working set is loaded.
c) Some algorithm must be provided for moving processes into and out of the balance set.
As a working set changes, a corresponding change is made to the balance set. The working-set
model prevents thrashing while keeping the degree of multiprogramming as high as possible.
Thus it optimizes CPU utilization. Its main disadvantage is the cost of keeping track of the
working set.

Solved Exercises (VTU QP problems)


1. Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would the
first-fit, best-fit, and worst-fit algorithms place processes of 212K, 417K, 112K, and 426K (in
order)? Which algorithm makes the most efficient use of memory? (June 2013, Jan 2015)


1a. Solution:

Partition    First Fit     Best Fit    Worst Fit
100K         -             -           -
500K         212K          417K        417K
200K         112K          112K        -
300K         -             212K        112K
600K         417K          426K        212K
             426K waits    all fit     426K waits

Best Fit is the most efficient algorithm here, since it is the only one that places all four
processes.

2. For the following page reference string, calculate the page faults that occur using FIFO and
LRU for 3 and 4 page frames respectively: 5,4,2,1,4,3,5,4,3,2,1,5. (10 marks) Jan 2015

Solution:
(F marks a page fault; a dot marks a hit.)

Reference string:   5   4   2   1   4   3   5   4   3   2   1   5

FIFO, 3 frames:     F   F   F   F   .   F   F   F   .   F   F   F   = 10 page faults

FIFO, 4 frames:     F   F   F   F   .   F   F   F   .   F   F   .   = 9 page faults

LRU, 3 frames:      F   F   F   F   .   F   F   .   .   F   F   F   = 9 page faults

LRU, 4 frames:      F   F   F   F   .   F   F   .   .   F   F   F   = 9 page faults

Question Bank
1. Write a note on virtual memory.

2. Explain demand paging.


3. Write a note on thrashing.
4. What is Belady’s anomaly? Describe the working set model.
5. Explain the need of page replacement algorithms.
6. What is frame allocation policy?

7. List the objectives of file management.

8. Explain the file system structure.

9. How is file system implemented?

10. Write short notes on partitions and mounting.

11. Explain free space management.


12. Explain bit vector, linked list, grouping and counting methods.

13. What are the different ways to implement the directory?

14. Explain contiguous allocation method.

VTU Question Paper Questions


1 What do you mean by fragmentation? Explain difference between internal and external
fragmentation. (6) Dec 07/Jan 08

2. For the page reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6, how many page faults
would occur for the LRU and optimal algorithms, assuming 2 and 6 frames? (10) Dec 07/Jan 08

3. What is the cause of thrashing? How does system detect thrashing? (4) Dec 07/Jan 08, Dec
08/Jan 09, Dec 09/ Jan 10, June 2011.

4. Differentiate between internal and external fragmentation. How are they overcome? (4)

Dec 08/Jan 09.

5. With diagram, discuss steps involved in handling a page fault. (6) Dec 09/ Jan 10

6. Mention the problem with the simple paging scheme. How is a TLB used to solve this problem?
Explain with a supporting hardware diagram and example. (Dec 2012)

7. Prove that Belady's anomaly exists for the reference string 1,2,3,4,1,2,5,1,2,3,4,5 using the
FIFO algorithm when the numbers of frames used are 3 and 4. (8 marks) Jan 2015

8. Draw and explain the multistep processing of a user program. (8 marks)Jan 2015

9. What is locality of reference? Differentiate between paging and segmentation. (5 marks)

10. What is a file? Describe different access methods on files. (4) Dec 08/Jan 09/Dec 2012

11. What is file mounting? Explain. (4) Dec 08/Jan 09

12. Draw a neat diagram and explain fixed file allocation. Is FAT a linked allocation? (9)
Dec 08/Jan 09

13. Explain following: file types, file operations, file attributes (12) Dec 09/ Jan 10

14. Explain methods to implement directories (8) Dec 09/ Jan 10/Jun 11

15. What is free space list? With example, explain any two methods to implement free space list.
(8) Dec2010

16. What are major methods to allocate disk space? Explain each with examples. (12) Dec 2010


17. Explain different file access methods. (5) June 2011

18. Explain various directory structures (7) June 2011/Dec 2012

19. Explain different disk space allocation methods with example. (8) June 2011, 2014

20. Explain the various storage mechanisms available to store files, with neat diagram.(8)
Jan 2015.

21. What is file? Explain in detail different allocation methods.(8) June/July 2016

22. What are directories? List different types of directory structures with examples. Mention their
advantages and disadvantages. (8) June/July 2016

23. Explain how free space is managed.(4) June/July 2016

24. Explain briefly the various operations performed on files. (6) June/July 2017

25. Explain the various access methods of files. (6) June/July 2017

26. Explain various allocation methods in implementing file systems. (8) June/July 2017

27. List any five typical file attributes and any five file operations indicating their purpose in one
line each. (10)Dec2016/Jan2017

28. Briefly explain the methods of keeping track of free space on disks.(10)Dec2016/Jan2017

29. What is the principle behind paging? Explain its operation, clearly indicating how
logical addresses are converted to physical addresses. (10) Dec2016/Jan2017

30. A hypothetical main memory can store only 3 frames simultaneously. The sequence which
the pages will be required is given below:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 (Twenty operations).

Indicate the sequence in which the three frames will be filled in i) FIFO, ii) Optimal page
replacement, and iii) Least recently used methods of page replacement. Indicate the number
of page faults in each case. (10) Dec2016/Jan2017
