OS Modulewise answers
1) Define operating systems. Explain the dual-mode operating system with a neat
diagram.
ANS: Operating system:
A program that acts as an intermediary between a user of a computer and the
computer hardware.
An operating system is a collection of system programs that together control the
operations of a computer system.
Some examples of operating systems are Windows, Linux, macOS, and Ubuntu.
The dual-mode approach taken by most computer systems is to provide hardware support
that allows us to differentiate among various modes of execution.
At the very least, we need two separate modes of operation: user mode and kernel
mode.
A bit, called the mode bit, is added to the hardware of the computer to indicate the
current mode: kernel (0) or user (1).
With the mode bit, we are able to distinguish between a task that is executed on
behalf of the operating system and one that is executed on behalf of the user.
When the computer system is executing on behalf of a user application, the system is
in user mode.
However, when a user application requests a service from the operating system (via a
system call), it must transition from user to kernel mode to fulfill the request.
This is shown in the figure below. As we shall see, this architectural enhancement is
useful for many other aspects of system operation as well.
At system boot time, the hardware starts in kernel mode. The operating system is
then loaded and starts user applications in user mode.
Whenever a trap or interrupt occurs, the hardware switches from user mode to
kernel mode. Thus, whenever the operating system gains control of the computer, it
is in kernel mode.
The system always switches to user mode (by setting the mode bit to 1) before
passing control to a user program.
The dual mode of operation provides us with the means for protecting the operating
system from errant users, and errant users from one another.
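The mode-bit behaviour described above can be sketched as a toy model (a simplified illustration of the idea, not real hardware):

```python
# Toy model of the dual-mode mode bit: kernel (0) or user (1).
KERNEL, USER = 0, 1

class Machine:
    def __init__(self):
        self.mode = KERNEL        # at boot, the hardware starts in kernel mode

    def run_user_program(self):
        self.mode = USER          # OS sets the mode bit to 1 before passing control

    def trap(self):
        self.mode = KERNEL        # any trap or interrupt switches to kernel mode

m = Machine()
m.run_user_program()
assert m.mode == USER             # user application executing
m.trap()                          # e.g., a system call is made
assert m.mode == KERNEL           # request is serviced in kernel mode
```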
2) Define system program? Explain multiprogramming and time-sharing systems.
ANS: A system program is one of a collection of programs that provide a convenient
environment for program development and execution, acting as a bridge between user
applications and the operating system.
Multiprogramming is the fast switching of the CPU between several programs. A program is
generally made up of several tasks. A task ends with some request to move data, which
would require some I/O operations to be executed. Multiprogramming is commonly used to
keep the CPU busy while the currently running program is doing I/O operations. Compared
to other executing instructions, I/O operations are extremely slow.
Even if a program contains a very small number of I/O operations, most of the time taken for
the program is spent on those I/O operations. Therefore, using this idle time and allowing
another program to utilize the CPU will increase CPU utilization. Multiprogramming was
initially developed in the late 1950s as a feature of operating systems and was first used in
mainframe computing. With the introduction of virtual memory and virtual machine
technologies, the use of multiprogramming was enhanced. It has no fixed time slice for
processes. Its main purpose is resource utilization.
Advantages of Multiprogramming OS
o No CPU idle time
o A multiprogramming system finishes work faster, as several tasks make progress concurrently.
o Shorter response time
o Maximizes total job throughput of a computer
o Increases resource utilization
Disadvantages of Multiprogramming OS
Time-sharing is a technique that enables many people, located at various terminals, to use a
particular computer system simultaneously. Time-sharing is the logical extension of
multiprogramming. In a time-sharing operating system, many processes are allocated
computer resources in respective time slots. The processor's time is shared among
multiple users, which is why it is called a time-sharing operating system. It has a fixed time
slice for the different processes. Its main purpose is interactive response time.
The CPU executes multiple jobs by switching between them, but the switches occur so
frequently that the user can receive an immediate response. The operating system uses
CPU scheduling and multiprogramming to provide each user with a small amount of time.
Computer systems that were designed primarily as batch systems have been modified to
time-sharing systems.
The main difference between multiprogrammed batch systems and time-sharing systems
is that in multiprogrammed batch systems the objective is to maximize processor use,
whereas in time-sharing systems the objective is to minimize response time.
Advantages of Time-Sharing OS
o Improves response time
Disadvantages of Time-Sharing OS
o An issue with the security and integrity of user programs and data
3) Distinguish between the following terms. (i) Multiprogramming and Multitasking (ii)
Multiprocessor System and Clustered System.
(i) Multiprogramming vs. Multitasking
Definition: Multiprogramming runs multiple programs concurrently by switching between
them to maximize CPU utilization; multitasking allows multiple tasks (processes or
threads) to execute seemingly simultaneously by rapidly switching between them.
(ii) Multiprocessor System vs. Clustered System
Definition: A multiprocessor system is a single system with multiple CPUs (processors)
sharing memory and other resources; a clustered system is a group of independent
computers (nodes) working together to perform a task.
Architecture: Multiprocessor systems are tightly coupled and share memory, I/O, and
clock; clustered systems are loosely coupled, connected via a network.
Fault tolerance: A multiprocessor system has lower fault tolerance, since a failure in
shared components affects the whole system; a cluster has higher fault tolerance, as
failure in one node may not affect the entire cluster.
ANS:
Symmetric Multiprocessing (SMP):
Definition: All processors share the same memory and have equal access to I/O devices.
Each processor runs tasks independently, but they communicate through shared
memory.
Features:
Advantages:
o High reliability; if one processor fails, others can continue working.
Disadvantages:
Asymmetric Multiprocessing:
Definition: A master-slave configuration where one processor (master) controls the
system, and the other processors (slaves) handle assigned tasks.
Features:
Advantages:
Disadvantages:
Example: Embedded systems, or systems with a dedicated processor for graphics or I/O.
Types of Clustering
1. High-Availability (Failover) Clustering:
Purpose: To ensure continuous availability of services by providing failover capabilities.
Features:
Applications: Online banking, stock trading platforms, or web servers.
2. Load-Balancing Clustering:
Features:
Applications: Web servers handling high traffic or e-commerce platforms.
3. High-Performance Computing Clustering:
Features:
o Nodes work together to perform a single task, splitting the computation.
Example: Supercomputers like IBM's Blue Gene or clusters using MPI (Message Passing
Interface).
4. Storage Clustering:
Features:
Applications: Data centers, cloud storage, and big data analytics.
5) Differentiate client-server computing and peer-to-peer computing.
ANS:
Scalability: Client-server is limited by the server's capacity; adding more clients may
degrade performance. Peer-to-peer is highly scalable; more peers can improve resource
availability and sharing.
Reliability: In client-server, a server failure can disrupt the entire system unless failover
mechanisms are in place. Peer-to-peer is more fault-tolerant; failure of one or a few
peers doesn't disrupt the system.
Use cases: Client-server suits applications requiring centralized control, such as banking
systems or enterprise software. Peer-to-peer suits file sharing, distributed computing,
and decentralized networks.
ANS:
Operating systems can be explored from two viewpoints: the user and the system.
User View: The user's view of the computer varies according to the interface being used.
Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and
system unit. Such a system is designed for one user to monopolize its resources. The goal is
to maximize the work (or play) that the user is performing. In this case, the operating system
is designed mostly for ease of use, with some attention paid to performance and none paid
to resource utilization, that is, how various hardware and software resources are shared.
Performance is, of course, important to the user; but rather than resource utilization, such
systems are optimized for the single-user experience.
System View: From the computer's point of view, the operating system is the program most
intimately involved with the hardware. In this context, we can view an operating system as a
resource allocator. A computer system has many resources that may be required to solve
a problem: CPU time, memory space, file-storage space, I/O devices, and so on. The
operating system acts as the manager of these resources.
A control program manages the execution of user programs to prevent errors and improper
use of the computer. It is especially concerned with the operation and control of I/O
devices.
The following are six services provided by operating systems for the convenience of
users.
1. User interface: Almost all operating systems have a user interface (UI). This interface can
take several forms: one is a command-line interface (CLI), and another is a graphical user
interface (GUI).
2. Program execution: The purpose of computer systems is to allow the user to execute
programs, so the operating system provides an environment where the user can
conveniently run programs.
3. I/O operations: Each program requires input and produces output. This involves the
use of I/O, so by providing I/O the operating system makes it convenient for users to
run programs.
4. File-system manipulation: The output of a program may need to be written into new files,
or input taken from some files. The operating system provides this service. Some
programs also include permissions management to allow or deny access to files or
directories based on file ownership.
5. Communications: Processes need to communicate with each other to exchange
information during execution. The communication may be between processes running on
the same computer or on different computers. Communication can occur in two ways:
(i) shared memory or (ii) message passing.
6. Error detection: An error in one part of the system may cause malfunctioning of the
complete system. To avoid such a situation, the operating system constantly monitors the
system to detect errors. This relieves the user of the worry that errors might propagate
to various parts of the system and cause malfunctions.
The following are three services provided by operating systems for ensuring the efficient
operation of the system itself.
ANS:
A virtual machine takes the layered approach to its logical conclusion. It treats
hardware and the operating system kernel as though they were all hardware.
A virtual machine provides an interface identical to the underlying bare hardware.
The operating system creates the illusion of multiple processes, each executing on
its own processor with its own (virtual) memory.
The resources of the physical computer are shared to create the virtual machines:
o CPU scheduling can create the appearance that users have their own processor.
o Spooling and a file system can provide virtual card readers and virtual line printers.
o A normal user time-sharing terminal serves as the virtual machine operator's
console.
ANS:
System calls provide an interface between a process and the operating system.
System calls allow user-level processes to request services from the operating
system that the process itself is not allowed to perform.
For example, for I/O a process makes a system call telling the operating system to
read or write a particular area, and this request is satisfied by the operating system.
System calls can be grouped roughly into five major categories: process control, file
manipulation, device manipulation, information maintenance, and communications.
Process control: A running program needs to be able to halt its execution either
normally (end) or abnormally (abort). If a system call is made to terminate the
currently running program abnormally, or if the program runs into a problem and
causes an error trap, a dump of memory is sometimes taken and an error message
generated.
File management: We first need to be able to create and delete files. Either system
call requires the name of the file and perhaps some of the file's attributes. Once the
file is created, we need to open it and use it. We may also read, write, or
reposition (rewinding or skipping to the end of the file, for example). Finally, we
need to close the file, indicating that we are no longer using it.
Information maintenance: Many system calls exist simply for the purpose of
transferring information between the user program and the operating system. For
example, most systems have a system call to return the current time and date. Other
system calls may return information about the system, such as the number of
current users, the version number of the operating system, the amount of free
memory or disk space, and so on.
Communication: There are two common models of interprocess communication: the
message-passing model and the shared-memory model. In the message-passing
model, the communicating processes exchange messages with one another to
transfer information. Messages can be exchanged between the processes either
directly or indirectly through a common mailbox.
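As an illustration, Python's os module exposes thin wrappers over several of these system-call categories; the sketch below touches file manipulation and information maintenance:

```python
import os
import time
import tempfile

# Information maintenance: ask the OS for the process ID and current time.
pid = os.getpid()                 # wraps the getpid() system call
now = time.time()                 # time of day, obtained from the OS

# File manipulation: create, write, reposition, read, close, and delete.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")            # write() system call
os.lseek(fd, 0, os.SEEK_SET)      # reposition to the start of the file
data = os.read(fd, 5)             # read() system call
os.close(fd)                      # close() system call
os.unlink(path)                   # delete the file
assert data == b"hello"
```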
MODULE 2
1) What is a process? Explain the different states of a process with a state diagram and
control block.
ANS: PROCESS:
A process is a program in execution.
Process State:
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process.
Process Control Block (PCB): Each process is represented in the operating system by a
process control block (PCB), also called a task control block. A PCB is shown in the figure below.
It contains many pieces of information associated with a specific process, including these:
Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
Accounting information
I/O status information
Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be
executed for this process.
CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information.
Accounting information: This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to this
process, a list of open files, and so on.
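A PCB can be pictured as a simple record; the sketch below mirrors the fields listed above (the field names are illustrative, not taken from any real kernel):

```python
from dataclasses import dataclass, field

# Illustrative PCB record mirroring the fields listed in the text.
@dataclass
class PCB:
    pid: int
    state: str = "new"                        # new, ready, running, waiting, ...
    program_counter: int = 0                  # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0                # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"                           # admitted by the scheduler
assert pcb.state == "ready" and pcb.program_counter == 0
```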
2) What is inter-process communication? Discuss message passing and the shared-memory
concept of IPC.
Answer:
Shared-Memory Systems
Interprocess communication using shared memory requires communicating
processes to establish a region of shared memory.
Typically, a shared-memory region resides in the address space of the process
creating the shared-memory segment.
Other processes that wish to communicate using this shared-memory segment must
attach it to their address space.
Normally, the operating system prevents one process from accessing another process's
memory; shared memory requires that two or more processes agree to remove this
restriction.
They can then exchange information by reading and writing data in the shared areas.
Two types of buffers can be used. The unbounded buffer places no practical limit on
the size of the buffer: the consumer may have to wait for new items, but the producer
can always produce new items.
The bounded buffer assumes a fixed buffer size. In this case, the consumer must
wait if the buffer is empty, and the producer must wait if the buffer is full.
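The bounded buffer can be sketched with a thread-safe queue (the capacity and item counts below are arbitrary choices):

```python
import threading
import queue

# Bounded buffer of capacity 2: the producer blocks when it is full,
# the consumer blocks when it is empty.
buf = queue.Queue(maxsize=2)
consumed = []

def producer():
    for item in range(5):
        buf.put(item)                 # blocks while the buffer is full

def consumer():
    for _ in range(5):
        consumed.append(buf.get())    # blocks while the buffer is empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
assert consumed == [0, 1, 2, 3, 4]    # all items arrive, in order
```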
Message-Passing Systems
Message passing provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space. It is particularly
useful in a distributed environment, where the communicating processes may reside
on different computers connected by a network.
A message-passing facility provides at least two operations: send(message) and
receive(message).
Messages sent by a process can be of either fixed or variable size. If only fixed-size
messages can be sent, the system-level implementation is straightforward; this
restriction, however, makes the task of programming more difficult.
If processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them.
Here are several methods for logically implementing a link and the send()/receive()
operations:
• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering
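A small send()/receive() sketch over a pipe; for portability the two endpoints here run as threads, though multiprocessing.Pipe also works across real processes:

```python
import multiprocessing as mp
import threading

# Direct communication over a duplex link: each side can send() and recv().
parent_end, child_end = mp.Pipe()

def worker():
    msg = child_end.recv()        # receive(message): blocks until a message arrives
    child_end.send(msg.upper())   # send(message): reply on the same link

t = threading.Thread(target=worker)
t.start()
parent_end.send("ping")           # send(message)
reply = parent_end.recv()         # receive(message)
t.join()
assert reply == "PING"
```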
3) What is a multithreaded process? Explain four benefits of multithreaded programming.
ANS:
Multithreading is a crucial concept in modern computing that allows multiple threads to
execute concurrently, enabling more efficient utilization of system resources.
1. Responsiveness
Multithreading in an interactive application may allow a program to continue running even if
part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to
the user. In a non-multithreaded environment, a server listens on a port for a request; when
the request comes, it processes the request and only then resumes listening for another
request.
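The responsiveness benefit can be sketched as follows: the main thread keeps making progress while a worker performs a lengthy (simulated) blocking operation:

```python
import threading
import time

result = {}

def slow_io():
    time.sleep(0.1)               # stands in for a lengthy blocking I/O operation
    result["data"] = "done"

t = threading.Thread(target=slow_io)
t.start()

progress = 0
while t.is_alive():
    progress += 1                 # the main thread stays responsive meanwhile
t.join()

assert result["data"] == "done"
assert progress > 0               # work happened while the worker was blocked
```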
2. Resource Sharing
Processes may share resources only through techniques such as message passing and
shared memory, and such techniques must be explicitly arranged by the programmer.
Threads, however, share the memory and the resources of the process to which they belong
by default. The benefit of sharing code and data is that it allows an application to have
several threads of activity within the same address space.
3. Economy
Allocating memory and resources for process creation is a costly job in terms of time and
space. Since threads share the memory of the process they belong to, it is more economical
to create and context-switch threads. Generally, much more time is consumed in creating
and managing processes than threads. In Solaris, for example, creating a process is about
30 times slower than creating a thread, and context switching is about 5 times slower.
4. Scalability
The benefits of multithreading greatly increase in the case of multiprocessor architectures,
where threads may run in parallel on multiple processors. If there is only one thread, it is
not possible to divide the process into smaller tasks that different processors can perform.
A single-threaded process can run on only one processor, regardless of how many
processors are available. Multithreading on a multiple-CPU machine increases parallelism.
5. Better Communication System
To improve inter-process communication, thread synchronization functions can be used.
Also, when huge amounts of data must be shared across multiple threads of execution
inside the same address space, sharing through that address space provides extremely high
bandwidth and low-latency communication across the various tasks within the application.
6. Enhanced Parallelism
Every thread can execute in parallel on a distinct processor, which can be considerably
amplified on a multiprocessor architecture. Multithreading enhances concurrency on a
multi-CPU machine. On a single-processor architecture the CPU switches among threads
very quickly, creating the illusion of parallelism, though at a particular time only one thread
can be running.
7. Lower Overhead
Threads have a minimal influence on the system's resources. The overhead of creating,
maintaining, and managing threads is lower than that of a general process.
8. Enhanced Concurrency
Multithreading can enhance the concurrency of a multi-CPU machine, because it allows
every thread to be executed in parallel on a distinct processor.
Threads also minimize context-switching time, since in a thread context switch the virtual
memory space remains the same.
4) Discuss in detail the multithreading models, their advantages and disadvantages, with
suitable illustrations.
ANS:
Many-to-One Model:
The many-to-one model (figure below) maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is efficient; but the entire
process will block if a thread makes a blocking system call.
Also, because only one thread can access the kernel at a time, multiple threads are unable to run
in parallel on multiprocessors. Green threads, a thread library available for Solaris, uses this
model, as does GNU Portable Threads.
One-to-One Model:
The one-to-one model (figure below) maps each user thread to a kernel thread. It provides more
concurrency than the many-to-one model by allowing another thread to run when a thread
makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors.
The only drawback to this model is that creating a user thread requires creating the
corresponding kernel thread.
Because the overhead of creating kernel threads can burden the performance of an application,
most implementations of this model restrict the number of threads supported by the system.
Linux, along with the family of Windows operating systems including Windows 95, 98, NT, 2000,
and XP, implements the one-to-one model.
Many-to-Many Model:
The many-to-many model (figure below) multiplexes many user-level threads to a smaller or
equal number of kernel threads. The number of kernel threads may be specific to either a
particular application or a particular machine (an application may be allocated more kernel
threads on a multiprocessor than on a uniprocessor).
One popular variation on the many-to-many model still multiplexes many user-level threads to a
smaller or equal number of kernel threads but also allows a user-level thread to be bound to a
kernel thread. This variation, sometimes referred to as the two-level model (figure below), is
supported by operating systems such as IRIX, HP-UX, and Tru64 UNIX.
5) Explain five different scheduling criteria used in the CPU scheduling mechanism.
Answer:
• CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization
can range from 0 to 100 percent.
• Throughput: If the CPU is busy executing processes, then work is being done. One measure of
work is the number of processes that are completed per time unit, called throughput.
• Turnaround time: From the point of view of a particular process, the important criterion is
how long it takes to execute that process. The interval from the time of submission of a process
to the time of completion is the turnaround time.
• Waiting time: The CPU scheduling algorithm does not affect the amount of time during which
a process executes or does I/O; it affects only the amount of time that a process spends waiting
in the ready queue.
• Response time: In an interactive system, turnaround time may not be the best criterion. Often,
a process can produce some output fairly early and can continue computing new results while
previous results are being output to the user. Thus, another measure is the time from the
submission of a request until the first response is produced. This measure, called response time,
is the time it takes to start responding, not the time it takes to output the response. The
turnaround time is generally limited by the speed of the output device.
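A worked example with hypothetical FCFS burst times (24, 3, and 3 time units, all processes arriving at time 0) shows how turnaround and waiting time are computed from these definitions:

```python
# FCFS schedule: processes run in arrival order, each to completion.
bursts = [24, 3, 3]                # hypothetical CPU bursts for P1, P2, P3

completion, t = [], 0
for b in bursts:
    t += b                         # each process finishes after the previous one
    completion.append(t)

turnaround = completion[:]         # turnaround = completion - arrival (arrival = 0)
waiting = [ta - b for ta, b in zip(turnaround, bursts)]   # waiting = turnaround - burst
avg_wait = sum(waiting) / len(waiting)

assert completion == [24, 27, 30]
assert waiting == [0, 24, 27]
assert avg_wait == 17.0            # average waiting time in time units
```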
To enter the critical section, process Pi first sets flag[i] to true and then sets turn to
the value j, thereby asserting that if the other process wishes to enter the critical section,
it can do so. If both processes try to enter at the same time, turn will be set to both i and
j at roughly the same time. Only one of these assignments will last; the other will occur,
but will be overwritten immediately.
To prove that mutual exclusion is preserved, we note that each Pi enters its critical
section only if either flag[j] == false or turn == i.
Note also that, if both processes were executing in their critical sections at the same
time, then flag[0] == flag[1] == true.
These two observations imply that P0 and P1 could not have successfully executed their
while statements at about the same time, since the value of turn can be either 0 or 1,
but cannot be both.
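The entry and exit sections described above can be sketched with two threads. This is a logical illustration of Peterson's algorithm; a real implementation must also account for the hardware memory model:

```python
import threading

# Shared variables from the text: flag[] and turn, plus a counter
# that the algorithm protects.
flag = [False, False]
turn = 0
counter = 0

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(1000):
        flag[i] = True                 # entry section: I want to enter
        turn = j                       # ...but let the other go first
        while flag[j] and turn == j:   # busy-wait while the other has priority
            pass
        counter += 1                   # critical section
        flag[i] = False                # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
assert counter == 2000                 # no increment was lost
```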
2) Illustrate how the readers and writers problem can be solved using semaphores.
• Readers only read the data set; they do not perform any updates.
• Problem: allow multiple readers to read at the same time, but only one single writer can access
the shared data at the same time.
• Shared data:
• the data set
• a semaphore wrt initialized to 1
Ans:
A semaphore is a synchronization tool used to solve various synchronization problems and
can be implemented efficiently.
It does not require busy waiting.
It has two operations: wait() and signal().
Semaphores are of two types: 1. binary semaphores 2. counting semaphores
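A sketch of the classical readers-writers solution, with wrt initialized to 1 and a mutex semaphore protecting the reader count (here Python's acquire/release play the roles of wait/signal):

```python
import threading

wrt = threading.Semaphore(1)      # writers (and first/last reader) wait on this
mutex = threading.Semaphore(1)    # protects readcount
readcount = 0
shared = {"value": 0}             # the data set
seen = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()             # first reader locks out writers
    mutex.release()
    seen.append(shared["value"])  # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()             # last reader lets writers in again
    mutex.release()

def writer():
    wrt.acquire()
    shared["value"] += 1          # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer)]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
assert shared["value"] == 1 and len(seen) == 3
```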
Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating. The philosophers
share a circular table surrounded by five chairs, each belonging to one philosopher. In the
center of the table is a bowl of rice, and the table is laid with five single chopsticks.
A philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the
chopsticks that are between her and her left and right neighbors). A philosopher may pick
up only one chopstick at a time. When a hungry philosopher has both her chopsticks at the
same time, she eats without releasing the chopsticks. When she is finished eating, she puts
down both chopsticks and starts thinking again. It is a simple representation of the need to
allocate several resources among several processes in a deadlock-free and starvation-
free manner.
Solution:
One simple solution is to represent each chopstick with a semaphore. A philosopher tries to
grab a chopstick by executing a wait() operation on that semaphore. She releases her
chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the
shared data are
semaphore chopstick[5];
where all the elements of chopstick are initialized to 1. The structure of philosopher i is
shown in the figure below.
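The semaphore solution can be sketched as below. Note that, to keep the sketch deadlock-free, each philosopher here picks up the lower-numbered chopstick first; this ordering remedy goes beyond the naive solution in the text:

```python
import threading

# One semaphore per chopstick, all initialized to 1.
N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i):
    # Acquire chopsticks in a global order to avoid circular wait (deadlock).
    first, second = sorted((i, (i + 1) % N))
    for _ in range(3):
        chopstick[first].acquire()    # wait() on the first chopstick
        chopstick[second].acquire()   # wait() on the second chopstick
        meals[i] += 1                 # eating
        chopstick[second].release()   # signal()
        chopstick[first].release()    # signal()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
assert meals == [3, 3, 3, 3, 3]       # everyone ate; no deadlock, no starvation
```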
Several possible remedies to the deadlock problem exist, such as allowing at most four
philosophers to sit at the table simultaneously, allowing a philosopher to pick up her
chopsticks only if both are available, or using an asymmetric solution.
Ans:
1. Deadlock Prevention
This method avoids the conditions that lead to deadlocks by systematically ensuring that
at least one of the four necessary conditions for deadlock cannot occur:
1. Mutual Exclusion: At least one resource is held in a non-sharable mode.
2. Hold and Wait: A process holding resources can request additional resources.
3. No Preemption: A resource cannot be forcibly taken from the process holding it.
4. Circular Wait: A set of processes forms a cycle, each waiting for a resource held by the
next process in the cycle.
Advantages:
Disadvantages:
2. Deadlock Avoidance
This method ensures the system avoids unsafe states that could lead to deadlocks by
dynamically analyzing resource allocation.
Banker's Algorithm:
Assesses whether granting a resource request will keep the system in a safe state.
Safe State: A state where the system can allocate resources to processes in some order
without leading to deadlock.
Unsafe State: A state that could potentially lead to deadlock.
1. When a process requests a resource, check if fulfilling the request will leave the system
in a safe state.
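The safety check at the heart of the Banker's algorithm can be sketched as follows; the resource numbers below are a standard hypothetical example, not figures from this text:

```python
# Safety algorithm: the state is safe if some ordering lets every process
# finish using Available plus the resources released by earlier finishers.
def is_safe(available, max_need, allocation):
    need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(max_need, allocation)]
    work, finish, order = list(available), [False] * len(allocation), []
    while True:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # i releases all
                finish[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return all(finish), order

safe, order = is_safe(
    available=[3, 3, 2],
    max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
)
assert safe and order == [1, 3, 4, 0, 2]   # safe sequence P1, P3, P4, P0, P2
```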
Advantages:
Disadvantages:
ANS:
1. Process Termination
This method involves terminating processes to break the circular-wait condition and free
resources. There are two approaches to process termination:
Terminate one process at a time and recheck for deadlock after each termination.
2. Resource Preemption
This method involves forcibly reclaiming resources from processes to break the
deadlock.
2. Rollback: Roll back the process to a safe state, undoing its recent operations.
3. Reallocate Resources: Assign the preempted resources to other processes.
3. Process Rollback
After rollback, the process can restart from the checkpoint without holding the
resources it previously used.
Advantages:
Disadvantages:
4. Deadlock Detection
The wait-for graph (a directed graph showing which processes are waiting for resources
held by other processes) is analyzed to identify the deadlock cycle. Recovery then proceeds by:
o Terminating a process.
o Preempting resources.
5. Manual Intervention
Ans:
Paging is a memory-management technique used by operating systems to divide a process's
memory and the physical memory into fixed-size blocks. These blocks are called pages (in
logical memory) and frames (in physical memory). Paging helps manage memory efficiently
and eliminates external fragmentation by allocating memory in fixed-size units.
When a program accesses memory, the operating system translates the logical address into a
physical address using the page table.
Internal Fragmentation
Definition: Internal fragmentation occurs when allocated memory has unused space within a
page or block because the requested memory is smaller than the block size.
Example: With 4 KB pages, a process that needs 6 KB is allocated two pages (8 KB);
the second page will have 2 KB of unused space, resulting in internal fragmentation.
Illustration:
Page | Requested | Allocated | Wasted
Page 1 | 4 KB | 4 KB | 0 KB
Page 2 | 2 KB | 4 KB | 2 KB
External Fragmentation
Definition: External fragmentation occurs when free memory is available, but it is fragmented
into non-contiguous blocks, making it unusable for larger memory requests.
Example (without paging):
Assume there is 12 KB of free memory split into blocks of 4 KB, 2 KB, and 6 KB.
A request for a larger contiguous block (say 8 KB) cannot be satisfied, even though there
is enough total memory (12 KB), because the memory is fragmented into smaller blocks.
Illustration:
Block | Size | Free
Block 1 | 4 KB | Yes
Block 2 | 2 KB | Yes
Block 3 | 6 KB | Yes
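The arithmetic in both examples can be checked directly (the 8 KB contiguous request is a hypothetical figure chosen to show the failure):

```python
import math

# Internal fragmentation: 4 KB pages, a process needing 6 KB (from the example).
page_size_kb = 4
request_kb = 6
pages_needed = math.ceil(request_kb / page_size_kb)
internal_frag_kb = pages_needed * page_size_kb - request_kb
assert pages_needed == 2 and internal_frag_kb == 2    # 2 KB wasted in page 2

# External fragmentation: 12 KB free in non-contiguous 4/2/6 KB blocks;
# a hypothetical 8 KB contiguous request fails despite enough total memory.
free_blocks_kb = [4, 2, 6]
contiguous_request_kb = 8
assert sum(free_blocks_kb) >= contiguous_request_kb           # enough in total...
assert all(b < contiguous_request_kb for b in free_blocks_kb) # ...but no block fits
```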
2) What is a Translation Lookaside Buffer (TLB)? Explain the TLB in a paging system in
detail with a neat diagram.
Ans:
• The TLB is a special, small, fast-lookup hardware cache, called a translation look-aside buffer.
• Each entry in the TLB consists of two parts: a key (or tag) and a value.
• When the associative memory is presented with an item, the item is compared with all keys
simultaneously. If the item is found, the corresponding value field is returned. The search is
fast; the hardware, however, is expensive. Typically, the number of entries in a TLB is small,
often numbering between 64 and 1,024.
Working:
• When a logical address is generated by the CPU, its page number is presented to the TLB.
• If the page number is found (TLB hit), its frame number is immediately available and used to
access memory.
• If the page number is not in the TLB (TLB miss), a memory reference to the page table must
be made. The obtained frame number can then be used to access memory (Figure 1).
• In addition, we add the page number and frame number to the TLB, so that they will be
found quickly on the next reference.
• If the TLB is already full of entries, the OS must select one for replacement.
• The percentage of times that a particular page number is found in the TLB is called the hit
ratio.
Advantage: The search operation is fast.
• Some TLBs store an ASID (address-space identifier) in each entry of the TLB; the ASID
uniquely identifies each process and provides address-space protection for that process.
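Using the hit ratio, an effective memory-access time can be computed. The timings below (20 ns TLB lookup, 100 ns memory access, 80% hit ratio) are hypothetical, textbook-style figures:

```python
# Effective access time (EAT): a hit costs one TLB lookup plus one memory
# access; a miss costs the TLB lookup plus an extra page-table access.
tlb_ns, mem_ns, hit_ratio = 20, 100, 0.80

eat = hit_ratio * (tlb_ns + mem_ns) + (1 - hit_ratio) * (tlb_ns + 2 * mem_ns)
assert eat == 140.0        # 0.8 * 120 ns + 0.2 * 220 ns = 140 ns
```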
ANS:
Some of the common techniques used for structuring the page table are as follows:
1. Hierarchical Paging
Another name for hierarchical paging is multilevel paging.
There might be a case where the page table is too big to fit in a contiguous space, so we
may have a hierarchy with several levels.
In this type of paging, the logical address space is broken up into multiple page tables.
Hierarchical paging is one of the simplest techniques; for this purpose, a two-level
page table or a three-level page table can be used.
Consider a system having a 32-bit logical address space and a page size of 1 KB. The logical
address is divided into:
a page number consisting of 22 bits, and
a page offset consisting of 10 bits.
As we page the page table, the page number is further divided into:
P1, an index into the outer page table, and
P2, the displacement within the page of the inner page table.
Because address translation works from the outer page table inward, this scheme is known
as a forward-mapped page table.
The figure below shows the address-translation scheme for a two-level page table.
For a system with a 64-bit logical address space, a two-level paging scheme is not appropriate.
Suppose that the page size in this case is 4 KB. If we use the two-level scheme, the outer
page table itself remains far too large.
Thus, in order to avoid such a large table, the solution is to divide the outer page table
further, resulting in a three-level page table.
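Splitting a 32-bit logical address for the two-level scheme above can be sketched as follows; the 12-bit/10-bit split of the 22-bit page number is an assumed (textbook-style) choice:

```python
# 32-bit logical address, 1 KB pages: 10-bit offset, 22-bit page number
# split into p1 (12 bits, outer table index) and p2 (10 bits, inner table index).
def split(addr):
    offset = addr & 0x3FF          # low 10 bits: offset within the page
    p2 = (addr >> 10) & 0x3FF      # next 10 bits: index into the inner page table
    p1 = addr >> 20                # top 12 bits: index into the outer page table
    return p1, p2, offset

p1, p2, off = split(0x00ABC123)
assert (p1, p2, off) == (0x00A, 0x2F0, 0x123)
assert split(0) == (0, 0, 0)
```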
Hashed Page Tables:
This approach is used to handle address spaces that are larger than 32 bits.
This Page table mainly contains a chain of elements hashing to the same elements.
Given below figure shows the address transla on scheme of the Hashed Page Table:
The Virtual Page numbers are compared in this chain searching for a match; if the match is
found then the corresponding physical frame is extracted.
3. Inverted Page Tables
An inverted page table has one entry for each real page (frame) of memory.
Each entry consists of the virtual address of the page stored in that real memory location, along with information about the process that owns the page.
Although this technique decreases the memory needed to store each page table, it increases the time needed to search the table whenever a page reference occurs.
The figure below shows the address-translation scheme of the inverted page table:
In this scheme, we need to keep track of the process id in each entry, because many processes may have the same logical addresses.
Also, many entries can map to the same index in the page table after going through the hash function; chaining is used to handle this.
A process is divided into segments. The chunks that a program is divided into, which are not necessarily all of the same size, are called segments. Segmentation gives the user's view of the process, which paging does not provide. Here the user's view is mapped to physical memory.
Virtual Memory Segmentation: Each process is divided into a number of segments, but the segmentation is not done all at once. This segmentation may or may not take place at the run time of the program.
Simple Segmentation: Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in segmentation. A table that stores the information about all such segments is called the segment table.
It maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:
Base Address: It contains the starting physical address where the segment resides in memory.
Segment Limit: Also known as the segment offset. It specifies the length of the segment.
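The base/limit translation above can be sketched as follows; the segment-table contents are made-up example values:

```python
# Segment-table translation: a logical address (segment, offset)
# maps to base + offset, but only if offset < limit; otherwise the
# hardware traps. The base/limit values here are made-up examples.
segment_table = {
    0: {"base": 1400, "limit": 1000},
    1: {"base": 6300, "limit": 400},
}

def translate(segment, offset):
    """Map a (segment, offset) pair to a physical address."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        # Offset beyond the segment's length: addressing error.
        raise MemoryError("segmentation fault: offset beyond limit")
    return entry["base"] + offset
```

For instance, offset 53 in segment 0 lands at physical address 1400 + 53 = 1453, while any offset of 400 or more in segment 1 traps.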
Advantages of Segmenta on
6. Facilitates modularity.
Disadvantages of Segmenta on
5) What is demand paging? Explain the steps in handling a page fault using an appropriate diagram.
ANS: Demand paging is a technique in which a page is brought into memory only when it is needed (demanded) during execution. If a page is needed that was not originally loaded, then a page-fault trap is generated.
Steps in Handling a Page Fault:
1. The memory address requested is first checked, to make sure it was a valid memory request.
2. If the reference is to an invalid page, the process is terminated. Otherwise, if the page is not present in memory, it must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to read the desired page into the new frame.
5. After the page is loaded into memory, the process's page table is updated with the new frame number, and the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning.
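The steps above can be sketched in miniature; the valid-page set, frame numbers, and the dict standing in for the backing store are all illustrative assumptions:

```python
# A toy page-fault handler following the steps described above:
# validate the reference, find a free frame, "read" the page from
# the backing store, update the page table, and retry the access.
VALID_PAGES = {0, 1, 2, 3}                       # pages the process may use
page_table = {}                                   # page -> frame (valid entries)
free_frames = [10, 11, 12]                        # free-frame list
backing_store = {p: f"data-{p}" for p in VALID_PAGES}
memory = {}                                       # frame -> contents

def access(page):
    """Return the contents of a page, faulting it in if needed."""
    if page not in VALID_PAGES:
        # Step 2: invalid reference -> terminate the process.
        raise MemoryError("invalid reference: terminate process")
    if page not in page_table:                    # page fault
        frame = free_frames.pop(0)                # step 3: find a free frame
        memory[frame] = backing_store[page]       # step 4: disk read into frame
        page_table[page] = frame                  # step 5: update table (now valid)
    return memory[page_table[page]]               # step 6: restart the access
```

The first reference to a page faults and consumes a frame; subsequent references to the same page hit in the page table with no further I/O.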
ANS:
Demand paging has both positive and negative effects on system performance, depending on the workload and system configuration. Below is an analysis of how it influences performance:
Positive Effects
o Programs do not need to be loaded entirely into memory, which reduces initial loading time.
3. Scalability:
Negative Effects
o A high number of page faults can significantly slow down the system due to the time required to load pages from secondary storage.
2. Thrashing:
o If the system spends more time handling page faults than executing processes, it leads to thrashing and poor performance.
3. Increased Latency:
o Accessing a page for the first time involves a delay due to the page fault and subsequent disk I/O.
o Frequent page loading increases the demand on disk I/O, potentially slowing down the entire system.
ANS: Thrashing occurs in a virtual memory system when the operating system spends more time handling page faults than executing actual processes. This situation arises when processes require more memory than the system can provide, leading to constant swapping of pages between memory and disk.
How to Control Thrashing
o Monitor the page-fault frequency and adjust the number of processes in the system dynamically.
o Adopt algorithms like Least Recently Used (LRU) or Optimal Page Replacement to minimize unnecessary page replacements.
6. Pre-paging:
o Load pages likely to be used together into memory in advance, reducing page faults.
7. Partitioning:
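As a sketch of the LRU policy mentioned above: with a fixed number of frames, each reference moves its page to the "most recent" end, and the least-recently-used page is evicted when the frames are full. The reference string below is a made-up example:

```python
# Least Recently Used (LRU) page replacement, counting faults.
# An OrderedDict keeps pages in recency order: the front is the
# least recently used, the back is the most recently used.
from collections import OrderedDict

def lru_faults(references, num_frames):
    """Return the number of page faults for a reference string."""
    frames = OrderedDict()
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)          # hit: refresh recency
        else:
            faults += 1                        # page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)     # evict the LRU page
            frames[page] = True
    return faults
```

Running this on the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 3 frames yields 9 faults.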
7) Problems:
MODULE-5
1) With a neat diagram, explain the two-level and tree-structured directory structures.
ANS:
Two-Level Directory
As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a new user id is created.
Advantages
The main advantage is that different users can have files with the same name, which is very helpful when there are multiple users.
There is security, since a user is prevented from accessing other users' files.
Disadvantages
Although security is an advantage, a disadvantage is that a user cannot share files with other users.
Users can create their own files, but they do not have the ability to create subdirectories.
Scalability is limited because a user cannot group files of the same type together.
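The MFD/UFD arrangement can be sketched as a dict of dicts: the master file directory maps each user to a private user file directory, so identical file names can coexist across users. The user and file names below are made-up examples:

```python
# Two-level directory: the MFD maps user -> UFD, and each UFD
# maps file name -> file. The same name "test" exists for two
# users without conflict, since uniqueness is per-UFD.
mfd = {
    "alice": {"test": "alice's data"},
    "bob":   {"test": "bob's data"},   # same file name, different user
}

def lookup(user, filename):
    """Search the MFD for the user, then that user's UFD for the file."""
    return mfd[user][filename]
```

A lookup always goes through a specific user's UFD, which is also why one user cannot reach another user's files in this structure.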
Tree Structure / Hierarchical Structure
The tree directory structure is the one most commonly used in our personal computers. Users can create files and subdirectories too, which was a disadvantage in the previous directory structures.
This directory structure resembles a real tree upside down, where the root directory is at the peak. This root contains the directories for each user. The users can create subdirectories and even store files in their directories.
A user does not have access to the root directory's data and cannot modify it. Even within this structure, a user does not have access to other users' directories. The structure of the tree directory is given below, which shows how there are files and subdirectories in each user's directory.
Advantages
This directory structure is more scalable than the other two directory structures explained.
Disadvantages
As a user is not allowed to access other users' directories, file sharing among users is prevented.
As users have the capability to make subdirectories, if the number of subdirectories increases, searching may become complicated.
ANS:
Aspect         | Single-Level Directory                                          | Two-Level Directory
Structure      | All files are stored in a single directory shared by all users. | Each user has their own directory, containing their files.
File Naming    | File names must be unique across the entire system.             | File names only need to be unique within each user's directory.
User Isolation | No isolation; all users share the same directory.               | Provides isolation, as each user has a private directory.
Scalability    | Not scalable for systems with many users or files.              | More scalable, as user directories organize files better.
Complexity     | Simple to implement and manage.                                 | Slightly more complex due to multiple-directory management.
File Search    | Slower search due to all files being in a single directory.     | Faster search, as files are grouped in user-specific directories.
Security       | Minimal security; users can access all files.                   | Better security; users cannot access other users' directories.
Single-Level Directory
Advantages:
Disadvantages:
Two-Level Directory
Advantages:
Disadvantages:
ANS:
Single-Level Directory
Two-Level Directory
1) Single-Level Directory
The single-level directory is the simplest directory structure. In it, all files are contained in the same directory, which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files increases or when the system has more than one user. Since all the files are in the same directory, they must have unique names. If two users name their data file "test", then the unique-name rule is violated.
2) Two-Level Directory
As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a new user id is created.
3) Tree-Structured Directory
The tree directory structure is the one most commonly used in our personal computers. Users can create files and subdirectories too, which was a disadvantage in the previous directory structures.
This directory structure resembles a real tree upside down, where the root directory is at the peak. This root contains the directories for each user. The users can create subdirectories and even store files in their directories.
4) Acyclic-Graph Directory Structure
In the above three directory structures, none has the capability to access one file from multiple directories. A file or subdirectory can be accessed through the directory it is present in, but not from another directory.
This problem is solved in the acyclic-graph directory structure, where a file in one directory can be accessed from multiple directories. In this way, files can be shared between users. It is designed so that multiple directories point to a particular directory or file with the help of links.
5) General-Graph Directory Structure
Unlike the acyclic-graph directory, which avoids loops, the general-graph directory can have cycles, meaning a directory can contain paths that loop back to the starting point. This can make navigating and managing files more complex.
ANS:
The allocation methods define how files are stored in disk blocks. There are three main disk space or file allocation methods:
Contiguous Allocation
Linked Allocation
Indexed Allocation
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file requires n blocks and is given block b as the starting location, then the blocks assigned to the file will be: b, b+1, b+2, ..., b+n-1. This means that given the starting block address and the length of the file (in terms of blocks required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains the address of the starting block and the length of the area allocated to the file.
The file 'mail' in the following figure starts from block 19 with length = 6 blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23, and 24.
Advantages:
Both sequential and direct access are supported. For direct access, the address of the kth block of a file that starts at block b can easily be obtained as (b+k).
This is extremely fast, since the number of seeks is minimal because of the contiguous allocation of file blocks.
Disadvantages:
This method suffers from both internal and external fragmentation, which makes it inefficient in terms of memory utilization.
Increasing the file size is difficult, because it depends on the availability of contiguous memory at a particular instant.
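The block arithmetic described above (blocks b through b+n-1, kth block at b+k) can be sketched directly, using the 'mail' example's values of start 19 and length 6:

```python
# Contiguous allocation: the directory entry (start, length) fully
# determines which blocks a file occupies, and the k-th block is a
# single addition away -- which is why direct access is cheap here.
def blocks_of(start, length):
    """All blocks of a contiguously allocated file: b .. b+n-1."""
    return list(range(start, start + length))

def kth_block(start, k):
    """Direct access: the address of the k-th block is b + k."""
    return start + k
```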
2. Linked Allocation
In this scheme, each file is a linked list of disk blocks, which need not be contiguous. The disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block contains a pointer to the next block occupied by the file.
The file 'jeep' in the following image shows how the blocks are randomly distributed. The last block (25) contains -1, indicating a null pointer, and does not point to any other block.
Advantages:
This is very flexible in terms of file size. The file size can be increased easily, since the system does not have to look for a contiguous chunk of memory.
This method does not suffer from external fragmentation, which makes it relatively better in terms of memory utilization.
Disadvantages:
Because the file blocks are distributed randomly on the disk, a large number of seeks are needed to access every block individually. This makes linked allocation slower.
It does not support random or direct access. We cannot directly access the blocks of a file. Block k of a file can be accessed only by traversing k blocks sequentially (sequential access) from the starting block of the file via the block pointers.
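The pointer-chasing traversal can be sketched as follows. The text confirms only that the last block (25) holds -1; the start block and intermediate pointers below are assumptions taken from the standard 'jeep' figure:

```python
# Linked allocation: each block stores the number of the next block,
# with -1 marking the end of the file. Reaching block k requires
# following k pointers from the start, so direct access is not
# supported. The chain 9 -> 16 -> 1 -> 10 -> 25 is an assumed example.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def chain(start):
    """Follow the pointers from the starting block to the -1 terminator."""
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = next_block[b]
    return blocks
```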
3. Indexed Allocation
In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file has its own index block. The ith entry in the index block contains the disk address of the ith file block. The directory entry contains the address of the index block, as shown in the image:
Advantages:
This supports direct access to the blocks occupied by the file and therefore provides fast access to the file blocks.
Disadvantages:
The pointer overhead for indexed allocation is greater than for linked allocation.
For very small files, say files that span only 2-3 blocks, indexed allocation keeps one entire block (the index block) for the pointers, which is inefficient in terms of memory utilization. However, in linked allocation we lose the space of only one pointer per block.
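A sketch of the index-block lookup; the block numbers are made-up example values:

```python
# Indexed allocation: one index block lists every data block of the
# file, so the i-th data block is a single array lookup away. This
# is what gives indexed allocation cheap direct access.
index_block = [9, 16, 1, 10, 25]   # made-up example disk addresses

def ith_block(i):
    """The i-th entry of the index block is the i-th file block."""
    return index_block[i]
```

Contrast with linked allocation: here block 3 of the file is found in one step instead of three pointer hops.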
ANS:
A file system allows users and applications to perform various operations on files. These operations can be categorized into basic and advanced functionalities. Below is an overview of the key file operations:
1. File Creation
Process:
Example: Creating a new text file using touch (Linux) or New File (GUI).
2. File Opening
Process:
3. File Reading
Process:
4. File Writing
Process:
5. File Closing
Process:
6. File Deletion
Process:
7. File Truncation
Process:
8. File Renaming
Process:
9. File Appending
Process:
Process:
Process:
Process:
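The basic operations listed above map onto Python's standard library as follows; the file names are made-up examples:

```python
# Exercising the file operations from the list above: create, open,
# write, append, read, rename, truncate, and delete.
import os

path = "demo.txt"

with open(path, "w") as f:        # creation + opening for writing
    f.write("hello")              # writing
with open(path, "a") as f:
    f.write(" world")             # appending
with open(path, "r") as f:
    content = f.read()            # opening + reading
# closing happens automatically at the end of each `with` block
os.rename(path, "demo2.txt")      # renaming
os.truncate("demo2.txt", 5)       # truncation to the first 5 bytes
os.remove("demo2.txt")            # deletion
```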
ANS:
File access methods determine how data in a file can be read, written, or manipulated. Different applications require different access patterns, and file systems support these patterns through various access methods. Below are the primary access methods:
1. Sequential Access
Definition: Data is accessed in sequential order, starting from the beginning of the file and proceeding in order.
Key Features:
Operations:
o read_next(): Reads data at the current position and advances the pointer.
o write_next(): Writes data at the current position and advances the pointer.
Examples:
Advantages:
Disadvantages:
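A minimal sketch of a sequential-access file with a single position pointer, supporting the write_next operation from the list above; read_next and reset are included as the standard companion operations:

```python
# Sequential access: one position pointer, advanced by every read
# or write. reset() rewinds the pointer to the beginning of the file.
class SequentialFile:
    def __init__(self):
        self.records = []
        self.pos = 0

    def write_next(self, record):
        """Write at the current position and advance the pointer."""
        self.records.insert(self.pos, record)
        self.pos += 1

    def read_next(self):
        """Read at the current position and advance the pointer."""
        record = self.records[self.pos]
        self.pos += 1
        return record

    def reset(self):
        """Rewind the position pointer to the start of the file."""
        self.pos = 0
```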
2. Direct Access
Definition: Allows data to be accessed at any location in the file directly, without following a sequence.
Key Features:
Opera ons:
Examples:
Advantages:
Disadvantages:
o Complex to implement.
3. Indexed Access
Definition: Uses an index to keep track of file records, enabling both sequential and random access.
Key Features:
Opera ons:
Examples:
Advantages:
Disadvantages:
4. Hierarchical Access
Key Features:
Examples:
Advantages:
Disadvantages:
5. Content-Based Access
Definition: Data is accessed based on its content rather than its location or index.
Key Features:
Examples:
Advantages:
Disadvantages:
7) Explain the access matrix method of system protection with domains as objects, and its implementation.
ANS:
9) Problems