OS Modulewise answers

The document provides an overview of operating systems, defining them as intermediaries between users and computer hardware, and explaining the dual-mode operating system with user and kernel modes. It discusses multiprogramming and time-sharing systems, highlighting their advantages and disadvantages, as well as distinguishing between multiprogramming and multitasking, and multiprocessor and clustered systems. Additionally, it covers types of multiprocessing and clustering, differentiates client-server computing from peer-to-peer computing, and outlines the services provided by operating systems to enhance user experience.

Uploaded by

Mr. LION

Module 1

1) Define operating systems. Explain the dual-mode operating system with a neat
diagram.
ANS: Operating system:
 A program that acts as an intermediary between a user of a computer and the
computer hardware.
 An operating system is a collection of system programs that together control the
operations of a computer system.
 Some examples of operating systems are Windows, Linux, macOS, and Ubuntu.

Dual-mode operating system:

 The dual-mode approach taken by most computer systems is to provide hardware
support that allows us to differentiate among various modes of execution.
 At the very least, we need two separate modes of operation: user mode and kernel
mode.
 A bit, called the mode bit, is added to the hardware of the computer to indicate the
current mode: kernel (0) or user (1).
 With the mode bit, we are able to distinguish between a task that is executed on
behalf of the operating system and one that is executed on behalf of the user.
 When the computer system is executing on behalf of a user application, the system is
in user mode.
 However, when a user application requests a service from the operating system (via a
system call), it must transition from user to kernel mode to fulfill the request.
 This is shown in the figure below. As we shall see, this architectural enhancement is
useful for many other aspects of system operation as well.
 At system boot time, the hardware starts in kernel mode. The operating system is
then loaded and starts user applications in user mode.
 Whenever a trap or interrupt occurs, the hardware switches from user mode to
kernel mode. Thus, whenever the operating system gains control of the computer, it
is in kernel mode.
 The system always switches to user mode (by setting the mode bit to 1) before
passing control to a user program.
 The dual mode of operation provides us with the means for protecting the operating
system from errant users, and errant users from one another.
2) Define system program. Explain multiprogramming and time-sharing systems.

ANS: System program: software designed to provide an environment for application
programs and to act as an intermediary between the hardware and users.

(I) Multiprogramming Operating System:

Multiprogramming is the fast switching of the CPU between several programs. A program is
generally made up of several tasks. A task ends with some request to move data, which
would require some I/O operations to be executed. Multiprogramming is commonly used to
keep the CPU busy while the currently running program is doing I/O operations. Compared
to other executing instructions, I/O operations are extremely slow.

Even if a program contains a very small number of I/O operations, most of the time taken by
the program is spent on those I/O operations. Therefore, using this idle time to allow
another program to utilize the CPU increases CPU utilization. Multiprogramming was
initially developed in the late 1950s as a feature of operating systems and was first used in
mainframe computing. With the introduction of virtual memory and virtual machine
technologies, the use of multiprogramming was enhanced. It has no fixed time slice for
processes. Its main purpose is resource utilization.

Advantages of Multiprogramming OS

o No CPU idle time
o Tasks appear to run in parallel, so overall progress is faster
o Shorter response time
o Maximizes total job throughput of a computer
o Increases resource utilization

Disadvantages of Multiprogramming OS

o Long jobs sometimes have to wait a long time
o Tracking all processes is sometimes difficult
o Requires CPU scheduling
o Requires efficient memory management
o No user interaction with any program during execution
(II) Time-Sharing Operating System:

Time-sharing is a technique that enables many people located at various terminals to use a
particular computer system simultaneously. Time-sharing is the logical extension of
multiprogramming. In a time-sharing operating system, many processes are allocated
computer resources in respective time slots. The processor's time is shared among
multiple users, which is why it is called a time-sharing operating system. It has a fixed time
slice for the different processes. Its main purpose is interactive response time.

The CPU executes multiple jobs by switching between them, but the switches occur so
frequently that the user can receive an immediate response. The operating system uses
CPU scheduling and multiprogramming to provide each user with a small slice of time.
Computer systems that were designed primarily as batch systems have been modified into
time-sharing systems.

The main difference between multiprogrammed batch systems and time-sharing systems
is that in multiprogrammed batch systems the objective is to maximize processor use,
whereas in time-sharing systems the objective is to minimize response time.

Advantages of Time-Sharing OS

o It provides a quick response

o Reduces CPU idle time

o All the tasks are given a specific time slice

o Less probability of duplication of software

o Improves response time

o Easy to use and user friendly

Disadvantages of Time-Sharing OS

o It consumes many resources

o Requires high-specification hardware

o It has reliability problems

o Issues with the security and integrity of user programs and data
3) Distinguish between the following terms. (i) Multiprogramming and Multitasking (ii)
Multiprocessor System and Clustered System.

ANS: (i) Multiprogramming vs. Multitasking

Definition:
  Multiprogramming: Running multiple programs concurrently by switching between them
  to maximize CPU utilization.
  Multitasking: Allowing multiple tasks (processes or threads) to execute seemingly
  simultaneously by rapidly switching.

Purpose:
  Multiprogramming: To improve CPU utilization by keeping it busy with another job while
  one job waits for I/O.
  Multitasking: To enhance user interaction by quickly switching tasks, making them appear
  to run concurrently.

Scope:
  Multiprogramming: Focuses on system throughput, typically in batch processing systems.
  Multitasking: Focuses on user responsiveness, common in modern operating systems.

Execution Context:
  Multiprogramming: Involves processes; one process executes at a time but switches occur.
  Multitasking: Can involve processes or threads, with rapid context switching for a better
  user experience.

User Interaction:
  Multiprogramming: Minimal or none, as it is designed for background operations (e.g.,
  batch jobs).
  Multitasking: Designed for systems with active user interaction (e.g., desktop operating
  systems).

Example:
  Multiprogramming: Running a compiler and a printer job simultaneously in batch
  processing.
  Multitasking: Running a web browser, a media player, and a word processor at the same
  time on a personal computer.

(ii) Multiprocessor System vs. Clustered System

Definition:
  Multiprocessor System: A single system with multiple CPUs (processors) sharing memory
  and other resources.
  Clustered System: A group of independent computers (nodes) working together to
  perform a task.

Architecture:
  Multiprocessor System: Processors are tightly coupled and share memory, I/O, and clock.
  Clustered System: Systems are loosely coupled, connected via a network.

Purpose:
  Multiprocessor System: Improve performance, reliability, and load balancing within a
  single system.
  Clustered System: Increase scalability, availability, and fault tolerance by using multiple
  interconnected systems.

Resource Sharing:
  Multiprocessor System: All processors share the same physical memory and resources.
  Clustered System: Each node has its own memory and resources, with coordination
  through communication protocols.

Cost:
  Multiprocessor System: Typically more expensive due to specialized hardware integration.
  Clustered System: Cost-effective, as it uses commodity hardware and networking.

Fault Tolerance:
  Multiprocessor System: Lower fault tolerance, since a failure in shared components
  affects the whole system.
  Clustered System: Higher fault tolerance; failure in one node may not affect the entire
  cluster.

Example:
  Multiprocessor System: Symmetric Multiprocessing (SMP) systems, like servers with dual-
  or quad-core processors.
  Clustered System: High-performance computing (HPC) clusters or cloud computing
  environments like Kubernetes or Hadoop.

4) Explain the types of multiprocessing and types of clustering.

ANS:

Types of Multiprocessing:

1. Symmetric Multiprocessing (SMP):

 Definition: All processors share the same memory and have equal access to I/O devices.
Each processor runs tasks independently, but they communicate through shared
memory.

 Features:

o Processors are peers (equal status).

o The system operates under a single operating system.

o Efficient load balancing and resource sharing.

 Advantages:

o Simpler to program since all processors have equal access.

o High reliability; if one processor fails, others can continue working.

 Disadvantages:

o Shared memory can become a bottleneck.

o Scalability is limited by the number of processors.

 Example: Modern multicore CPUs in personal computers and servers.

2. Asymmetric Multiprocessing (AMP):

 Definition: A master-slave configuration where one processor (master) controls the
system, and other processors (slaves) handle assigned tasks.

 Features:

o The master processor manages the OS and assigns tasks to slave processors.

o Slave processors perform computations or handle specific subsystems.

 Advantages:

o Simple and cost-effective system design.

o Effective for dedicated tasks, such as embedded systems.

 Disadvantages:

o If the master processor fails, the entire system can halt.

o Less efficient resource utilization compared to SMP.

 Example: Embedded systems or systems with a dedicated processor for graphics or I/O.

Types of Clustering

1. High-Availability (HA) Clustering:

 Purpose: To ensure continuous availability of services by providing failover capabilities.

 Features:

o If one node fails, another takes over without interrupting service.

o Often used for critical systems requiring high uptime.

 Applications: Online banking, stock trading platforms, or web servers.

 Example: Red Hat Cluster Suite.

2. Load-Balancing Clustering:

 Purpose: To distribute workload across multiple nodes to improve performance.

 Features:

o Nodes share the load to prevent overloading any single system.

o Ensures efficient use of resources and better response times.

 Applications: Web servers handling high traffic or e-commerce platforms.

 Example: Amazon Web Services (AWS) Elastic Load Balancing.

3. High-Performance Computing (HPC) Clustering:

 Purpose: To provide immense computational power by combining the resources of
multiple nodes.

 Features:

o Nodes work together to perform a single task, splitting the computation.

o Used for parallel processing of large datasets or complex simulations.

 Applications: Scientific research, weather modeling, and deep learning.

 Example: Supercomputers like IBM's Blue Gene or clusters using MPI (Message Passing
Interface).

4. Storage Clustering:

 Purpose: To manage large volumes of data by clustering storage devices.

 Features:

o Provides redundancy and improved data access speed.

o Often includes fault-tolerant mechanisms for data reliability.

 Applications: Data centers, cloud storage, and big data analytics.

 Example: Hadoop Distributed File System (HDFS).

5) Differentiate client-server computing and peer-to-peer computing.

ANS:

Definition:
  Client-Server: A centralized model where one or more servers provide services, and
  clients request them.
  Peer-to-Peer: A decentralized model where all nodes (peers) can act as both clients and
  servers.

Architecture:
  Client-Server: Centralized; the server manages all resources and services, and clients
  rely on it.
  Peer-to-Peer: Decentralized; each peer can directly interact with others without a
  central server.

Control:
  Client-Server: The server has full control over resources, data, and user interactions.
  Peer-to-Peer: Control is distributed among peers.

Scalability:
  Client-Server: Limited by the server's capacity; adding more clients may degrade
  performance.
  Peer-to-Peer: Highly scalable; more peers can improve resource availability and sharing.

Reliability:
  Client-Server: Server failure can disrupt the entire system unless failover mechanisms
  are in place.
  Peer-to-Peer: More fault-tolerant; failure of one or a few peers doesn't disrupt the
  system.

Performance:
  Client-Server: Depends on the server's processing power and bandwidth.
  Peer-to-Peer: Depends on the collective capacity of participating peers.

Security:
  Client-Server: Easier to implement, as the server controls access and resources.
  Peer-to-Peer: More challenging, as peers directly interact, increasing vulnerability to
  attacks.

Examples:
  Client-Server: Web applications (e.g., Gmail, Facebook).
  Peer-to-Peer: File-sharing networks (e.g., BitTorrent, Gnutella).

Use Cases:
  Client-Server: Suitable for applications requiring centralized control, such as banking
  systems or enterprise software.
  Peer-to-Peer: Suitable for sharing files, distributed computing, and decentralized
  networks.

Cost:
  Client-Server: Higher cost due to server infrastructure and maintenance.
  Peer-to-Peer: Lower cost, as no dedicated server is required.

6) Explain operating system services with respect to programs and users.

ANS:

Operating systems can be explored from two viewpoints: the user and the system.

User View: The user's view of the computer varies according to the interface being used.
Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and
system unit. Such a system is designed for one user to monopolize its resources. The goal is
to maximize the work (or play) that the user is performing. In this case, the operating system
is designed mostly for ease of use, with some attention paid to performance and none paid
to resource utilization: how various hardware and software resources are shared.
Performance is, of course, important to the user; but rather than resource utilization, such
systems are optimized for the single-user experience.

System View: From the computer's point of view, the operating system is the program most
intimately involved with the hardware. In this context, we can view an operating system as a
resource allocator. A computer system has many resources that may be required to solve
a problem: CPU time, memory space, file-storage space, I/O devices, and so on. The
operating system acts as the manager of these resources.

A control program manages the execution of user programs to prevent errors and improper
use of the computer. It is especially concerned with the operation and control of I/O
devices.

Following are the six services provided by operating systems for the convenience of the
users.

1. User interface: Almost all operating systems have a user interface (UI). This interface can
take several forms. One is a command-line interface (CLI); another is a graphical user
interface (GUI).

2. Program Execution: The purpose of computer systems is to allow the user to execute
programs. So the operating system provides an environment where the user can
conveniently run programs.

3. I/O Operations: Each program requires input and produces output. This involves the
use of I/O. The operating system provides I/O services, making it convenient for users to
run programs.

4. File System Manipulation: The output of a program may need to be written into new
files, or input taken from some files. The operating system provides this service. Finally, some
programs include permissions management to allow or deny access to files or directories
based on file ownership.

5. Communications: Processes need to communicate with each other to exchange
information during execution. This may be between processes running on the same computer
or on different computers. Communication can occur in two ways: (i) shared memory or
(ii) message passing.

6. Error Detection: An error in one part of the system may cause malfunctioning of the
complete system. To avoid such a situation, the operating system constantly monitors the
system to detect errors. This relieves the user of the worry that errors will propagate to
various parts of the system and cause malfunctions.

Following are the three services provided by operating systems for ensuring the efficient
operation of the system itself:

1. Resource allocation 2. Accounting 3. Protection

7) With a neat diagram, explain the concept of the virtual machine.

ANS:

 A virtual machine takes the layered approach to its logical conclusion. It treats
hardware and the operating system kernel as though they were all hardware.
 A virtual machine provides an interface identical to the underlying bare hardware.
 The operating system creates the illusion of multiple processes, each executing on
its own processor with its own (virtual) memory.
 The resources of the physical computer are shared to create the virtual machines.
✦ CPU scheduling can create the appearance that users have their own processor.
✦ Spooling and a file system can provide virtual card readers and virtual line printers.
✦ A normal user time-sharing terminal serves as the virtual machine operator's
console.

Advantages/Disadvantages of Virtual Machines

 The virtual-machine concept provides complete protection of system resources,
since each virtual machine is isolated from all other virtual machines. This isolation,
however, permits no direct sharing of resources.
 A virtual-machine system is a perfect vehicle for operating-system research and
development. System development is done on the virtual machine, instead of on a
physical machine, and so does not disrupt normal system operation.
 The virtual-machine concept is difficult to implement due to the effort required to
provide an exact duplicate of the underlying machine.

8) What is a system call? Explain the types of system calls.

ANS:

 System calls provide an interface between a process and the operating system.
 System calls allow user-level processes to request services from the operating
system that the process itself is not allowed to perform.
 For example, to perform I/O a process makes a system call telling the operating
system to read or write a particular area, and this request is satisfied by the
operating system.
 System calls can be grouped roughly into five major categories: process control, file
manipulation, device manipulation, information maintenance, and communications.

 Process control: A running program needs to be able to halt its execution either
normally (end) or abnormally (abort). If a system call is made to terminate the
currently running program abnormally, or if the program runs into a problem and
causes an error trap, a dump of memory is sometimes taken and an error message
generated.
 File management: We first need to be able to create and delete files. Either system
call requires the name of the file and perhaps some of the file's attributes. Once the
file is created, we need to open it and use it. We may also read, write, or
reposition (rewinding or skipping to the end of the file, for example). Finally, we
need to close the file, indicating that we are no longer using it.

 Device management: A process may need several resources to execute: main
memory, disk drives, access to files, and so on. If the resources are available, they
can be granted, and control can be returned to the user process. Otherwise, the
process will have to wait until sufficient resources are available.

 Information maintenance: Many system calls exist simply for the purpose of
transferring information between the user program and the operating system. For
example, most systems have a system call to return the current time and date. Other
system calls may return information about the system, such as the number of
current users, the version number of the operating system, the amount of free
memory or disk space, and so on.

 Communication: There are two common models of interprocess communication: the
message-passing model and the shared-memory model. In the message-passing
model, the communicating processes exchange messages with one another to
transfer information. Messages can be exchanged between the processes either
directly or indirectly through a common mailbox.
MODULE 2
1) What is process. Explain the different states of a process with a state diagram and
control block.

ANS: PROCESS:

A process is an instance of a program in execution. It is a fundamental concept in operating
systems, representing the active execution of code along with its required resources, such as
memory, CPU, and I/O.

Process State:

As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process.

Each process may be in one of the following states:

 New State: The process is being created.
 Ready State: A process is said to be ready if it needs a CPU to execute. A ready-state
process is runnable but temporarily stopped from running to let another process run.
 Running State: A process is said to be running if it has the CPU, that is, the process is
actually using the CPU at that particular instant.
 Blocked (or waiting) State: A process is said to be blocked if it is waiting for some event
to happen, such as an I/O completion, before it can proceed. Note that a process is
unable to run until some external event happens.
 Terminated State: The process has finished execution.

Process Control Block (PCB): Each process is represented in the operating system by a
process control block (PCB), also called a task control block. A PCB is shown in the figure
below. It contains many pieces of information associated with a specific process, including:
 Process state
 Program counter
 CPU registers
 CPU-scheduling information
 Memory-management information
 Accounting information
 I/O status information

Process state: The state may be new, ready, running, waiting, halted, and so on.

Program counter: The counter indicates the address of the next instruction to be
executed for this process.

CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information.

CPU-scheduling information: This information includes the process priority, pointers to
scheduling queues, and any other scheduling parameters.

Memory-management information: This may include such information as the values of
the base and limit registers, the page tables, or the segment tables, depending on the
memory system used by the operating system.

Accounting information: This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.

I/O status information: This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.

2) What is inter-process communication? Discuss message passing and the shared-memory
concept of IPC.

Answer:

 Concurrent execution of cooperating processes requires mechanisms that allow
processes to communicate with one another and to synchronize their actions.
 Cooperating processes require an interprocess communication (IPC) mechanism that
will allow them to exchange data and information.
 There are two fundamental models of interprocess communication:
(1) shared memory
(2) message passing.
 In the shared-memory model, a region of memory that is shared by cooperating
processes is established.
 Processes can then exchange information by reading and writing data to the shared
region. In the message-passing model, communication takes place by means of
messages exchanged between the cooperating processes.
 The two communication models are contrasted in the figure below.

Shared-Memory Systems
 Interprocess communication using shared memory requires communicating
processes to establish a region of shared memory.
 Typically, a shared-memory region resides in the address space of the process
creating the shared-memory segment.
 Other processes that wish to communicate using this shared-memory segment must
attach it to their address space.
 Normally the operating system prevents one process from accessing another
process's memory; shared memory requires that two or more processes agree to
remove this restriction.
 They can then exchange information by reading and writing data in the shared areas.
 Two types of buffers can be used. The unbounded buffer places no practical limit on
the size of the buffer.
 The consumer may have to wait for new items, but the producer can always produce
new items.
 The bounded buffer assumes a fixed buffer size.
 In this case, the consumer must wait if the buffer is empty, and the producer must
wait if the buffer is full.

Message-Passing Systems
 Message passing provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space and is particularly
useful in a distributed environment, where the communicating processes may reside
on different computers connected by a network.
 A message-passing facility provides at least two operations: send(message) and
receive(message).
 Messages sent by a process can be of either fixed or variable size. If only fixed-sized
messages can be sent, the system-level implementation is straightforward.
 This restriction, however, makes the task of programming more difficult. If processes
P and Q want to communicate, they must send messages to and receive messages
from each other; a communication link must exist between them.
 Here are several methods for logically implementing a link and the send()/receive()
operations:
• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering

3) What is a multithreaded process? Explain four benefits of multithreaded programming.

ANS:

Multithreading is a crucial concept in modern computing that allows multiple threads to
execute concurrently, enabling more efficient utilization of system resources.

1. Responsiveness

Multithreading in an interactive application may allow a program to continue running even if
part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to
the user. In a non-multithreaded environment, a server listens on a port for a request, and
when the request comes, it processes the request and only then resumes listening for another
request.

2. Resource Sharing

Processes may share resources only through techniques such as:

 Message Passing

 Shared Memory

Such techniques must be explicitly organized by the programmer. However, threads share the
memory and the resources of the process to which they belong by default. The benefit of
sharing code and data is that it allows an application to have several threads of activity within
the same address space.

3. Economy

Allocating memory and resources for process creation is a costly job in terms of time and
space. Since threads share the memory of the process to which they belong, it is more
economical to create and context-switch threads. Generally, much more time is consumed in
creating and managing processes than threads. In Solaris, for example, creating a process is
about 30 times slower than creating a thread, and context switching is about 5 times slower.

4. Scalability

The benefits of multithreading greatly increase on a multiprocessor architecture, where
threads may run in parallel on multiple processors. If there is only one thread, then it is not
possible to divide the process into smaller tasks that different processors can perform. A
single-threaded process can run on only one processor regardless of how many processors are
available. Multithreading on a multi-CPU machine increases parallelism.

5. Better Communication System

To improve inter-process communication, thread synchronization functions can be used.
Also, when huge amounts of data need to be shared across multiple threads of execution
inside the same address space, this provides extremely high bandwidth and low-latency
communication across the various tasks within the application.

6. Microprocessor Architecture Utilization

Every thread can execute in parallel on a distinct processor, which can considerably amplify
performance on a multiprocessor architecture. Multithreading enhances concurrency on a
multi-CPU machine. The CPU also switches among threads very quickly on a single-processor
architecture, creating the illusion of parallelism, though at any particular time only one thread
can run.

7. Minimized System Resource Usage

Threads have a minimal influence on the system's resources. The overhead of creating,
maintaining, and managing threads is lower than for a general process.

8. Enhanced Concurrency

Multithreading can enhance the concurrency of a multi-CPU machine, because it allows every
thread to be executed in parallel on a distinct processor.

9. Reduced Context-Switching Time

Threads minimize context-switching time because, in a thread context switch, the virtual
memory space remains the same.

4) Discuss in detail the multithreading models, their advantages and disadvantages, with
suitable illustrations.

ANS:

Many-to-One Model:

The many-to-one model (figure below) maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is efficient; but the
entire process will block if a thread makes a blocking system call.

Also, because only one thread can access the kernel at a time, multiple threads are unable to
run in parallel on multiprocessors. Green threads, a thread library available for Solaris, uses
this model, as does GNU Portable Threads.

One-to-One Model:

The one-to-one model (figure below) maps each user thread to a kernel thread. It provides
more concurrency than the many-to-one model by allowing another thread to run when a
thread makes a blocking system call; it also allows multiple threads to run in parallel on
multiprocessors. The only drawback to this model is that creating a user thread requires
creating the corresponding kernel thread.

Because the overhead of creating kernel threads can burden the performance of an
application, most implementations of this model restrict the number of threads supported by
the system. Linux, along with the family of Windows operating systems including Windows 95,
98, NT, 2000, and XP, implements the one-to-one model.

Many-to-Many Model:

The many-to-many model (figure below) multiplexes many user-level threads to a smaller or
equal number of kernel threads. The number of kernel threads may be specific to either a
particular application or a particular machine (an application may be allocated more kernel
threads on a multiprocessor than on a uniprocessor).

One popular variation on the many-to-many model still multiplexes many user-level threads
to a smaller or equal number of kernel threads but also allows a user-level thread to be bound
to a kernel thread. This variation, sometimes referred to as the two-level model (figure
below), is supported by operating systems such as IRIX, HP-UX, and Tru64 UNIX.

5) Explain five different scheduling criteria used in the CPU scheduling mechanism.

Answer:

• CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent.

• Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.

• Turnaround time: From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time.

• Waiting time: The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue.

• Response time: In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
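The criteria above can be made concrete with a small calculation. The sketch below computes waiting and turnaround times for a simple First-Come, First-Served schedule; the burst times are made-up example values, not from the text, and all processes are assumed to arrive at time 0.

```python
# Illustrative sketch: computing scheduling criteria for an FCFS order.
def fcfs_metrics(bursts):
    """Return (waiting times, turnaround times) for FCFS order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)            # time spent in the ready queue
        clock += burst                   # process runs to completion
        turnaround.append(clock)         # submission (t=0) to completion
    return waiting, turnaround

bursts = [24, 3, 3]                      # e.g. P1, P2, P3
w, t = fcfs_metrics(bursts)
print("waiting:", w)                     # [0, 24, 27]
print("turnaround:", t)                  # [24, 27, 30]
print("avg waiting:", sum(w) / len(w))   # 17.0
```

Note how the long first burst inflates the average waiting time, which is exactly the effect Gantt-chart problems are meant to expose.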

6) Problems using Gantt charts ..............


MODULE-3
1) Illustrate Peterson's solution for the critical-section problem.
Answer:
• A classic software-based solution to the critical-section problem is known as Peterson's solution.
• Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1.
• For convenience, when presenting Pi, we use Pj to denote the other process; that is, j equals 1 - i. Peterson's solution requires two data items to be shared between the two processes:
int turn;
boolean flag[2];
• The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate if a process is ready to enter its critical section.
• For example, if flag[i] is true, this value indicates that Pi is ready to enter its critical section. With the explanation of these data structures complete, we are now ready to describe the algorithm shown in the figure below.
• To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur, but will be overwritten immediately.
• To prove that mutual exclusion is preserved, we note that each Pi enters its critical section only if either flag[j] == false or turn == i.
• Also note that, if both processes were executing in their critical sections at the same time, then flag[i] == flag[j] == true.
• These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1, but cannot be both.
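The algorithm can be sketched directly in Python. This is an illustrative sketch, not OS code: on real hardware the stores to flag and turn would also need memory barriers, which CPython's interpreter effectively supplies here.

```python
# Sketch of Peterson's algorithm: two threads share a counter, and the
# flag/turn variables described above protect the increment.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often to exercise contention

flag = [False, False]   # flag[i]: Pi is ready to enter its critical section
turn = 0                # whose turn it is when both are ready
counter = 0             # shared data the critical section protects

def worker(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True                  # entry section: announce intent
        turn = j                        # give priority to the other process
        while flag[j] and turn == j:
            pass                        # busy-wait until it is safe to enter
        counter += 1                    # critical section
        flag[i] = False                 # exit section

threads = [threading.Thread(target=worker, args=(i, 5000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000: no increment was lost, so mutual exclusion held
```

Removing the entry and exit sections lets the two `counter += 1` updates interleave and lose increments, which is the race the algorithm prevents.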
2) Illustrate how the readers-writers problem can be solved using semaphores.

ANS: Readers-Writers Problem

• A data set is shared among a number of concurrent processes.

• Readers – only read the data set; they do not perform any updates.

• Writers – can both read and write.

• Problem – allow multiple readers to read at the same time, but only a single writer can access the shared data at a time.

• Shared data:

• The data set.

• Semaphore mutex initialized to 1.

• Semaphore wrt initialized to 1.

• Integer readcount initialized to 0.
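With those shared variables, the first readers-writers solution can be sketched as below; the reader and writer bodies are illustrative, using Python's threading.Semaphore for the semaphores named above.

```python
# Sketch of the first readers-writers solution with semaphores.
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # grants writers exclusive access
readcount = 0
shared_data = 0

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:           # first reader locks out writers
        wrt.acquire()
    mutex.release()
    _ = shared_data              # critical (reading) section
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader readmits writers
        wrt.release()
    mutex.release()

def writer(value):
    global shared_data
    wrt.acquire()                # writers wait for exclusive access
    shared_data = value          # critical (writing) section
    wrt.release()

threads = [threading.Thread(target=writer, args=(v,)) for v in (1, 2, 3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_data in (1, 2, 3))  # True: some write completed last
```

Because only the first reader acquires wrt and only the last releases it, any number of readers overlap while writers are held off, matching the problem statement above.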

3) What is a semaphore? State the dining-philosophers problem and give a solution using semaphores.

Ans:
• A semaphore is a synchronization tool used to solve various synchronization problems, and it can be implemented efficiently.
• It does not require busy waiting.
• It has two operations: wait() and signal().
• Semaphores are of two types: 1. Binary semaphores 2. Counting semaphores

Dining-Philosophers Problem

Consider five philosophers who spend their lives thinking and eating. The philosophers share a circular table surrounded by five chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.

A philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her and her left and right neighbors). A philosopher may pick up only one chopstick at a time. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing the chopsticks. When she is finished eating, she puts down both chopsticks and starts thinking again. It is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

Solution:

One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore. She releases her chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the shared data are

semaphore chopstick[5];

where all the elements of chopstick are initialized to 1. The structure of philosopher i is shown in the figure below.
Several possible remedies to the deadlock problem are:

• Allow at most four philosophers to be sitting simultaneously at the table.

• Allow a philosopher to pick up her chopsticks only if both chopsticks are available.

• Use an asymmetric solution: an odd-numbered philosopher picks up her left chopstick first and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick first and then her left chopstick.
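The asymmetric remedy listed last can be sketched as follows; the thread structure and meal counts are illustrative, assuming one Python threading.Semaphore per chopstick.

```python
# Sketch of the asymmetric dining-philosophers solution: odd philosophers
# take the left chopstick first, even philosophers the right one first,
# which breaks the circular wait that causes deadlock.
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # asymmetric pickup order prevents the deadlock cycle
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()       # wait() on the first chopstick
        chopstick[second].acquire()      # wait() on the second chopstick
        meals[i] += 1                    # eating
        chopstick[second].release()      # signal()
        chopstick[first].release()       # back to thinking

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [100, 100, 100, 100, 100]: every philosopher finished
```

If every philosopher instead grabbed her left chopstick first, all five could hold one chopstick and wait forever on the second, which is exactly the deadlock the remedies above avoid.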

4) What are two methods to eliminate deadlock?

Ans:

1. Deadlock Prevention

This method avoids the conditions that lead to deadlocks by systematically ensuring that at least one of the necessary conditions for deadlock cannot occur.

Necessary Conditions for Deadlock:

1. Mutual Exclusion: Only one process can use a resource at a time.

2. Hold and Wait: A process holding resources can request additional resources.

3. No Preemption: Resources cannot be forcibly taken from a process.

4. Circular Wait: A set of processes forms a cycle, each waiting for a resource held by the next process in the cycle.

Advantages:

• Proactively prevents deadlocks.

• Simplifies resource allocation policies.

Disadvantages:

• May lead to inefficient resource utilization.

• Requires a priori knowledge of resource requirements.

2. Deadlock Avoidance

This method ensures the system avoids unsafe states that could lead to deadlocks by dynamically analyzing resource allocation.

Banker's Algorithm:

• A well-known deadlock avoidance algorithm.

• Assesses whether granting a resource request will keep the system in a safe state.

• Safe State: A state where the system can allocate resources to processes in some order without leading to deadlock.

• Unsafe State: A state that could potentially lead to deadlock.

Steps in Deadlock Avoidance:

1. When a process requests a resource, check if fulfilling the request will leave the system in a safe state.

2. If yes, grant the request. If no, make the process wait.

Advantages:

• Avoids deadlocks while allowing flexibility in resource allocation.

• Reduces unnecessary delays compared to prevention.

Disadvantages:

• Computational overhead due to frequent state checks.

• Requires knowledge of maximum resource requirements in advance.
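The safety check at the heart of the Banker's algorithm can be sketched as below; the Allocation and Max matrices and the Available vector are made-up example values, not from the text.

```python
# Sketch of the Banker's safety algorithm: find an order in which every
# process can obtain its remaining need and finish.
def is_safe(available, allocation, maximum):
    """Return (safe?, safe sequence) for the given state."""
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend Pi runs to completion and returns its resources
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []       # remaining processes may deadlock
    return True, sequence

# Example: 5 processes, 3 resource types
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]
print(is_safe(available, allocation, maximum))  # (True, [1, 3, 4, 0, 2])
```

A request is granted only if the state that would result still passes this check; otherwise the process waits, as described in the steps above.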

5) Explain different methods to recover from deadlock.

ANS:

1. Process Termination

This method involves terminating processes to break the circular-wait condition and free resources. There are two approaches to process termination:

(a) Abort All Deadlocked Processes

• Terminate all processes involved in the deadlock.

• This guarantees recovery but can result in significant work loss.

(b) Abort One Process at a Time

• Terminate one process at a time and recheck for deadlock after each termination.

• This is a gradual approach and minimizes disruption but can be time-consuming.

2. Resource Preemption

This method involves forcibly reclaiming resources from processes to break the deadlock.

Steps in Resource Preemption:

1. Select a Process: Choose a process and preempt its resources.

2. Rollback: Roll back the process to a safe state, undoing its recent operations.

3. Reallocate Resources: Assign the preempted resources to other processes.

3. Process Rollback

• Involves reverting a process to an earlier checkpoint or safe state.

• The checkpoint is a saved state of the process's execution.

• After rollback, the process can restart from the checkpoint without holding the resources it previously used.

Advantages:

• Less disruptive compared to termination.

• Ensures partial progress of the process is not lost.

Disadvantages:

• Requires additional overhead to maintain checkpoints.

• May not be feasible if checkpoints are not implemented.

4. Wait-for Graph Reduction

• Used when a deadlock occurs in a system with minimal resource types.

• The wait-for graph (a directed graph showing which processes are waiting for resources held by other processes) is analyzed to identify the deadlock cycle.

• Break the cycle by:

o Terminating a process.

o Preempting resources.

5. Manual Intervention

• In some systems, especially less automated environments, administrators manually detect and resolve deadlocks by killing processes or reallocating resources.

6) Banker's algorithm problems


MODULE-4
1) What is paging? Illustrate with internal fragmentation and external fragmentation.

Ans:

Paging is a memory management technique used by operating systems to divide the process's memory and the physical memory into fixed-size blocks. These blocks are called pages (in logical memory) and frames (in physical memory). Paging helps manage memory efficiently and eliminates external fragmentation by allocating memory in fixed-size units.

Key Concepts in Paging

1. Page: Fixed-size block of the process's logical memory.

2. Frame: Fixed-size block of physical memory, the same size as a page.

3. Page Table: A data structure that maps pages to frames.

When a program accesses memory, the operating system translates the logical address into a physical address using the page table.

Internal Fragmentation

Definition: Internal fragmentation occurs when allocated memory has unused space within a page or block because the requested memory is smaller than the block size.

Example:

• Assume the page size is 4 KB.

• A process requests 6 KB of memory. It will require 2 pages (4 KB each).

• The second page will have 2 KB of unused space, resulting in internal fragmentation.

Illustration:

Page   | Requested Memory | Page Size | Wasted Space
Page 1 | 4 KB             | 4 KB      | 0 KB
Page 2 | 2 KB             | 4 KB      | 2 KB

Total Internal Fragmentation: 2 KB.

External Fragmentation

Definition: External fragmentation occurs when free memory is available, but it is fragmented into non-contiguous blocks, making it unusable for larger memory requests.

Example (Without Paging):

• Assume there is 12 KB of free memory split into blocks of 4 KB, 2 KB, and 6 KB.

• A process requests 8 KB of contiguous memory.

• Even though there is enough total memory (12 KB), the request cannot be satisfied because the memory is fragmented into smaller blocks.

Illustration:

Block   | Size | Free?
Block 1 | 4 KB | Yes
Block 2 | 2 KB | Yes
Block 3 | 6 KB | Yes

Result: The 8 KB request fails due to external fragmentation.
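Both examples above can be verified with a short calculation, using the page and block sizes from the text.

```python
# Internal fragmentation: 4 KB pages, a 6 KB request.
def internal_fragmentation(request_kb, page_kb):
    pages = -(-request_kb // page_kb)        # ceiling division
    return pages, pages * page_kb - request_kb

pages, wasted = internal_fragmentation(6, 4)
print(pages, "pages,", wasted, "KB wasted")   # 2 pages, 2 KB wasted

# External fragmentation (no paging): total free memory is sufficient,
# but no single contiguous block can satisfy the 8 KB request.
free_blocks = [4, 2, 6]
request = 8
print(sum(free_blocks) >= request)            # True  (enough in total)
print(any(b >= request for b in free_blocks)) # False (no block fits)
```

The last two lines capture why paging eliminates external fragmentation: once memory is handed out in fixed-size frames, any free frame can satisfy any page.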

2) What is a Translation Lookaside Buffer (TLB)? Explain the TLB in detail with a paging system and a neat diagram.

Ans:

Translation Lookaside Buffer

• A special, small, fast lookup hardware cache, called a translation look-aside buffer (TLB).

• Each entry in the TLB consists of two parts: a key (or tag) and a value.

• When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned. The search is fast; the hardware, however, is expensive. Typically, the number of entries in a TLB is small, often numbering between 64 and 1,024.

• The TLB contains only a few of the page-table entries.

Working:

• When a logical address is generated by the CPU, its page number is presented to the TLB.

• If the page number is found (TLB hit), its frame number is immediately available and used to access memory.

• If the page number is not in the TLB (TLB miss), a memory reference to the page table must be made. The obtained frame number can then be used to access memory (Figure 1).

• In addition, we add the page number and frame number to the TLB, so that they will be found quickly on the next reference.

• If the TLB is already full of entries, the OS must select one for replacement.

• The percentage of times that a particular page number is found in the TLB is called the hit ratio.

Advantage: The search operation is fast.

Disadvantage: The hardware is expensive.

• Some TLBs have wired-down entries that can't be removed.

• Some TLBs store an ASID (address-space identifier) in each entry, which uniquely identifies each process and provides address-space protection for that process.
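The effect of the hit ratio can be quantified with an effective-access-time calculation. The 20 ns TLB search and 100 ns memory-access figures below are assumed example values, not numbers from the text.

```python
# Sketch: effective memory-access time (EAT) for a paged system with a
# TLB. A hit costs one memory access; a miss costs an extra access to
# read the page-table entry from memory.
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    hit = tlb_ns + mem_ns            # frame number found in the TLB
    miss = tlb_ns + 2 * mem_ns       # extra access to read the page table
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.80))   # about 140 ns
print(effective_access_time(0.98))   # about 122 ns: a higher hit ratio
                                     # brings EAT close to one access
```

This is why even a small TLB pays off: with a high hit ratio, most references avoid the second memory access that a page-table walk would cost.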

3) Explain the structure of the page table with suitable diagrams.

ANS:

Some of the common techniques that are used for structuring the page table are as follows:

1. Hierarchical Paging

2. Hashed Page Tables

3. Inverted Page Tables

Let us cover these techniques one by one.

Hierarchical Paging

Another name for hierarchical paging is multilevel paging.

• There might be a case where the page table is too big to fit in a contiguous space, so we may have a hierarchy with several levels.

• In this type of paging, the logical address space is broken up into multiple page tables.

• Hierarchical paging is one of the simplest techniques, and for this purpose a two-level page table or a three-level page table can be used.

Two-Level Page Table

Consider a system having a 32-bit logical address space and a page size of 1 KB; the address is divided into:

• A page number consisting of 22 bits.

• A page offset consisting of 10 bits.

As we page the page table, the page number is further divided into:

• A page number consisting of 12 bits.

• A page offset consisting of 10 bits.

Thus the logical address is as follows:

In the above diagram,

P1 is an index into the outer page table.

P2 indicates the displacement within the page of the inner page table.

As address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table.

The figure below shows the address translation scheme for a two-level page table.
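The bit split described above can be checked numerically. The sketch below decomposes a 32-bit logical address into P1 (12 bits), P2 (10 bits), and a 10-bit offset; the address value itself is an arbitrary example.

```python
# Sketch: splitting a 32-bit logical address for the two-level scheme
# above (12-bit outer index P1, 10-bit inner index P2, 10-bit offset).
OFFSET_BITS, P2_BITS = 10, 10

def split_two_level(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = addr >> (OFFSET_BITS + P2_BITS)
    return p1, p2, offset

addr = 0x00C01A2F                            # arbitrary example address
p1, p2, off = split_two_level(addr)
print(p1, p2, off)                           # 12 6 559
assert (p1 << 20) | (p2 << 10) | off == addr # the split is lossless
```

Hardware performs the same decomposition: P1 selects an outer-table entry, that entry locates an inner table, P2 selects the frame, and the offset stays unchanged.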

Three-Level Page Table

For a system with a 64-bit logical address space, a two-level paging scheme is not appropriate. Let us suppose that the page size, in this case, is 4 KB. If we use the two-level scheme, then the outer page table becomes enormous and the addresses will look like this:

Thus, in order to avoid such a large table, the solution is to divide the outer page table further, which results in a three-level page table.

Hashed Page Tables:

This approach is used to handle address spaces that are larger than 32 bits.

• In this scheme, the virtual page number is hashed into a page table.

• This page table mainly contains a chain of elements hashing to the same location.

Each element mainly consists of:

1. The virtual page number.

2. The value of the mapped page frame.

3. A pointer to the next element in the linked list.

The figure below shows the address translation scheme of the hashed page table.

The virtual page numbers are compared in this chain, searching for a match; if a match is found, the corresponding physical frame is extracted.

Inverted Page Tables:

The inverted page table basically combines a page table and a frame table into a single data structure.

• There is one entry for each real page (frame) of memory.

• Each entry mainly consists of the virtual address of the page stored in that real memory location, along with information about the process that owns the page.

• Though this technique decreases the memory needed to store each page table, it increases the time needed to search the table whenever a page reference occurs.

The figure below shows the address translation scheme of the inverted page table.

In this scheme, we need to keep track of the process id in each entry, because many processes may have the same logical addresses.

Also, many entries can map to the same index in the page table after going through the hash function; chaining is used to handle this.

4) What is segmentation? Explain with a neat diagram.

Ans: Segmentation:

A process is divided into segments. The chunks that a program is divided into, which are not necessarily all of the same size, are called segments. Segmentation gives the user's view of the process, which paging does not provide. Here the user's view is mapped to physical memory.

Types of Segmentation in Operating Systems

• Virtual Memory Segmentation: Each process is divided into a number of segments, but the segmentation is not done all at once. This segmentation may or may not take place at the run time of the program.

• Simple Segmentation: Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A table stores the information about all such segments and is called the segment table.

What is a Segment Table?

It maps a two-dimensional logical address into a one-dimensional physical address. Each of its entries has:

• Base Address: It contains the starting physical address where the segment resides in memory.

• Segment Limit: Also known as segment offset. It specifies the length of the segment.
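Translation through such a segment table can be sketched as below; the base and limit values are made-up example entries.

```python
# Sketch of segment-table address translation: a logical address is the
# pair (segment, offset), and the offset is valid only if it is below
# the segment's limit.
segment_table = [
    {"base": 1400, "limit": 1000},   # segment 0
    {"base": 6300, "limit": 400},    # segment 1
    {"base": 4300, "limit": 1100},   # segment 2
]

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        # hardware would trap to the OS: addressing error
        raise MemoryError("segmentation fault: offset beyond limit")
    return entry["base"] + offset

print(translate(2, 53))    # 4353
print(translate(1, 399))   # 6699
# translate(1, 400) would raise, since the offset equals the limit
```

The limit check is what gives segmentation its protection property: a process cannot form a physical address outside its own segments.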

Advantages of Segmentation

1. Logical view of memory.

2. Efficient use of memory (reduces internal fragmentation).

3. Supports dynamic memory allocation.

4. Provides protection and isolation.

5. Simplifies sharing and relocation.

6. Facilitates modularity.

7. Improves performance for variable-sized memory allocation.

Disadvantages of Segmentation

1. External fragmentation.

2. Complex memory management.

3. Overhead of the segment table.

4. Segment size limitations.

5. Slower access compared to paging.

6. Requires memory compaction.

7. Lack of uniformity in memory layout.

5) What is demand paging? Explain the steps in handling a page fault using an appropriate diagram.

Ans: Demand Paging:

Demand paging is a memory management technique in which pages of a program are loaded into physical memory only when they are needed during execution. This approach minimizes the amount of memory used by a process and reduces the time required for program loading.

Handling a Page Fault:

If a page is needed that was not originally loaded, then a page-fault trap is generated. Steps in handling a page fault:

1. The memory address requested is first checked, to make sure it was a valid memory request.

2. If the reference is to an invalid page, the process is terminated. Otherwise, if the page is not present in memory, it must be paged in.

3. A free frame is located, possibly from a free-frame list.

4. A disk operation is scheduled to bring in the necessary page from disk.

5. After the page is loaded into memory, the process's page table is updated with the new frame number, and the invalid bit is changed to indicate that this is now a valid page reference.

6. The instruction that caused the page fault must now be restarted from the beginning.
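The steps above can be sketched as a small simulation; the page table, free-frame list, and reference string are illustrative, not an OS implementation.

```python
# Sketch: demand paging with a free-frame list. An access to a page not
# yet in memory triggers a "fault" that loads it and updates the table.
page_table = {}          # page -> frame (present pages only)
free_frames = [0, 1, 2]  # free-frame list
faults = 0

def access(page):
    global faults
    if page in page_table:            # valid and present: no fault
        return page_table[page]
    faults += 1                       # page-fault trap
    frame = free_frames.pop(0)        # step 3: locate a free frame
    # step 4: the disk read of `page` into `frame` would happen here
    page_table[page] = frame          # step 5: update table, mark valid
    return frame                      # step 6: restart the access

for p in [0, 1, 0, 2, 1]:
    access(p)
print(faults)  # 3: only the first touch of each page faults
```

Pure demand paging starts with an empty page table, so every first reference faults; subsequent references to the same page are ordinary memory accesses.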

6) Illustrate how demand paging affects system performance.

ANS:

Impact of Demand Paging on System Performance

Demand paging has both positive and negative effects on system performance, depending on the workload and system configuration. Below is an analysis of how it influences performance:

Positive Effects

1. Efficient Memory Utilization:

o Only the necessary pages are loaded, reducing memory usage.

o Allows more processes to run concurrently, improving overall system throughput.

2. Faster Program Startup:

o Programs do not need to load entirely into memory, which reduces initial loading time.

3. Scalability:

o Enables execution of programs larger than physical memory by loading pages on demand.

4. Reduced I/O Overhead:

o Eliminates unnecessary I/O operations for unused pages.

Negative Effects

1. Page Fault Overhead:

o A high number of page faults can significantly slow down the system due to the time required to load pages from secondary storage.

2. Thrashing:

o If the system spends more time handling page faults than executing processes, it leads to thrashing and poor performance.

3. Increased Latency:

o Accessing a page for the first time involves a delay due to the page fault and subsequent disk I/O.

4. Disk I/O Bottleneck:

o Frequent page loading increases the demand on disk I/O, potentially slowing down the entire system.
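The page-fault overhead can be quantified with the standard effective-access-time formula; the 200 ns memory access and 8 ms fault-service time below are assumed textbook-style figures, not values from the text.

```python
# Sketch: effective access time (EAT) under demand paging, where p is
# the page-fault rate. Even a tiny p dominates, because servicing a
# fault (disk I/O) is millions of times slower than a memory access.
def demand_paging_eat(p, mem_ns=200, fault_ns=8_000_000):
    return (1 - p) * mem_ns + p * fault_ns

print(demand_paging_eat(0.0))     # 200.0 ns: no faults
print(demand_paging_eat(0.001))   # about 8200 ns: a roughly 40x
                                  # slowdown from one fault per
                                  # thousand accesses
```

This arithmetic is why the negative effects above matter so much: keeping the fault rate extremely low is essential for demand paging to be a net win.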

7) What is thrashing? How can it be controlled?

ANS: Thrashing occurs in a virtual memory system when the operating system spends more time handling page faults than executing actual processes. This situation arises when processes require more memory than the system can provide, leading to constant swapping of pages between memory and disk.
How to Control Thrashing

1. Working Set Model:

o Monitor the working set (recently used pages) of each process.

o Ensure enough memory is allocated to hold the working sets.

2. Adjust the Multiprogramming Level:

o Reduce the number of processes running simultaneously to ensure sufficient memory for each process.

3. Page Fault Frequency (PFF) Control:

o Monitor the page-fault frequency and adjust the number of processes in the system dynamically.

o If the PFF is high, reduce the number of processes or allocate more memory.

4. Use a Better Page Replacement Algorithm:

o Adopt algorithms like Least Recently Used (LRU) or Optimal Page Replacement to minimize unnecessary page replacements.

5. Increase Physical Memory:

o Adding more RAM reduces reliance on disk-based virtual memory.

6. Pre-paging:

o Load pages likely to be used together into memory in advance, reducing page faults.

7. Partitioning:

o Divide memory among processes effectively to balance memory demands.

8) Problems:
MODULE-5
1) With a neat diagram, explain the two-level and tree (hierarchical) directory structures.

ANS:

Two-Level Directory

As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.

In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a new user id is created.

Two-Level Directory Structure

Advantages

• The main advantage is that there can be more than one file with the same name, which is very helpful when there are multiple users.

• Security is provided, preventing one user from accessing another user's files.

• Searching for files becomes very easy in this directory structure.

Disadvantages

• Along with the advantage of security, there is the disadvantage that a user cannot share a file with other users.

• Although users can create their own files, they do not have the ability to create subdirectories.

• Scalability is limited because a user can't group files of the same type together.

Tree Structure / Hierarchical Structure

The tree directory structure is the one most commonly used in our personal computers. Users can create files and subdirectories too, which was a disadvantage in the previous directory structures.

This directory structure resembles a real tree upside down, where the root directory is at the peak. This root contains all the directories for each user. The users can create subdirectories and even store files in their directories.

A user does not have access to the root directory data and cannot modify it. And, even in this directory structure, a user does not have access to other users' directories. The structure of the tree directory is given below, which shows how there are files and subdirectories in each user's directory.

Tree/Hierarchical Directory Structure

Advantages

• This directory structure allows subdirectories inside a directory.

• Searching is easier.

• Sorting of important and unimportant files becomes easier.

• This directory structure is more scalable than the other two directory structures explained.

Disadvantages

• As users aren't allowed to access other users' directories, file sharing among users is prevented.

• As users have the capability to make subdirectories, if the number of subdirectories increases, searching may become complicated.

• Users cannot modify the root directory data.

• If files do not fit in one directory, they might have to be fit into other directories.


2) Distinguish between a single-level directory and a two-level directory, with advantages and disadvantages.

ANS:

Aspect         | Single-Level Directory                         | Two-Level Directory
Structure      | All files in one directory shared by all users | Each user has their own directory of files
File Naming    | Names must be unique across the entire system  | Names need only be unique within each user's directory
User Isolation | None; all users share the same directory       | Each user has a private directory
Scalability    | Not scalable for many users or files           | More scalable; user directories organize files better
Complexity     | Simple to implement and manage                 | Slightly more complex due to multiple directories
File Search    | Slower, since all files are in one directory   | Faster, since files are grouped per user
Security       | Minimal; users can access all files            | Better; users cannot access other users' directories

Advantages and Disadvantages

Single-Level Directory

Advantages:

1. Simple and easy to implement.

2. Requires minimal overhead for directory management.

3. Easy to locate files in small systems.

Disadvantages:

1. No file name duplication allowed, causing conflicts in larger systems.

2. Poor organization of files, making it hard to locate specific ones.

3. Lack of user isolation and security.

Two-Level Directory

Advantages:

1. Provides user isolation; each user manages their own directory.

2. Allows duplicate file names in different user directories.

3. Improves file organization and search efficiency.

Disadvantages:

1. Slightly more complex to implement and manage.

2. Requires additional overhead to maintain user-specific directories.

3. Cross-user file sharing requires additional mechanisms.

3) Discuss various directory structures with neat diagrams.

ANS:

• Single-Level Directory

• Two-Level Directory

• Tree Structure / Hierarchical Structure

• Acyclic-Graph Structure

• General-Graph Directory Structure

1) Single-Level Directory

The single-level directory is the simplest directory structure. In it, all files are contained in the same directory, which makes it easy to support and understand.

A single-level directory has a significant limitation, however, when the number of files increases or when the system has more than one user. Since all the files are in the same directory, they must have unique names. If two users call their data file "test", the unique-name rule is violated.

2) Two-Level Directory

As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a new user id is created.

3) Tree Structure / Hierarchical Structure

The tree directory structure is the one most commonly used in our personal computers. Users can create files and subdirectories too, which was a disadvantage in the previous directory structures.

This directory structure resembles a real tree upside down, where the root directory is at the peak. This root contains all the directories for each user. The users can create subdirectories and even store files in their directories.

4) Acyclic-Graph Structure

In the three directory structures above, none has the capability to access one file from multiple directories. A file or subdirectory can be accessed through the directory it is present in, but not from another directory.

This problem is solved in the acyclic-graph directory structure, where a file in one directory can be accessed from multiple directories. In this way, files can be shared between users. It is designed so that multiple directories point to a particular directory or file with the help of links.

5) General-Graph Directory Structure

Unlike the acyclic-graph directory, which avoids loops, the general-graph directory can have cycles, meaning a directory can contain paths that loop back to the starting point. This can make navigating and managing files more complex.

4) What is a file? Explain the different methods of allocation.

ANS:

A file is a named collection of related information that is recorded on secondary storage. The allocation methods define how files are stored in disk blocks. There are three main disk-space or file allocation methods:

• Contiguous Allocation

• Linked Allocation

• Indexed Allocation

1. Contiguous Allocation

In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file requires n blocks and is given a block b as the starting location, then the blocks assigned to the file will be: b, b+1, b+2, ..., b+n-1. This means that given the starting block address and the length of the file (in terms of blocks required), we can determine the blocks occupied by the file. The directory entry for a file with contiguous allocation contains:

• Address of the starting block.

• Length of the allocated portion.

The file 'mail' in the following figure starts from block 19 with length = 6 blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23, and 24.
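The 'mail' example can be sketched as below, showing both the occupied block list and direct access to the k-th block via simple arithmetic.

```python
# Sketch of contiguous allocation: start block b = 19, length n = 6,
# so the file occupies blocks b .. b+n-1, and the k-th block is b + k.
def contiguous_blocks(start, length):
    return list(range(start, start + length))

def kth_block(start, k):
    return start + k          # direct access: one arithmetic step, no seeks

print(contiguous_blocks(19, 6))   # [19, 20, 21, 22, 23, 24]
print(kth_block(19, 3))           # 22
```

Contrast this with linked allocation below, where finding block k requires following k pointers from the starting block.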

Advantages:

 Both the Sequen al and Direct Accesses are supported by this. For direct access, the
address of the kth block of the file which starts at block b can easily be obtained as
(b+k).

 This is extremely fast since the number of seeks are minimal because of con guous
alloca on of file blocks.

Disadvantages:

 This method suffers from both internal and external fragmenta on. This makes it
inefficient in terms of memory u liza on.

 Increasing file size is difficult because it depends on the availability of con guous
memory at a par cular instance.

2. Linked List Alloca on

In this scheme, each file is a linked list of disk blocks which need not be con guous. The disk
blocks can be sca ered anywhere on the disk.
The directory entry contains a pointer to the star ng and the ending file block. Each block
contains a pointer to the next block occupied by the file.
The file ‘jeep’ in following image shows how the blocks are randomly distributed. The last block
(25) contains -1 indica ng a null pointer and does not point to any other block.

Advantages:

 This is very flexible in terms of file size. The file size can be increased easily since the
system does not have to look for a contiguous chunk of memory.

 This method does not suffer from external fragmentation. This makes it relatively better
in terms of memory utilization.

Disadvantages:

 Because the file blocks are distributed randomly on the disk, a large number of seeks is
needed to access every block individually. This makes linked allocation slower.

 It does not support random or direct access. We cannot directly access the blocks of a
file. Block k of a file can be accessed only by traversing k blocks sequentially (sequential
access) from the starting block of the file via block pointers.

 The pointers required by linked allocation incur some extra overhead.
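The k-block traversal can be sketched as follows (a minimal simulation; the chain 9 → 16 → 1 → 10 → 25 is the classic textbook figure for 'jeep' and is assumed here, the source above only states that the last block is 25):

```python
# Minimal sketch of linked allocation: each disk block stores a pointer to
# the next block of the file; -1 marks the end of the chain.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}   # illustrative chain
directory = {"jeep": {"start": 9, "end": 25}}

def block_of(name, k):
    """Reach the k-th block only by following k pointers from the start."""
    block = directory[name]["start"]
    for _ in range(k):
        block = next_block[block]
        if block == -1:
            raise IndexError("block index past end of file")
    return block

print(block_of("jeep", 3))   # 3 pointer hops: 9 -> 16 -> 1 -> 10
```

Unlike the contiguous case, every access to block k costs k pointer follow-ups (and, on a real disk, up to k seeks), which is exactly the slowness noted above.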

3. Indexed Allocation

In this scheme, a special block known as the index block contains the pointers to all the blocks
occupied by a file. Each file has its own index block. The ith entry in the index block contains the
disk address of the ith file block. The directory entry contains the address of the index block, as
shown in the image.

Advantages:

 This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.

 It overcomes the problem of external fragmentation.

Disadvantages:

 The pointer overhead for indexed allocation is greater than for linked allocation.

 For very small files, say files that span only 2-3 blocks, indexed allocation keeps one
entire block (the index block) for the pointers, which is inefficient in terms of memory
utilization. In linked allocation, by contrast, we lose the space of only one pointer per
block.
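The index-block lookup can be sketched as follows (a minimal simulation; the index block number 19 and the block list are illustrative, with -1 marking unused index entries):

```python
# Minimal sketch of indexed allocation: the directory points to an index
# block, and the i-th entry of the index block holds the i-th file block.
index_blocks = {19: [9, 16, 1, 10, 25, -1, -1, -1]}  # -1 = unused entry
directory = {"jeep": {"index_block": 19}}

def block_of(name, i):
    """Direct access: one lookup in the index block, no chain traversal."""
    index = index_blocks[directory[name]["index_block"]]
    if i >= len(index) or index[i] == -1:
        raise IndexError("block index past end of file")
    return index[i]

print(block_of("jeep", 4))   # -> 25, without visiting blocks 0..3
```

The last block is reached in a single lookup, which is the direct-access advantage listed above; the cost is the dedicated index block, which is wasteful for very small files.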

5) Explain in detail the various file operations in a file system.

ANS:

File Operations in a File System

A file system allows users and applications to perform various operations on files. These
operations can be categorized into basic and advanced functionalities. Below is an overview of
the key file operations:

1. File Creation

 Process:

o Allocate space in the file system.

o Create an entry in the directory structure to store metadata (e.g., file name, type,
permissions).

 Example: Creating a new text file using touch (Linux) or New File (GUI).

2. File Opening

 Process:

o Locate the file in the directory structure.

o Load file metadata into memory.

o Return a file handle or descriptor for further operations.

 Example: Opening a document using open() in Python.

3. File Reading

 Process:

o Read data from the file into memory.

o Use the file pointer to track the position.

 Modes: Sequential or random access.

 Example: Reading a file using fread() (C library) or the read() system call.

4. File Writing

 Process:

o Write data from memory to the file.

o Update the file pointer after writing.

o Allocate more disk space if needed.

 Modes: Overwrite or append.

 Example: Writing to a file using fwrite() (C library) or the write() system call.

5. File Closing

 Process:

o Flush any unwritten data to the disk.

o Release the file handle or descriptor.

 Example: Using fclose() in C or close() in Python.
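Operations 1–5 above can be exercised together with Python's built-in file API (a minimal sketch; the temporary path and file contents are illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.txt")  # illustrative path

f = open(path, "w")        # create + open: returns a file object (handle)
f.write("hello, file\n")   # write: data moves from memory into the file
f.close()                  # close: flushes unwritten data, releases handle

f = open(path, "r")        # open the existing file for reading
data = f.read()            # read: file contents are loaded into memory
f.close()

print(data)                # -> hello, file
```

In practice a `with open(...) as f:` block is preferred, since it guarantees the close (and therefore the flush) even if an exception occurs mid-operation.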

6. File Deletion

 Process:

o Remove the file entry from the directory structure.

o Free the allocated disk space.

 Example: Using rm (Linux) or Delete (GUI).

7. File Truncation

 Process:

o Reduce the file size to a specified length (often zero).

o Keeps the file metadata intact.

 Example: Using the truncate() system call.

8. File Renaming

 Process:

o Change the file name in the directory structure.

o Keeps the file content and location intact.

 Example: Using the rename() system call or the mv command.

9. File Appending

 Process:

o Add data to the end of an existing file.

o Does not overwrite the existing data.

 Example: Using append mode (a or a+ in Python).
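Append mode can be demonstrated in a few lines (a minimal sketch; the temporary path and line contents are illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.txt")   # illustrative path

with open(path, "w") as f:      # write mode: creates/overwrites the file
    f.write("line 1\n")

with open(path, "a") as f:      # append mode: writes go to the end
    f.write("line 2\n")

with open(path) as f:
    text = f.read()

print(text)                     # both lines survive; nothing is overwritten
```

Had the second open used mode `"w"` instead of `"a"`, the first line would have been lost, which is exactly the overwrite-vs-append distinction from operation 4.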

10. File Searching

 Process:

o Locate a file based on its name, attributes, or metadata.

 Example: Using find (Linux) or Search in a GUI.

11. File Copying

 Process:

o Duplicate a file's content to a new location.

 Example: Using cp (Linux) or Copy-Paste in a GUI.

12. File Metadata Operations

 Process:

o Retrieve or modify metadata such as size, permissions, and timestamps.

 Example: Using stat() to retrieve information or chmod to change permissions.
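The stat/chmod pair above maps directly onto Python's os module (a minimal sketch assuming a POSIX system; the temporary path and permission bits are illustrative):

```python
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")   # illustrative path
with open(path, "w") as f:
    f.write("12345")

info = os.stat(path)                  # retrieve metadata from the inode
print(info.st_size)                   # size in bytes -> 5

os.chmod(path, stat.S_IRUSR)          # modify permissions: owner read-only
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                      # -> 0o400
```

Timestamps are available the same way (`info.st_mtime`, `info.st_atime`), so one stat() call serves all the metadata reads listed above.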

6) Explain the various access methods of files.

ANS:

Access Methods of Files

File access methods determine how data in a file can be read, written, or manipulated. Different
applications require different access patterns, and file systems support these patterns through
various access methods. Below are the primary access methods:

1. Sequential Access

 Definition: Data is accessed in sequential order, starting from the beginning of the file
and proceeding in order.

 Key Features:

o Simplest and most common method.

o Data is read or written sequentially.

 Operations:

o read_next(): Reads the next block of data.

o write_next(): Writes data at the current position and advances the pointer.

 Examples:

o Reading a text file line by line.

o Processing log files.

 Advantages:

o Simple and efficient for sequentially arranged data.

 Disadvantages:

o Inefficient for random data access.

2. Direct (Random) Access

 Definition: Allows data to be accessed at any location in the file directly, without
following a sequence.

 Key Features:

o The file is viewed as a collection of fixed-size blocks or records.

o Access is based on the record or block number.

 Operations:

o read(n): Reads the nth record directly.

o write(n): Writes data to the nth record directly.

o seek(n): Moves the file pointer to the nth record.

 Examples:

o Accessing database records by ID.

o Media playback where users jump to specific timestamps.

 Advantages:

o Faster access for applications requiring non-sequential access.

 Disadvantages:

o More complex to implement.

o May lead to fragmented storage.

3. Indexed Access

 Definition: Uses an index to keep track of file records, enabling both sequential and
random access.

 Key Features:

o An index table is maintained, mapping keys to file locations.

o Searching the index allows efficient retrieval of records.

 Operations:

o search_index(key): Finds the index entry for a specific key.

o read_by_index(key): Reads the corresponding record.

 Examples:

o Accessing a specific page in a document using a table of contents.

o Using database indexes for efficient query execution.

 Advantages:

o Combines the benefits of sequential and direct access.

o Efficient for large files with frequent lookups.

 Disadvantages:

o Overhead of maintaining the index.

o Index corruption can affect access.

4. Hierarchical Access

 Definition: Data is accessed in a hierarchical manner, often through a tree-like structure.

 Key Features:

o Commonly used in directories or multi-level file systems.

o Access involves navigating through hierarchical paths.

 Examples:

o Accessing files in a nested directory structure.

o XML and JSON data parsing.

 Advantages:

o Facilitates organized and logical access to complex data.

 Disadvantages:

o Navigation overhead for deeply nested structures.

5. Content-Based Access

 Definition: Data is accessed based on its content rather than its location or index.

 Key Features:

o Relies on searching the file for specific patterns or keywords.

 Examples:

o Searching for a word in a document.

o Querying a database for rows matching specific criteria.

 Advantages:

o Flexible and intuitive for users.

 Disadvantages:

o Computationally expensive for large datasets.

7) Explain the access matrix method of system protection, with domains as objects, and
its implementation.

ANS:

8) Explain the various disk scheduling algorithms.

9) Problems
