QB Module II New

The document contains a question and answer bank for CST 206 Operating System, focusing on various topics such as scheduling algorithms, inter-process communication (IPC), process states, and context switching. It includes questions from past exams, providing definitions, explanations, and examples related to operating system concepts. The document also discusses the significance of different scheduling methods and IPC mechanisms like shared memory and message passing.

Uploaded by

theerthasjoy

QUESTION AND ANSWER BANK

CST 206 OPERATING SYSTEM


MODULE II
PART A (3 MARKS)
1. Differentiate pre-emptive and non-pre-emptive scheduling giving application of each of
them. [July 2021]
Non-preemptive: A new process is scheduled only when the running process no longer wants the CPU. Once a process is allocated the CPU, it cannot be moved out until it completes its execution (or blocks).
This applies when processes are executed on a first-come, first-served basis.
Preemptive: A new process can be scheduled even when the current process does not intend to give up the CPU, so a process can be moved out before it completes its execution.
Preemption is needed when using a priority or shortest-job-first algorithm.

2. Why is context switching considered to be an overhead to the system? [July 2021]

When switching between processes, the PCB details of the process being taken out must be saved, and the PCB details of the incoming process must be loaded. The CPU does no useful work during these activities, so the time they consume is considered overhead of switching between processes.
3. How many times will forked get printed in the below program. Justify your answer. [June
2022]
int main()
{
    fork();
    fork();
    printf("forked\n");
    return 0;
}
"forked" will get printed four times. The first fork() creates one child, giving 2 processes; both then execute the second fork(), giving 4 processes in total. Each of the 4 processes executes the printf() once.
4. List and explain various synchronous and asynchronous methods of message passing in IPC.
[June 2022]

Communication between processes takes place through calls to send() and receive() primitives. Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous, respectively.
In synchronous mode, the send and receive operations take place as follows:
Blocking send - The sending process is blocked until the message is received by the receiving
process or by the mailbox.
Blocking receive - The receiver blocks until a message is available.
In asynchronous mode, the send and receive operations take place as follows:
Non-blocking send - The sending process sends the message and resumes operation.
Nonblocking receive - The receiver retrieves either a valid message or a null.
5. Explain the different buffering mechanisms used in message passing systems. [June 2023]
Whether communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue. Such queues can be implemented in three ways:
Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If
the queue is not full when a new message is sent, the message is placed in the queue (either
the message is copied or a pointer to the message is kept), and the sender can continue
execution without waiting. The link’s capacity is finite, however. If the link is full, the sender
must block until space is available in the queue.
Unbounded capacity. The queue’s length is potentially infinite; thus, any number of messages
can wait in it. The sender never blocks.
PART B

6. Explain how long-term scheduler directly affects system performance. [July 2021] [5 MARKS]
The long-term scheduler selects jobs from the job pool and admits them to the ready queue for execution by the CPU. Its primary function is to maintain a good mix of CPU-bound jobs and I/O-bound jobs.
CPU Bound Jobs: CPU-bound jobs are tasks or processes that necessitate a significant amount
of CPU processing time and resources (Central Processing Unit). These jobs can put a
significant strain on the CPU, affecting system performance and responsiveness.
I/O Bound Jobs: I/O bound jobs are tasks or processes that necessitate many input/output
(I/O) operations, such as reading and writing to discs or networks. These jobs are less
dependent on the CPU and can put a greater strain on the system’s I/O subsystem.

So the long-term scheduler picks a correct mixture of CPU-bound and I/O-bound jobs: if it admits only CPU-bound jobs the I/O devices sit idle, and if it admits only I/O-bound jobs the CPU sits idle, so its choice of mix directly affects system performance.
7. A writer process needs to send a bulk of information to reader process. Explain the IPC
mechanism that can be used for this purpose. [July 2021] [8 MARKS]

For sending a bulk of information, the shared-memory model of IPC can be used. Interprocess communication using shared memory requires the communicating processes to establish a region
of shared memory. A shared-memory region resides in the address space of the process
creating the shared-memory segment. Other processes that wish to communicate using this
shared-memory segment must attach it to their address space. They can then exchange
information by reading and writing data in the shared areas. The processes are also
responsible for ensuring that they are not writing to the same location simultaneously.

8. How many child process will be created for following code. [July 2021] [6 MARKS]
int main()
{
    fork();
    fork();
    printf("HELLO\n");
    fork();
    printf("WELCOME\n");
    return 0;
}
How many times HELLO and WELCOME will be printed?
Seven child processes will be created by the above code (2^3 = 8 processes in total, including the original). HELLO will be printed 4 times and WELCOME will be printed 8 times.
9. Five batch processes A to E arrive in the system in the order A to E at the same time. They have running times of 6, 4, 1, 3, 7 and priorities 3, 5, 2, 1, 4 respectively, with 5 being the highest priority.
Calculate the average waiting time using [July 2021] [12 MARKS]

a. RR Time quantum = 2
b. FCFS
c. SJF
d. Priority
FCFS

Process Run time Waiting time


A 6 0
B 4 6
C 1 10
D 3 11
E 7 14

Average waiting time = (0+6+10+11+14)/5 = 41/5 = 8.2


SJF
Process Run time Waiting time
A 6 8
B 4 4
C 1 0
D 3 1
E 7 14

Average waiting time = (8+4+0+1+14)/5 = 27/5 = 5.4


PRIORITY

Process Run time Priority Waiting time


A 6 3 11
B 4 5 0
C 1 2 17
D 3 1 18
E 7 4 4

Average waiting time = (11+0+17+18+4)/5 = 50/5 = 10


RR Scheduling

Process Run time Waiting time


A 6 12
B 4 9
C 1 4
D 3 11
E 7 14

Average waiting time = (12+9+4+11+14)/5 = 50/5 = 10


10. Point out the significance of zero capacity queue in IPC. [July 2021] [2 MARKS]

The queue has a maximum length of zero; thus, the link cannot have any messages waiting in
it. In this case, the sender must block until the recipient receives the message.
11. What is meant by context switching? Illustrate the context switching with a diagram. [June
2022] [6 MARKS]

Switching the CPU between processes is called context switching. Whenever a process must wait for an interrupt to be serviced or for I/O, it is removed from execution and another process is selected for execution. When a process is removed, its status must be saved so that it can later resume from the point where it left off: the PCB of the outgoing process is saved and the PCB of another process that is ready for execution is loaded.
12. Differentiate between the following [June 2022] [8 MARKS]

a. Long term scheduler and short term scheduler


b. Pre-emptive and non pre-emptive scheduling
Long term scheduler
The long-term scheduler selects jobs from the job pool and admits them to the ready queue for execution by the CPU. Its primary function is to maintain a good mix of CPU-bound jobs and I/O-bound jobs.
Short-term scheduler
Selects from the processes that are ready to execute and allocates the CPU to one of them. Scheduling algorithms such as FCFS, SJF, Priority and Round Robin are used to select one of the processes in the ready queue.
Non-preemptive: A new process is scheduled only when the running process no longer wants the CPU. Once a process is allocated the CPU, it cannot be moved out until it completes its execution (or blocks).
This applies when processes are executed on a first-come, first-served basis.
Preemptive: A new process can be scheduled even when the current process does not intend to give up the CPU, so a process can be moved out before it completes its execution.
Preemption is needed when using a priority or shortest-job-first algorithm.
13. Explain different states of a process and transition between them with help of a diagram.
[June 2022] [7 MARKS]

As a process executes, it changes state


a. new: Process is being created.
b. running: Instructions are being executed.
c. waiting: Process is waiting for some event to occur (I/O completion or interrupt).
d. ready: Process is waiting to be assigned to a processor.
e. terminated: Process has finished execution.
14. What is a process? Explain different states of a process.
Process is a program in execution. It is an active entity. As a process executes, it changes state
a. new: Process is being created.
b. running: Instructions are being executed.
c. waiting: Process is waiting for some event to occur (I/O completion or interrupt).
d. ready: Process is waiting to be assigned to a processor.
e. terminated: Process has finished execution.

15. With an example, illustrate the inter process communication using Shared memory. [June
2022] [7 MARKS]

Communication between processes using shared memory requires the processes to share some variable, and it depends entirely on how the programmer implements it. One way of communication using shared memory can be imagined like this: suppose process1 and process2 are executing simultaneously and share some resources or use information from each other. Process1 generates information about certain computations or resources being used and keeps it as a record in shared memory. When process2 needs to use the shared information, it checks the record stored in shared memory, takes note of the information generated by process1, and acts accordingly. Processes can use shared memory both for extracting information recorded by another process and for delivering specific information to other processes.
Example: Producer-Consumer problem

i. There are two processes: Producer and Consumer.


ii. The Producer produces items and the Consumer consumes them.
iii. The two processes share a common space or memory location known as a buffer, where the items produced by the Producer are stored and from which the Consumer takes them when needed.

16. What are threads? What are the benefits of multithreaded programming? List the
ways of establishing relationship between user threads and kernel threads. [June
2022] [6 MARKS]

Threads are mechanisms that permit an application to perform multiple tasks concurrently. A thread, sometimes called a lightweight process, is a basic unit of CPU utilization.
A thread comprises:
a. Thread ID
b. Program counter
c. Register set
d. Stack
Benefits
 Responsiveness
Interactive application can delegate background functions to a thread and keep
running.
 Resource Sharing
Several different threads can access the same address space
 Economy
Allocating memory and resources for new processes is costly. Threads are much
‘cheaper’ to initiate.
 Scalability
Use threads to take advantage of multiprocessor architecture

Establishing relation of user threads and kernel threads

 Many-to-One
Many user-level threads mapped to single kernel thread

 One-to-One
Each user-level thread maps to kernel thread
 Examples
 Windows NT/XP/2000
 Linux

 Many-to-Many
Allows many user level threads to be mapped to many kernel threads
Allows the operating system to create enough kernel threads
 Example
Solaris (prior to version 9), Windows NT/2000 with the ThreadFiber package

17. Assume you have the following jobs in a system that are to be executed with a
single processor. Now,

Process Arrival time Burst time


P0 0 75
P1 10 40
P2 10 25
P3 55 30
P4 95 45
[June 2022] [8 MARKS]

i) Create a Gantt chart illustrating the execution


ii) Find the average waiting time
iii) Find the average turnaround time
For the above processes, when the system uses
a) Pre-emptive Scheduling b) RR Scheduling (Time Quantum = 15 ms)
