Question Bank
10211CS103 / Operating Systems
Winter 2024-25
Unit – II
Process Synchronization: The Critical-Section Problem - Semaphores - Mutex Locks -
Classic Problems of Synchronization - Monitors. Case Study: Windows Threads and
Linux Threads
Part-A
1. Name some classic problems of synchronization.
The Bounded-Buffer Problem
The Readers-Writers Problem
The Dining-Philosophers Problem
2. Define entry section and exit section.
The critical section problem is to design a protocol that the processes can use to cooperate.
Each process must request permission to enter its critical section. The section of the code
implementing this request is the entry section. The critical section is followed by an exit
section. The remaining code is the remainder section.
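The four sections can be sketched in Python (illustrative only; the lock stands in for any entry/exit-section protocol, and the counter is a hypothetical shared variable):

```python
import threading

lock = threading.Lock()
counter = 0          # shared data touched only in the critical section

def worker():
    global counter
    lock.acquire()   # entry section: request permission to enter
    counter += 1     # critical section: access shared data
    lock.release()   # exit section: allow another process in
    # remainder section: code that needs no shared data

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the entry and exit sections in place, the five increments cannot interleave, so `counter` ends at 5.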
3. List out the benefits and challenges of thread handling.
• Improved throughput.
• Simultaneous and fully symmetric use of multiple processors for computation and
I/O.
• Superior application responsiveness.
• Improved server responsiveness.
• Minimized system resource usage.
• Program structure simplification.
• Better communication.
4. Define context switching.
Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as context switch.
5. Define monitors.
A high-level synchronization construct. A monitor type is an abstract data type (ADT) that
presents a set of programmer-defined operations that are provided with mutual exclusion
within the monitor.
6. Differentiate a Thread from a Process.
Threads
• Will by default share memory
• Will share file descriptors
• Will share file system context
• Will share signal handling
Processes
• Will by default not share memory
• Most file descriptors not shared
• Don't share file system context
• Don't share signal handling
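The memory-sharing difference can be seen in a small Python sketch (illustrative; a separate process created with `multiprocessing` would instead work on its own copy of `shared`):

```python
import threading

shared = []   # data in the process's address space

def worker():
    # A thread writes directly into the memory it shares with its parent.
    shared.append("seen")

t = threading.Thread(target=worker)
t.start()
t.join()
# The thread's update is visible here because threads share memory by default.
```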
7. Define mutual exclusion.
Mutual exclusion is the requirement that no two processes or threads are in their critical
sections at the same time, i.e., if process Pi is executing in its critical section, then no other
process can be executing in its critical section.
8. Define semaphore. Mention its importance in operating systems.
A semaphore S is a synchronization tool: an integer variable that, apart from initialization,
is accessed only through two standard atomic operations, wait() and signal(). Semaphores can
be used to solve the n-process critical-section problem and various other synchronization
problems.
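As an illustrative Python sketch (thread names are hypothetical), `threading.Semaphore` exposes the wait/signal pair as `acquire()`/`release()`; here a counting semaphore initialized to 2 guards two identical resources:

```python
import threading

sem = threading.Semaphore(2)   # counting semaphore: 2 resource instances
results = []

def use_resource(name):
    sem.acquire()              # wait(S): block if no resource instance is free
    results.append(name)       # at most 2 threads hold a resource at once
    sem.release()              # signal(S): return the resource instance

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four threads eventually get through, but never more than two inside the guarded region at a time.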
9. How may mutual exclusion be violated if the signal and wait operations are not
executed atomically?
A wait operation atomically decrements the value associated with a semaphore. If two wait
operations are executed on a semaphore whose value is 1, and the two operations are not
performed atomically, then it is possible that both operations proceed to decrement the
semaphore value, thereby violating mutual exclusion.
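The violation can be made concrete with a deterministic Python sketch (the `BrokenSemaphore` class is hypothetical): wait() is split into a separate read and write, and the two steps of two "processes" are interleaved by hand:

```python
class BrokenSemaphore:
    """A semaphore whose wait() is NOT atomic: read and write are separate."""
    def __init__(self, value):
        self.value = value

    def read(self):
        return self.value        # step 1 of wait(): read S

    def write(self, seen):
        self.value = seen - 1    # step 2 of wait(): S = S - 1

s = BrokenSemaphore(1)
seen_p1 = s.read()   # P1 reads S = 1 ...
seen_p2 = s.read()   # ... P2 is scheduled and also reads S = 1
s.write(seen_p1)     # P1 decrements and enters its critical section
s.write(seen_p2)     # P2 also decrements and enters: mutual exclusion violated
```

Both "processes" observed S = 1, so both proceed past wait() even though only one should.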
Part-B
1. Explain the role of a counting semaphore in resource management.
2. Discuss the features of a monitor.
A monitor is a synchronization mechanism in operating systems that manages access to shared
resources. The key features are:
o Encapsulation of Shared Resources
o Automatic Mutual Exclusion
o Condition Variables
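The three features can be sketched in Python (illustrative only): one lock plus `threading.Condition` objects approximate a monitor encapsulating a bounded buffer with condition variables:

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: shared state + one lock + conditions."""
    def __init__(self, capacity):
        self.buf = deque()                           # encapsulated shared resource
        self.capacity = capacity
        self.lock = threading.Lock()                 # automatic mutual exclusion
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:                              # only one thread inside
            while len(self.buf) == self.capacity:
                self.not_full.wait()                 # condition variable: wait
            self.buf.append(item)
            self.not_empty.notify()                  # condition variable: signal

    def get(self):
        with self.lock:
            while not self.buf:
                self.not_empty.wait()
            item = self.buf.popleft()
            self.not_full.notify()
            return item

buf = BoundedBuffer(2)
buf.put("a")
buf.put("b")
first_item = buf.get()
```

Callers never touch `buf.buf` directly; all access goes through the monitor's operations, which the lock serializes.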
3. Explain how bounded waiting prevents starvation of processes
4. Explain the process synchronization with (i) Producer-Consumer problem (ii) Reader-
Writer problem.
5. Illustrate the three requirements for critical section problem and provide the solutions
for the critical section problem.
6. Illustrate in detail about semaphore and its usage for solving synchronization problem.
7. Explain in detail about monitors.
8. Explain in detail about semaphores, their usage, implementation given to avoid busy
waiting and binary semaphores.
9. Define race condition. Explain how a race condition can be avoided in critical section
problem. Formulate a solution to the dining philosopher problem so that no race condition
arises.
Unit – III
Unit Contents: CPU Scheduling and Deadlock Management
CPU Scheduling: Scheduling Criteria - Scheduling Algorithms. Deadlocks: Deadlock
Characterization - Methods for Handling Deadlocks - Deadlock Prevention - Deadlock
Avoidance - Deadlock Detection - Recovery from Deadlock. Case Study: Real Time
CPU scheduling
Part-A
What is CPU Scheduling?
CPU scheduling is the process of determining which process in the ready queue is to be allocated
the CPU for execution.
Name two types of CPU scheduling.
Preemptive scheduling and Non-preemptive scheduling.
What is the difference between Preemptive and Non-Preemptive scheduling?
Preemptive Scheduling:
• The CPU is allocated to a process for a specific time slice.
• A process can be interrupted while it is executing.
• Waiting and response times are lower.
• Example: Round Robin
Non-Preemptive Scheduling:
• The CPU is allocated to a process until it terminates (or blocks).
• A process cannot be interrupted while it is executing.
• Waiting and response times are higher.
• Example: FCFS
List any four CPU scheduling algorithms.
First-Come First-Served (FCFS) Scheduling Algorithm
Shortest Job First (SJF) Scheduling Algorithm
Priority Scheduling Algorithm
Round Robin (RR) Scheduling Algorithm
What are the disadvantages of FCFS scheduling algorithm?
It suffers from the "convoy effect," where short processes get stuck waiting for long processes to
complete.
Define waiting time and turnaround time in CPU scheduling.
Waiting time is the time a process spends in the ready queue waiting for the CPU, while
turnaround time is the total time from a process's submission to its completion.
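A small illustrative Python sketch (hypothetical burst times, all processes arriving at time 0, served in FCFS order) shows how the two quantities are computed:

```python
bursts = [10, 5, 2]      # hypothetical CPU bursts, FCFS order, all arrive at t = 0

waiting, t = [], 0
for b in bursts:
    waiting.append(t)    # time spent in the ready queue before first getting the CPU
    t += b               # the process then runs to completion

# turnaround = waiting + burst when every process arrives at time 0
turnaround = [w + b for w, b in zip(waiting, bursts)]
# waiting = [0, 10, 15], turnaround = [10, 15, 17]
```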
What is the objective of the Shortest Remaining Time First (SRTF) scheduling algorithm?
To minimize average waiting time by selecting the process with the shortest remaining time for
execution.
List out the criteria for CPU Scheduling.
CPU utilization, throughput, waiting time, turnaround time and response time.
Define deadlock in an operating system.
A deadlock occurs when a set of processes is in a state where each process is waiting for a resource
held by another process, leading to indefinite blocking.
List the four necessary conditions for deadlock.
Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait.
What is the difference between deadlock prevention and deadlock avoidance?
Deadlock prevention ensures at least one of the necessary conditions for deadlock is never satisfied,
while deadlock avoidance dynamically checks the resource-allocation state to avoid unsafe states.
What is a safe state in deadlock management?
A state is safe if the system can allocate resources to all processes in some order without leading to a
deadlock.
Name two methods of handling deadlocks.
Deadlock prevention and deadlock detection.
What is the purpose of the Banker's Algorithm?
The Banker's Algorithm is used to avoid deadlocks by checking resource allocation requests against
system safety.
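The safety check at the heart of the Banker's Algorithm can be sketched in Python (the five-process numbers below are only an illustrative example):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: can every process finish in some order?"""
    work = available[:]                  # resources currently free
    finish = [False] * len(allocation)
    order = []                           # safe sequence, if one exists
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            # A process can finish if its remaining need fits in work.
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                # When it finishes, it returns everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progress = True
    return all(finish), order

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, order = is_safe(available, allocation, need)
```

For these numbers the state is safe; the scan above finds the sequence P1, P3, P4, P0, P2. A resource request is granted only if pretending to grant it still leaves the system in a safe state.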
What is deadlock recovery?
Deadlock recovery is the process of regaining normal system operation after a deadlock, often by
terminating processes or preempting resources.
Part-B
3-marks
1. Outline the two types of programs: (a) I/O-bound and (b) CPU-bound. Which is more likely to
have voluntary context switches, and which is more likely to have nonvoluntary context switches?
Explain your answer.
Ans:
a. I/O-bound programs: more likely to have voluntary context switches, since they frequently
block on I/O requests and give up the CPU on their own.
b. CPU-bound programs: more likely to have nonvoluntary context switches, since they rarely
block and are typically preempted when their time quantum expires.
2. Interpret on a system implementing multilevel queue scheduling. What strategy can a computer
user employ to maximize the amount of CPU time allocated to the user’s process?
3. Summarize which of the following scheduling algorithms could result in starvation:
a. First-come, first-served
b. Shortest Job first
c. Round Robin
d. Priority
4. Explain how the following scheduling algorithms discriminate either in favour of or against
short processes:
a. FCFS
b. RR
c. Multilevel feedback queues
5. Explain Real-Time Scheduling.
Ans:
a. Types:
i. Hard Real-Time: Missing a deadline is catastrophic (e.g., medical devices, avionics).
ii. Soft Real-Time: Missing a deadline is undesirable but not critical (e.g., video streaming).
b. Key Features:
- Priority-based scheduling, where higher priority is given to tasks with stricter deadlines.
- Algorithms include Rate-Monotonic Scheduling (RMS) for static priorities and Earliest Deadline First
(EDF) for dynamic priorities.
c. Challenges:
- Ensuring predictability and meeting deadlines under varying workloads.
- Balancing resource utilization while avoiding task starvation or overload.
6. Can a system detect that some of its threads are starving? If you answer “yes,” explain how it
can. If you answer “no,” explain how the system can deal with the starvation problem.
Yes, a system can detect thread starvation, for example by tracking how long each thread has
been waiting for the CPU or a resource; unusually long waits indicate starvation.
7. Differentiate between starvation and deadlock.
8. Discuss the ways to recover from deadlock.
9. Summarize how resource trajectories can be helpful in avoiding deadlock.
10. Discuss the Resource Allocation Graph (RAG) with respect to deadlock.
5-marks
1. Explain the concept of multilevel queue scheduling.
Multilevel queue scheduling divides the ready queue into multiple queues based on process
priority or type (e.g., system processes, interactive processes). Each queue can have its own
scheduling algorithm. Processes do not move between queues.
Example:
a. Queue 1: System processes (FCFS)
b. Queue 2: Interactive processes (RR)
c. Queue 3: Batch processes (SJF)
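The dispatch rule above can be sketched in Python (queue contents are hypothetical; a lower level number means higher priority, and processes never migrate between queues):

```python
from collections import deque

queues = {
    1: deque(["sys_a"]),           # Queue 1: system processes (FCFS)
    2: deque(["ui_a", "ui_b"]),    # Queue 2: interactive processes (RR)
    3: deque(["batch_a"]),         # Queue 3: batch processes (SJF)
}

def pick_next(queues):
    """Always dispatch from the highest-priority non-empty queue."""
    for level in sorted(queues):
        if queues[level]:
            return queues[level].popleft()
    return None                    # nothing ready

order = [pick_next(queues) for _ in range(4)]
# system work runs first, then interactive, then batch
```

Because lower queues run only when every higher queue is empty, long-lived batch work can starve unless aging or time-slicing across queues is added.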
2. Consider following processes with length of CPU burst time in milliseconds.
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
a. All processes arrived in order P1, P2, P3, P4 and P5 at time zero.
b. 1) Draw Gantt charts illustrating the execution of these processes for SJF, non-preemptive
priority (a smaller priority number implies a higher priority) and Round
Robin (quantum = 1).
c. 2) Calculate the turnaround time of each process for the scheduling algorithms
mentioned in part (1).
d. 3) Calculate the waiting time of each process for the scheduling algorithms listed in part (1).
3. What are the various criteria for a good process scheduling algorithm? Explain any two
Preemptive scheduling algorithms in brief.
4. Consider following processes with length of CPU burst time in milliseconds.
Process Arrival Time Burst Time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
a. Draw Gantt charts illustrating the execution of these processes for FCFS, non-preemptive
SJF and Round Robin (quantum = 2 ms).
b. Calculate turnaround time for each process for scheduling algorithms mentioned
in part (1)
c. Calculate waiting time for each scheduling algorithms listed in part (1)
5. Consider the following set of processes, with the length of the CPU burst and arrival time given in
milliseconds.
Process   Burst Time (B.T)   Arrival Time (A.T)
P1        8                  0.00
P2        4                  1.001
P3        9                  2.001
P4        5                  3.001
P5        3                  4.001
a. Draw four Gantt charts that illustrate the execution of these processes using the
following scheduling algorithms: FCFS, SJF and RR (quantum=2ms) scheduling.
Calculate waiting time and turnaround time of each process for each of the scheduling
algorithms and find the average waiting time and average turnaround time.
6. Explain the Banker's Algorithm implementation with an example.
7. Consider the following snapshot of a system:
Tasks    Allocation    Max        Available
         A B C D       A B C D    A B C D
T0       0 0 1 2       0 0 1 2    1 5 2 0
T1       1 0 0 0       1 7 5 0
T2       1 3 5 4       2 3 5 6
T3       0 6 3 2       0 6 5 2
T4       0 0 1 4       0 6 5 6
Answer the following questions using the banker’s algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from thread T1 arrives for (0,4,2,0), can the request be granted
immediately?
8. Suppose that a system is in an unsafe state. Illustrate that it is possible for the threads to
complete their execution without entering a deadlocked state.
9. Explain deadlock avoidance and Bankers algorithm in detail.
Consider the following system snapshot using data structures in the banker’s algorithm, with
resources A, B, C and D and process P0 to P4.
         Max        Allocation    Need       Available
         A B C D    A B C D       A B C D    A B C D
P0       6 0 1 2    4 0 0 1                  3 2 1 1
P1       1 7 5 0    1 1 0 0
P2       2 3 5 6    1 2 5 4
P3       1 6 5 3    0 6 3 3
P4       1 6 5 6    0 2 1 2
Using banker’s algorithm, Answer the following questions:
i) How many resources of types A, B, C and D are there in total?
ii) What are the contents of the Need matrix?
iii) Is the system in a safe state? If yes, what is the sequence of
process execution?
Unit – IV
Unit Contents: Memory Management (First Half)
Main Memory: Swapping - Contiguous Memory Allocation, Segmentation, Paging - Structure
of the Page Table
Part-A
1. Define Swapping. What is its purpose?
2. What are the advantage and disadvantage of Contiguous and Non-Contiguous memory
allocation?
3. What is Address Binding? How are instructions and data bound to memory?
4. Define memory management Unit (MMU).
5. Differentiate Fixed and Variable Partitioning.
6. Differentiate Internal and External Fragmentation.
7. Why is a valid/invalid bit used in a page table entry?
8. What is the role of a Translation Lookaside Buffer (TLB) in paging?
9. What is the role of the base and limit registers in segmentation?
10. How does segmentation help in modular program design?
11. What is the purpose of a page table in paging?
12. Explain the difference between a page and a frame
13. What information is stored in a page table entry (PTE)?
Part-B
3-marks
1. Differentiate Physical addressing and logical addressing.
2. Discuss the following.
a. Segment Table
b. Segment Table Base Register
c. Segment Table Limit Register
3. Illustrate segmentation hardware with appropriate diagram
4. Explain the concept of paging in memory management and its main advantage.
5. Summarize the key components of a page table, and what is their purpose?
6. Discuss a multilevel page table, and why is it used?
5-marks
1. A system has a 32-bit logical address space and a page size of 4 KB. Calculate the total
size of the page table if each page table entry (PTE) requires 4 bytes.
Given:
1. Logical address space = 2^32 bytes
2. Page size = 4 KB = 2^12 bytes
3. Each page table entry (PTE) = 4 bytes
Sol:
Step 1: Calculate the number of pages in the logical address space.
The total number of pages is given by:
Number of pages = Logical address space / Page size = 2^32 / 2^12 = 2^20 pages
Step 2: Calculate the size of the page table.
Each page requires one page table entry (PTE), and each PTE is 4 bytes. The total size of the page table
is:
Page table size = Number of pages × Size of each PTE = 2^20 × 4 bytes = 4 MB
Final Answer:
The total size of the page table is 4 MB.
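The same calculation as a short Python sketch, one line per step:

```python
logical_bits = 32        # 2^32-byte logical address space
page_offset_bits = 12    # 4 KB = 2^12-byte pages
pte_bytes = 4            # size of each page table entry

num_pages = 2 ** (logical_bits - page_offset_bits)   # 2^20 pages
table_bytes = num_pages * pte_bytes                  # 2^20 * 4 bytes = 4 MB
```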
2. Explain how segmentation works in memory management, including the role of the
segment table and the base and limit registers?
Consider a system where both paging and segmentation are used. The system uses a segment table
for logical division and a page table for each segment. If segment 1 has 3 pages and segment 2 has
2 pages, how will the system manage memory access for a logical address in segment 1 with a page
number of 2? Explain it.
Sol.:
a. The segment table maps the segment to a starting physical frame.
b. For segment 1, the page table will contain entries for the 3 pages. For a logical address
in segment 1 with a page number of 2, the corresponding physical frame will be found
in the page table of segment 1.
c. The physical address is calculated by combining the frame number from the page table
with the offset in the page.
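Steps a-c can be sketched in Python (the frame numbers are hypothetical; 4 KB pages are assumed, and each segment carries its own page table):

```python
PAGE_SIZE = 4096   # assumed 4 KB pages

# Hypothetical per-segment page tables: segment number -> list of frame numbers.
# Segment 1 has 3 pages, segment 2 has 2 pages, as in the problem statement.
page_tables = {1: [7, 3, 9], 2: [5, 2]}

def translate(segment, page, offset):
    frames = page_tables[segment]            # segment table lookup
    if page >= len(frames) or offset >= PAGE_SIZE:
        raise MemoryError("segmentation or page fault")
    # frame number from the segment's page table, combined with the offset
    return frames[page] * PAGE_SIZE + offset

# Logical address in segment 1 with page number 2: page table entry 2 -> frame 9
phys = translate(1, 2, 100)
```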
3. Explain the concept and techniques of Contiguous Memory Allocation.
4. Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, 600 KB (in order), how
would the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB,
112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory?
Explain it.
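A Python sketch of the three placement strategies (one common interpretation: a chosen hole shrinks by the size of the process placed in it, and a process that fits nowhere must wait):

```python
def allocate(partitions, requests, strategy):
    """Place each request into a free partition; the chosen hole shrinks."""
    holes = partitions[:]                  # remaining free space per partition
    placement = []                         # partition index used, or None (waits)
    for req in requests:
        fits = [i for i, h in enumerate(holes) if h >= req]
        if not fits:
            placement.append(None)
            continue
        if strategy == "first":
            i = fits[0]                    # first hole large enough
        elif strategy == "best":
            i = min(fits, key=lambda j: holes[j])   # smallest adequate hole
        else:                              # "worst"
            i = max(fits, key=lambda j: holes[j])   # largest hole
        holes[i] -= req
        placement.append(i)
    return placement

partitions = [100, 500, 200, 300, 600]     # KB, in order
requests = [212, 417, 112, 426]            # KB, in order
first = allocate(partitions, requests, "first")
best = allocate(partitions, requests, "best")
worst = allocate(partitions, requests, "worst")
```

Under this interpretation only best fit places all four processes (212 KB in 300 KB, 417 KB in 500 KB, 112 KB in 200 KB, 426 KB in 600 KB); first fit and worst fit each leave the 426 KB process waiting.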
5. Define Fragmentation. Explain its types.
---- All the Best----