Module - 05
1. Explain different file access methods and allocation methods (contiguous, linked,
indexed).
Directory Structures:
● Tree-Structured Directories: A hierarchical directory structure with a root directory and
subdirectories forming a tree. Simple to implement but doesn't allow sharing of files or
directories.
● Acyclic Graph Directories: Allows sharing of files and directories by allowing a directory to
have multiple parent directories. More complex to implement but provides greater flexibility.
3. Explain different disk scheduling algorithms (FCFS, SSTF, SCAN, C-SCAN, LOOK)
and calculate the total head movement for a given set of disk requests.
Disk Scheduling Algorithms: These algorithms determine the order in which disk requests are
serviced to minimize disk head movement and improve performance.
● FCFS (First-Come, First-Served): Requests are serviced in the order they arrive. Simple but
can lead to long head movements.
● SSTF (Shortest Seek Time First): The request with the shortest seek time from the current
head position is serviced next. Reduces head movement but can lead to starvation.
● SCAN (Elevator Algorithm): The disk head moves in one direction, servicing requests along
the way. When it reaches the end of the disk, it reverses direction.
● C-SCAN (Circular SCAN): Similar to SCAN, but when the head reaches the end of the disk,
it jumps back to the beginning without servicing any requests on the return trip.
● LOOK: Similar to SCAN, but the head only moves as far as the last request in each direction
before reversing.
4. With a neat diagram, describe tree-structured directories and acyclic graph directories.
Disk requests: 98, 183, 37, 122, 14, 124, 65, 67 Starting head position: 53
● FCFS: 53->98->183->37->122->14->124->65->67. Total head movement:
(45+85+146+85+108+110+59+2) = 640
● SSTF: 53->65->67->37->14->98->122->124->183. Total head movement:
(12+2+30+23+84+24+2+59) = 236
● SCAN: (Assuming the head moves towards higher cylinder numbers first)
53->65->67->98->122->124->183->199(end)->37->14. Total head movement:
(12+2+31+24+2+59+16+162+23) = 331
● C-SCAN: 53->65->67->98->122->124->183->199->0->14->37. Total head movement:
(12+2+31+24+2+59+16+199+14+23) = 382
● LOOK: 53->65->67->98->122->124->183->37->14. Total head movement:
(12+2+31+24+2+59+146+23) = 299
Current head: 143 Previous request: 125 Pending requests: 86, 1470, 913, 1774, 948, 1509, 1022,
1750, 130
● FCFS: 143->86->1470->913->1774->948->1509->1022->1750->130. Total:
57+1384+557+861+826+561+487+728+1620 = 7081
● SSTF: 143->130->86->913->948->1022->1470->1509->1750->1774. Total:
13+44+827+35+74+448+39+241+24 = 1745
● SCAN: (Moving towards higher cylinder numbers; disk cylinders 0-4999)
143->1470->1509->1750->1774->4999(end)->1022->948->913->130->86. Total:
1327+39+241+24+3225+3977+74+35+783+44 = 9769
● LOOK: (Moving towards higher cylinder numbers)
143->1470->1509->1750->1774->1022->948->913->130->86. Total:
1327+39+241+24+752+74+35+783+44 = 3319
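The totals above can be cross-checked mechanically: total head movement is just the sum of absolute jumps between consecutive cylinders in the service order. A minimal sketch in C (function name is illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Total head movement for a given service order: the sum of absolute
 * differences between consecutive head positions, starting at `start`. */
int total_head_movement(int start, const int order[], int n) {
    int total = 0, pos = start;
    for (int i = 0; i < n; i++) {
        total += abs(order[i] - pos);  /* seek distance for this request */
        pos = order[i];
    }
    return total;
}
```

Feeding it the FCFS order for the first request set (starting at cylinder 53) reproduces the 640-cylinder total, and the SSTF order gives 236.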
7. List the common file types along with their extensions and functions.
Page Replacement Algorithms: These algorithms decide which page to replace when a page
fault occurs and no free frames are available.
● LRU (Least Recently Used): Replaces the page that has not been used for the longest time.
● FIFO (First-In, First-Out): Replaces the page that has been in memory for the longest time.
● Optimal (OPT): Replaces the page that will not be used for the longest time in the future
(ideal but not practically implementable).
2. Explain different memory allocation algorithms (First-fit, Best-fit, Worst-fit) and discuss
internal and external fragmentation.
Memory Allocation Algorithms:
● First-fit: Allocate the first hole that is big enough. Fast, since the search stops as soon as a suitable hole is found.
● Best-fit: Allocate the smallest hole that is big enough. Produces the smallest leftover hole, but requires searching the entire free list.
● Worst-fit: Allocate the largest hole. Leaves the largest leftover hole, but generally performs worst in practice.
Fragmentation:
● Internal Fragmentation: Occurs within a partition when the allocated memory is larger than
the requested memory. This happens in fixed partitioning schemes.
● External Fragmentation: Occurs when there is enough total free memory to satisfy a request,
but the free memory is not contiguous. This happens in variable partitioning schemes.
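The first-fit and best-fit strategies named in the question can be sketched over a simple free-hole list. This is a minimal illustration (the `Hole` type and function names are assumptions, not a real allocator); note how first-fit stops at the first hole that fits, while best-fit scans for the smallest adequate one, and how a failed search despite sufficient total free space is exactly external fragmentation:

```c
#include <assert.h>

typedef struct { int start; int size; } Hole;  /* one free region */

/* First-fit: take the first hole large enough; shrink it in place.
 * Returns the hole index, or -1 if no single hole fits. */
int first_fit(Hole holes[], int n, int request) {
    for (int i = 0; i < n; i++) {
        if (holes[i].size >= request) {
            holes[i].start += request;   /* allocate from the front */
            holes[i].size  -= request;   /* leftover stays as a smaller hole */
            return i;
        }
    }
    return -1;  /* external fragmentation: total free space may still exceed request */
}

/* Best-fit: choose the smallest hole that is still large enough. */
int best_fit(Hole holes[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request &&
            (best == -1 || holes[i].size < holes[best].size))
            best = i;
    if (best != -1) { holes[best].start += request; holes[best].size -= request; }
    return best;
}
```

Worst-fit is the same loop as best-fit with the comparison reversed (pick the largest hole).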
The TLB is a cache that stores recent translations from virtual to physical addresses. It speeds up
memory access by avoiding the need to access the page table in main memory for every memory
access.
6. Explain (i) inverted page tables with a diagram (ii) Thrashing (iii) Demand paging.
● (i) Inverted Page Tables: Instead of having a page table for each process, there is one page
table for the entire physical memory. Each entry in the table corresponds to a physical frame
and stores the virtual address of the page residing in that frame. (Diagram: Show the inverted
page table with entries indexed by frame number and containing process ID and page number.)
● (ii) Thrashing: Occurs when a process does not have enough frames allocated to it, leading to
excessive page faults. The process spends most of its time swapping pages in and out of
memory, resulting in very low CPU utilization.
● (iii) Demand Paging: Loads pages into memory only when they are needed (on demand).
This reduces memory usage and I/O overhead.
(Diagram: Show the CPU, MMU, TLB, page table, and secondary storage.)
1. The CPU generates a logical address.
2. The MMU checks the TLB for the corresponding physical address.
3. If there's a TLB miss, the MMU accesses the page table in main memory.
4. If the page is not in memory (page fault), the MMU traps to the operating system.
5. The OS finds a free frame or uses a page replacement algorithm to select a victim frame.
6. The OS reads the required page from secondary storage into the frame.
7. The page table and TLB are updated.
8. The faulting instruction is restarted.
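The steps above can be condensed into a toy translation routine. This is a sketch only: the page size, table sizes, and the trivial FIFO victim choice are all assumptions for illustration, and a real MMU/OS does the TLB lookup in hardware and the fault handling across a trap boundary:

```c
#include <assert.h>
#include <string.h>

#define PAGES      16   /* pages per process (assumed) */
#define FRAMES      8   /* physical frames (assumed) */
#define PAGE_SIZE 256   /* bytes per page (assumed) */

int page_table[PAGES];  /* page -> frame, -1 if not resident */
int next_victim = 0;    /* trivial FIFO victim pointer */
int page_faults = 0;

void pt_init(void) { memset(page_table, -1, sizeof page_table); }

/* Translate a logical address; on a page fault, pick a victim frame,
 * evict its current page, and map the needed page (steps 4-7 above). */
int translate(int logical) {
    int page   = logical / PAGE_SIZE;
    int offset = logical % PAGE_SIZE;
    if (page_table[page] == -1) {            /* page fault: trap to OS */
        page_faults++;
        int frame = next_victim++ % FRAMES;  /* select a victim frame */
        for (int p = 0; p < PAGES; p++)      /* evict whoever held it */
            if (page_table[p] == frame) page_table[p] = -1;
        page_table[page] = frame;            /* "read in" page, update table */
    }
    return page_table[page] * PAGE_SIZE + offset;  /* physical address */
}
```

A second access to the same page translates without a fault, mirroring step 8's restart of the faulting instruction succeeding.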
8. Explain the different page replacement algorithms. (This is the same as covered under
question 1; refer back to that answer.)
9. Consider the following page reference string 2,3,2,1,5,2,4,5,3,2,5,2. How many page faults
would occur in case of a) LRU b) FIFO c) Optimal page replacement algorithms assuming
3 frames. Note that initially all frames are empty.
a) LRU:
Reference  Frame 1  Frame 2  Frame 3  Page Fault?
2          2        -        -        Yes
3          2        3        -        Yes
2          2        3        -        No
1          2        3        1        Yes
5          2        5        1        Yes
2          2        5        1        No
4          2        5        4        Yes
5          2        5        4        No
3          3        5        4        Yes
2          3        5        2        Yes
5          3        5        2        No
2          3        5        2        No
Total LRU page faults = 7
b) FIFO:
Reference  Frame 1  Frame 2  Frame 3  Page Fault?
2          2        -        -        Yes
3          2        3        -        Yes
2          2        3        -        No
1          2        3        1        Yes
5          5        3        1        Yes
2          5        2        1        Yes
4          5        2        4        Yes
5          5        2        4        No
3          3        2        4        Yes
2          3        2        4        No
5          3        5        4        Yes
2          3        5        2        Yes
Total FIFO page faults = 9
c) Optimal (OPT):
Reference  Frame 1  Frame 2  Frame 3  Page Fault?
2          2        -        -        Yes
3          2        3        -        Yes
2          2        3        -        No
1          2        3        1        Yes
5          2        3        5        Yes
2          2        3        5        No
4          4        3        5        Yes
5          4        3        5        No
3          4        3        5        No
2          2        3        5        Yes
5          2        3        5        No
2          2        3        5        No
Total Optimal page faults = 6
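Fault counts like these are easy to get wrong by hand, so it helps to cross-check them with a small simulator. A minimal sketch in C (function names are illustrative; it assumes at most 8 frames):

```c
#include <assert.h>

/* Count page faults under FIFO replacement with `nframes` frames (<= 8). */
int fifo_faults(const int refs[], int n, int nframes) {
    int frames[8], head = 0, used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < nframes) frames[used++] = refs[i];       /* free frame */
            else { frames[head] = refs[i]; head = (head + 1) % nframes; }
        }
    }
    return faults;
}

/* Count page faults under LRU: track the last-use time of each frame. */
int lru_faults(const int refs[], int n, int nframes) {
    int frames[8], last[8], used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = j; break; }
        if (hit >= 0) { last[hit] = i; continue; }              /* refresh recency */
        faults++;
        if (used < nframes) { frames[used] = refs[i]; last[used] = i; used++; }
        else {
            int lru = 0;                                        /* oldest last use */
            for (int j = 1; j < nframes; j++) if (last[j] < last[lru]) lru = j;
            frames[lru] = refs[i]; last[lru] = i;
        }
    }
    return faults;
}
```

For the reference string 2,3,2,1,5,2,4,5,3,2,5,2 with 3 frames, the simulator confirms 9 faults for FIFO and 7 for LRU.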
Module - 03
1. Explain semaphores and their usage in solving the dining philosophers and bounded
buffer problems.
A semaphore is an integer variable that, apart from initialization, is accessed only through two
atomic operations:
● wait() (or P()): Decrements the semaphore value. If the value becomes negative, the process
executing wait() is blocked until the value becomes non-negative.
● signal() (or V()): Increments the semaphore value. If there are processes blocked on the
semaphore, one of them is unblocked.
Five philosophers are sitting at a circular table with a bowl of spaghetti in front of each and a
single fork between each pair of philosophers. To eat, a philosopher needs two forks – the one on
their left and the one on their right. The problem is to design a protocol that allows the
philosophers to eat without deadlock or starvation.
Semaphore Solution:
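A straightforward attempt guards each fork with a binary semaphore and has each philosopher take the left fork, then the right. This is a sketch using POSIX semaphores (names are illustrative); in a full program each philosopher would be a thread looping think / take_forks / eat / put_forks:

```c
#include <assert.h>
#include <semaphore.h>

#define N 5
sem_t fork_sem[N];   /* one binary semaphore per fork, each initialized to 1 */

/* Naive protocol: left fork first, then right. */
void take_forks(int i) {
    sem_wait(&fork_sem[i]);            /* pick up left fork  */
    sem_wait(&fork_sem[(i + 1) % N]);  /* pick up right fork */
}

void put_forks(int i) {
    sem_post(&fork_sem[i]);            /* release left fork  */
    sem_post(&fork_sem[(i + 1) % N]);  /* release right fork */
}
```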
This solution can lead to deadlock if all philosophers pick up their left forks simultaneously. A
better solution is to allow only four philosophers to be at the table at the same time, or to use an
asymmetry in picking up forks (e.g., philosopher 0 picks up the right fork first).
A producer process generates items and places them in a buffer. A consumer process takes items
from the buffer. The buffer has a fixed size. The problem is to synchronize the producer and
consumer so that the producer doesn't try to add to a full buffer and the consumer doesn't try to
take from an empty buffer.
Semaphore Solution:
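The classic solution uses three semaphores: a counting semaphore of empty slots (initialized to the buffer size), a counting semaphore of full slots (initialized to 0), and a binary mutex protecting the buffer itself. A sketch with POSIX semaphores (buffer size and names are illustrative):

```c
#include <assert.h>
#include <semaphore.h>

#define BUF_SIZE 8
int buffer[BUF_SIZE];
int in = 0, out = 0;          /* next slot to fill / next slot to empty */
sem_t empty_slots;            /* counts free slots; initialized to BUF_SIZE */
sem_t full_slots;             /* counts filled slots; initialized to 0 */
sem_t mutex;                  /* protects buffer indices; initialized to 1 */

void produce(int item) {
    sem_wait(&empty_slots);   /* block if the buffer is full */
    sem_wait(&mutex);
    buffer[in] = item;
    in = (in + 1) % BUF_SIZE;
    sem_post(&mutex);
    sem_post(&full_slots);    /* wake a waiting consumer */
}

int consume(void) {
    sem_wait(&full_slots);    /* block if the buffer is empty */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % BUF_SIZE;
    sem_post(&mutex);
    sem_post(&empty_slots);   /* wake a waiting producer */
    return item;
}
```

The ordering matters: waiting on `mutex` before `empty_slots`/`full_slots` could deadlock, since a blocked producer would hold the mutex the consumer needs.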
2. What are deadlocks? Explain the necessary conditions for deadlock occurrence. Discuss
deadlock prevention methods, deadlock avoidance (Banker's algorithm), and deadlock
detection and recovery.
A deadlock is a situation where two or more processes are blocked indefinitely, waiting for each
other to release resources.
Deadlock Prevention: Prevent one or more of the Coffman conditions from occurring.
Examples:
● Eliminate Mutual Exclusion: Not always possible (e.g., printers).
● Eliminate Hold and Wait: Require processes to request all their resources at once.
● Allow Preemption: Allow the OS to take resources away from a process.
● Eliminate Circular Wait: Impose a total ordering on resources and require processes to
request resources in increasing order.
Deadlock Avoidance (Banker's Algorithm): The OS has knowledge of the maximum resource
requirements of each process. It uses this information to make decisions about resource
allocation that avoid entering a deadlock state.
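The heart of the Banker's algorithm is its safety check: can all processes finish in some order, each first taking what it still needs (Need = Max - Allocation) from the available pool and then releasing everything it holds? A minimal sketch in C, fixed at 5 processes and 3 resource types for illustration:

```c
#include <assert.h>
#include <string.h>

#define NPROC 5
#define NRES  3

/* Returns 1 if the state (Allocation, Need, Available) is safe. */
int is_safe(int alloc[NPROC][NRES], int need[NPROC][NRES], int avail[NRES]) {
    int work[NRES], finish[NPROC] = {0};
    memcpy(work, avail, sizeof work);
    for (int done = 0; done < NPROC; ) {
        int progressed = 0;
        for (int p = 0; p < NPROC; p++) {
            if (finish[p]) continue;
            int ok = 1;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { ok = 0; break; }
            if (ok) {   /* p can run to completion and release its allocation */
                for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];
                finish[p] = 1; done++; progressed = 1;
            }
        }
        if (!progressed) return 0;   /* unsafe: remaining processes may deadlock */
    }
    return 1;
}
```

A resource request is granted only if tentatively applying it still leaves the system in a safe state. The test below uses a widely taught 5-process, 3-resource example, which is safe (e.g., via the sequence P1, P3, P0, P2, P4).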
Deadlock Detection and Recovery: Allow deadlocks to occur and then detect and recover from
them.
● Detection: Use algorithms to detect cycles in the resource allocation graph.
● Recovery:
○ Process Termination: Abort one or more deadlocked processes.
○ Resource Preemption: Take resources away from one or more deadlocked processes.
A race condition occurs when multiple processes or threads access and manipulate shared data
concurrently, and the final outcome depends on the particular order in which the accesses take
place. This can lead to unpredictable and incorrect results.
4. Define the term critical section. What are the requirements for critical section problems?
A critical section is a segment of code in which a process accesses shared resources (variables, files, etc.) that must not be accessed by more than one process at a time. Any solution to the critical section problem must satisfy three requirements:
● Mutual Exclusion: At most one process may execute in its critical section at a time.
● Progress: If no process is in its critical section, the choice of which process enters next cannot be postponed indefinitely by processes in their remainder sections.
● Bounded Waiting: There is a bound on how many times other processes may enter their critical sections after a process has requested entry and before that request is granted.
The readers/writers problem involves multiple processes that want to access a shared resource
(e.g., a file). Some processes are readers (only read the data), and some are writers (modify the
data). Multiple readers can access the resource concurrently, but only one writer can access it at a
time, and no readers can be present while a writer is writing.
Semaphore Solution:
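The classic "first readers-writers" solution uses a semaphore for writer exclusion plus a counter of active readers protected by its own mutex: the first reader locks writers out, and the last reader lets them back in. A sketch with POSIX semaphores (names are illustrative):

```c
#include <assert.h>
#include <semaphore.h>

sem_t rw_mutex;      /* writer exclusion; initialized to 1 */
sem_t count_mutex;   /* protects read_count; initialized to 1 */
int read_count = 0;  /* number of readers currently reading */

void start_read(void) {
    sem_wait(&count_mutex);
    if (++read_count == 1)        /* first reader locks out writers */
        sem_wait(&rw_mutex);
    sem_post(&count_mutex);
}

void end_read(void) {
    sem_wait(&count_mutex);
    if (--read_count == 0)        /* last reader lets writers in */
        sem_post(&rw_mutex);
    sem_post(&count_mutex);
}

void start_write(void) { sem_wait(&rw_mutex); }
void end_write(void)   { sem_post(&rw_mutex); }
```

Note this variant favors readers: a steady stream of readers can starve a waiting writer, which is why writer-preference variants also exist.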
Peterson's algorithm is a classic software-based solution to the critical section problem for two
processes.
#include <stdbool.h>

bool flag[2];  // flag[i] = true means process i wants to enter the critical section
int turn;      // indicates whose turn it is to enter the critical section

void process_i(int i) {           // i is either 0 or 1
    int j = 1 - i;                // j is the other process
    flag[i] = true;               // announce intent to enter
    turn = j;                     // defer to the other process
    while (flag[j] && turn == j)
        ;                         // busy wait
    // --- critical section ---
    flag[i] = false;              // exit: allow the other process in
    // --- remainder section ---
}
A resource allocation graph is a directed graph used to model resource allocation in a system. It
consists of:
● Vertices: Represent processes (circles) and resources (rectangles).
● Edges:
○ Request edge: Directed edge from a process to a resource, indicating that the process is
requesting that resource.
○ Assignment edge: Directed edge from a resource to a process, indicating that the resource
has been allocated to that process.
A cycle in the resource allocation graph indicates a potential deadlock: if every resource has only a single instance, a cycle implies a deadlock; with multiple instances per resource, a cycle is necessary but not sufficient for deadlock.
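Detecting a cycle in such a graph is a standard depth-first search for a back edge. A minimal sketch in C over an adjacency matrix, where vertices stand for both processes and resources (the encoding and names are illustrative):

```c
#include <assert.h>

#define MAXV 16

/* DFS helper: state 0 = unvisited, 1 = on current path, 2 = done. */
static int dfs(int adj[MAXV][MAXV], int n, int v, int state[]) {
    state[v] = 1;                       /* v is on the current DFS path */
    for (int w = 0; w < n; w++) {
        if (!adj[v][w]) continue;
        if (state[w] == 1) return 1;    /* back edge: cycle found */
        if (state[w] == 0 && dfs(adj, n, w, state)) return 1;
    }
    state[v] = 2;                       /* fully explored */
    return 0;
}

/* Returns 1 if the directed graph on vertices 0..n-1 contains a cycle. */
int has_cycle(int adj[MAXV][MAXV], int n) {
    int state[MAXV] = {0};
    for (int v = 0; v < n; v++)
        if (state[v] == 0 && dfs(adj, n, v, state)) return 1;
    return 0;
}
```

For example, encoding P1 -> R1 (request), R1 -> P2 (assignment), P2 -> R2, R2 -> P1 as vertices 0..3 yields a cycle, which with single-instance resources means deadlock.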
8. State the dining philosopher problem and give the solution for the same using monitors.
Monitor Solution:
A monitor is a high-level synchronization construct that provides mutual exclusion and condition
variables.
monitor DiningPhilosophers {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];
    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING) self[i].wait();
    }
    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);   // left neighbor may now be able to eat
        test((i + 1) % 5);   // right neighbor may now be able to eat
    }
    void test(int i) {
        if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY &&
            state[(i + 1) % 5] != EATING) {
            state[i] = EATING;
            self[i].signal();
        }
    }
}
Each philosopher calls pickup(i) before eating and putdown(i) afterwards; the monitor guarantees that no two neighbors eat at the same time, so the shared forks are never contested.
Module - 02
1. What is a process? Explain the different states of a process with a state diagram. Also,
explain the structure of the Process Control Block (PCB).
A process is a program in execution. It's a dynamic entity, unlike a program, which is a static set
of instructions.
State Diagram: New -> Ready (admitted), Ready -> Running (scheduler dispatch), Running -> Ready (interrupt/preemption), Running -> Blocked (waiting for I/O or an event), Blocked -> Ready (I/O or event completion), Running -> Terminated (exit).
The PCB is a data structure that stores information about each process. It's essential for the OS to
manage processes effectively. Key components of a PCB include:
● Process ID (PID): A unique identifier for the process.
● Process State: The current state of the process (new, ready, running, blocked, terminated).
● Program Counter (PC): The address of the next instruction to be executed.
● CPU Registers: The values of the CPU registers for the process.
● Memory Management Information: Information about the memory allocated to the process
(e.g., base and limit registers, page tables).
● I/O Status Information: Information about I/O devices allocated to the process, open files,
etc.
● Scheduling Information: Process priority, scheduling queue pointers, etc.
● Accounting Information: CPU time used, time limits, etc.
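The fields above map naturally onto a struct. A simplified, illustrative PCB in C (field sizes and names are assumptions; a real kernel structure such as Linux's task_struct carries far more state):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } ProcState;

typedef struct PCB {
    int        pid;              /* process ID */
    ProcState  state;            /* current process state */
    uint64_t   program_counter;  /* address of the next instruction */
    uint64_t   registers[16];    /* saved CPU register values (assumed 16) */
    uint64_t   base, limit;      /* memory-management info (base/limit model) */
    int        open_files[8];    /* I/O status: open file descriptors */
    int        priority;         /* scheduling information */
    uint64_t   cpu_time_used;    /* accounting information */
    struct PCB *next;            /* link for a scheduling queue */
} PCB;
```

On a context switch, the OS saves the running process's registers and program counter into its PCB and restores those of the next process from its PCB.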
2. Explain the different CPU scheduling algorithms and calculate the average waiting time
and average turnaround time for the given set of processes.
Process  Arrival Time  Burst Time
P1       0             4
P2       1             8
P3       2             5
P4       3             9
c) SJF (Shortest Job First) - Preemptive (Shortest Remaining Time First - SRTF):
● Gantt Chart: | P1(4) | P3(5) | P2(8) | P4(9) | (P1's remaining time stays shortest while it
runs, so no arrival ever preempts it)
● Waiting Times: P1=0, P2=8, P3=2, P4=14
● Turnaround Times: P1=4, P2=16, P3=7, P4=23
● Avg. Waiting Time: (0+8+2+14)/4 = 6
● Avg. Turnaround Time: (4+16+7+23)/4 = 12.5
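SRTF results are error-prone by hand, so it is worth verifying them by simulating one time unit at a time, always running the arrived process with the least remaining burst. A minimal sketch in C (fixed at 4 processes; the function name is illustrative):

```c
#include <assert.h>

#define NP 4

/* Simulate SRTF; returns the total waiting time across all processes.
 * Waiting time = completion time - arrival time - burst time. */
int srtf_total_wait(const int arrival[NP], const int burst[NP]) {
    int rem[NP], done = 0, t = 0, total_wait = 0;
    for (int i = 0; i < NP; i++) rem[i] = burst[i];
    while (done < NP) {
        int pick = -1;
        for (int i = 0; i < NP; i++)   /* arrived process with least remaining */
            if (arrival[i] <= t && rem[i] > 0 &&
                (pick == -1 || rem[i] < rem[pick]))
                pick = i;
        if (pick == -1) { t++; continue; }   /* CPU idle until next arrival */
        rem[pick]--; t++;
        if (rem[pick] == 0) {
            total_wait += t - arrival[pick] - burst[pick];
            done++;
        }
    }
    return total_wait;
}
```

For the four processes above (arrivals 0,1,2,3; bursts 4,8,5,9) the simulation gives a total waiting time of 24, i.e., an average of 6.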
d) Priority (Assuming lower number = higher priority. If priorities are not explicitly given,
this cannot be calculated.)
● Cannot be calculated without priority values.
3. Discuss (i) multilevel queue scheduling and multilevel feedback queue scheduling. (ii)
Non-preemptive and preemptive scheduling.
Multilevel Queue Scheduling:
Processes are divided into different queues based on their properties (e.g., interactive, batch,
system). Each queue has its own scheduling algorithm. For example:
● System processes: Highest priority, often scheduled with Round Robin.
● Interactive processes: High priority, often scheduled with Round Robin.
● Batch processes: Lower priority, often scheduled with FCFS.
Multilevel Feedback Queue Scheduling:
An improvement over multilevel queues. Processes can move between queues based on their
behavior. For example:
● A process that uses too much CPU time can be moved to a lower-priority queue.
● A process that waits too long in a lower-priority queue can be moved to a higher-priority
queue (aging).
Non-preemptive vs. Preemptive Scheduling:
In non-preemptive scheduling, a process keeps the CPU until it terminates or blocks; in
preemptive scheduling, the OS can take the CPU away (e.g., when a time quantum expires or a
higher-priority process arrives). The key differences are responsiveness and overhead:
preemptive scheduling has higher overhead due to context switching but provides better
responsiveness for interactive systems, while non-preemptive scheduling is simpler but less
responsive.
5. Explain the multithread model. What are the benefits of multithreaded programming?
Here's the analysis for processes P1-P5, all arriving at time 0, with the burst times and
priorities below (priority values inferred from the Priority Gantt chart; lower number =
higher priority):
Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2
1) Gantt Charts:
● FCFS: | P1(10) | P2(1) | P3(2) | P4(1) | P5(5) |
● SJF: | P2(1) | P4(1) | P3(2) | P5(5) | P1(10) |
● Priority: | P2(1) | P5(5) | P1(10) | P3(2) | P4(1) |
● Round Robin (Quantum=1): | P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 |
P5 | P1(5) |
2) Turnaround Times:
Algorithm    P1  P2  P3  P4  P5
FCFS         10  11  13  14  19
SJF          19  1   4   2   9
Priority     16  1   18  19  6
Round Robin  19  2   7   4   14
3) Waiting Times:
Algorithm    P1  P2  P3  P4  P5
FCFS         0   10  11  13  14
SJF          9   0   2   1   4
Priority     6   0   16  18  1
Round Robin  9   1   5   3   9
Module - 01
1. What is an operating system? Explain the different operating system services and discuss
the layered and microkernel approaches.
An operating system (OS) is system software that manages computer hardware and software
resources and provides common services for computer programs. It acts as an intermediary
between the user and the computer hardware.
Layered Approach: The OS is organized into layers, with each layer providing services to the
layer above it and using services from the layer below it. This simplifies design and
implementation but can reduce efficiency due to overhead between layers.
Microkernel Approach: The OS has a small kernel that provides only essential services (e.g.,
inter-process communication, basic memory management). Other services (e.g., file systems,
device drivers) are implemented as user-level processes. This improves modularity and
robustness but can also reduce performance due to increased inter-process communication.
3. Explain the multiprocessor system and differentiate between symmetric and asymmetric
multiprocessing.
A multiprocessor system uses multiple CPUs within a single computer system. This allows for
parallel processing and increased performance.
● Symmetric Multiprocessing (SMP): All CPUs are treated equally and can run any process.
They share the same memory and I/O bus.
● Asymmetric Multiprocessing (AMP): One CPU is designated as the master processor, and
the others are slave processors. The master processor assigns tasks to the slave processors.
4. What are the main purposes of an operating system? Explain the five major activities of
an operating system concerning process management.
The main purposes of an OS are:
● Convenience: Makes the computer easier to use.
● Efficiency: Manages resources efficiently.
● Ability to Evolve: Allows for the introduction of new features without disrupting existing
services.
The five major activities of an OS concerning process management are:
● Creating and deleting both user and system processes.
● Suspending and resuming processes.
● Providing mechanisms for process synchronization.
● Providing mechanisms for process communication.
● Providing mechanisms for deadlock handling.
5. What is the difference between hard real-time and soft real-time systems?
● Hard Real-Time: Missing a deadline results in a complete system failure. Strict time
constraints must be met without exception.
● Soft Real-Time: Missing a deadline results in degraded performance but not a complete
system failure. Time constraints are important but not absolutely critical.
7. What is a virtual machine? With a neat diagram, explain the working of a virtual
machine. What are the benefits of a virtual machine?
+------------------------+------------------------+
|     Applications 1     |     Applications 2     |
+------------------------+------------------------+
|       Guest OS 1       |       Guest OS 2       |
+------------------------+------------------------+
|    Virtual Hardware    |    Virtual Hardware    |
+------------------------+------------------------+
|      Virtual Machine Manager (Hypervisor)       |
+-------------------------------------------------+
|              Host Operating System              |
+-------------------------------------------------+
|        Physical Hardware (Host Machine)         |
+-------------------------------------------------+
Benefits of virtual machines:
● Isolation: Each VM is protected from the others; a crash in one guest does not affect the rest.
● Consolidation: Multiple OS instances share one physical machine, improving hardware utilization.
● Development and testing: New or experimental operating systems can be run without disturbing the host system.
● Migration and backup: A VM's entire state can be saved, copied, and moved between machines.
System calls provide an interface between user-level programs and the operating system kernel.
They allow programs to request services from the OS, such as file I/O, memory allocation, and
process management.
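As a concrete illustration, file I/O on a POSIX system goes through the open, write, read, and close system calls, each of which traps into the kernel to perform the privileged operation. A minimal sketch (the function name and file path are illustrative):

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write "hello" to a file and read it back, entirely via system calls.
 * Returns 0 on success, -1 on any failure. */
int roundtrip(const char *path) {
    char buf[16] = {0};
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);  /* syscall: open */
    if (fd < 0) return -1;
    write(fd, "hello", 5);        /* syscall: kernel copies data to the file */
    close(fd);                    /* syscall: release the descriptor */

    fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    ssize_t n = read(fd, buf, sizeof buf - 1);  /* syscall: read data back */
    close(fd);
    return (n == 5 && strcmp(buf, "hello") == 0) ? 0 : -1;
}
```

From the program's point of view these look like ordinary function calls, but each one switches the CPU into kernel mode and back.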
Memory management is the function of the OS that manages the computer's memory. Its
responsibilities include:
● Allocating and deallocating memory: Assigning memory to processes and reclaiming it
when they are finished.
● Keeping track of memory usage: Monitoring how much memory is being used by different
processes.
● Swapping: Moving processes between main memory and secondary storage to increase the
degree of multiprogramming.
● Memory protection: Preventing processes from accessing memory belonging to other
processes.
● Virtual memory: Providing an illusion of larger memory than physically available.