
Operating Systems - SIMP - Answers - BCS303

Module - 05

1. Explain different file access methods and allocation methods (contiguous, linked,
indexed).

File Access Methods:


●​ Sequential Access: Files are accessed in a linear order, one record after another. This is
common for tape drives.
●​ Direct (Random) Access: Any record in a file can be accessed directly by its address. This is
common for disk drives.
●​ Indexed Sequential Access: A combination of sequential and direct access. An index is used
to locate a block of records, and then sequential access is used within that block.

File Allocation Methods:


●​ Contiguous Allocation: Each file occupies a contiguous block of disk space. Simple to
implement but suffers from external fragmentation.
●​ Linked Allocation: Each file is a linked list of disk blocks. No external fragmentation but
suffers from slow random access.
●​ Indexed Allocation: Each file has an index block that contains pointers to all its data blocks.
Supports both sequential and direct access efficiently.
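As a small sketch of indexed allocation: an index block holds one pointer per data block, so translating a file-relative block number is a single array lookup. The disk-block numbers below are made up purely for illustration.

```python
# Indexed allocation sketch: the index block holds pointers to a file's data blocks.
# Disk-block numbers here are illustrative, not from any real file system.

def physical_block(index_block, logical_block):
    """Translate a file-relative block number to a disk block via the index block."""
    if logical_block < 0 or logical_block >= len(index_block):
        raise IndexError("logical block outside file")
    return index_block[logical_block]

# Index block for a 5-block file: logical block i lives at disk block index_block[i].
index_block = [9, 16, 1, 10, 25]
```

Because every data block is reachable in one lookup, both sequential and direct access cost O(1) per block, which is why indexed allocation supports both efficiently.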

2. Discuss various directory structures (tree-structured, acyclic graph) and methods of implementing free space lists.

Directory Structures:
●​ Tree-Structured Directories: A hierarchical directory structure with a root directory and
subdirectories forming a tree. Simple to implement but doesn't allow sharing of files or
directories.
●​ Acyclic Graph Directories: Allows sharing of files and directories by allowing a directory to
have multiple parent directories. More complex to implement but provides greater flexibility.

Methods of Implementing Free Space Lists:


●​ Bit Vector: A bit map where each bit represents a disk block. A 0 indicates a free block, and a
1 indicates an allocated block.
●​ Linked List: Free blocks are linked together in a list.
●​ Grouping: Addresses of free blocks are stored in the first free block, and so on.
●​ Counting: Stores the address of the first free block and the number of contiguous free blocks.
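A minimal sketch of bit-vector free-space management, following the convention stated above (0 = free, 1 = allocated); the bitmap contents are arbitrary example values.

```python
# Bit-vector free-space management: one bit per disk block.
# Convention as in the text: 0 = free, 1 = allocated.

def first_free_block(bitmap):
    """Return the index of the first free block, or -1 if the disk is full."""
    for i, bit in enumerate(bitmap):
        if bit == 0:
            return i
    return -1

def allocate_block(bitmap):
    """Allocate the first free block in place and return its index."""
    i = first_free_block(bitmap)
    if i != -1:
        bitmap[i] = 1
    return i
```

Real implementations scan a machine word at a time rather than bit by bit, but the logic is the same.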

3. Explain different disk scheduling algorithms (FCFS, SSTF, SCAN, C-SCAN, LOOK)
and calculate the total head movement for a given set of disk requests.

Disk Scheduling Algorithms: These algorithms determine the order in which disk requests are
serviced to minimize disk head movement and improve performance.
●​ FCFS (First-Come, First-Served): Requests are serviced in the order they arrive. Simple but
can lead to long head movements.
●​ SSTF (Shortest Seek Time First): The request with the shortest seek time from the current
head position is serviced next. Reduces head movement but can lead to starvation.
●​ SCAN (Elevator Algorithm): The disk head moves in one direction, servicing requests along
the way. When it reaches the end of the disk, it reverses direction.
●​ C-SCAN (Circular SCAN): Similar to SCAN, but when the head reaches the end of the disk,
it jumps back to the beginning without servicing any requests on the return trip.
●​ LOOK: Similar to SCAN, but the head only moves as far as the last request in each direction
before reversing.

4. With a neat diagram, describe tree-structured directories and acyclic graph directories.

5. Disk Scheduling Calculation:

Disk requests: 98, 183, 37, 122, 14, 124, 65, 67. Starting head position: 53. (Assume a 200-cylinder disk, 0-199, for SCAN and C-SCAN.)
● FCFS: 53->98->183->37->122->14->124->65->67. Total head movement:
(45+85+146+85+108+110+59+2) = 640
● SSTF: 53->65->67->37->14->98->122->124->183. Total head movement:
(12+2+30+23+84+24+2+59) = 236
● SCAN: (Assuming head moves towards higher cylinder numbers first)
53->65->67->98->122->124->183->199(end)->37->14. Total head movement:
(199-53) + (199-14) = 146+185 = 331
● C-SCAN: 53->65->67->98->122->124->183->199->0->14->37. Total head movement:
(199-53) + 199 + 37 = 146+199+37 = 382
● LOOK: 53->65->67->98->122->124->183->37->14. Total head movement:
(183-53) + (183-14) = 130+169 = 299
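A short simulation makes these head-movement totals easy to check by hand: the total for any policy is just the sum of absolute seek distances along its service order, and the SSTF order can be generated greedily (a sketch, assuming the 0-199 disk from the calculation above).

```python
def total_movement(start, order):
    """Sum of absolute head movements when servicing `order` from `start`."""
    total, pos = 0, start
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf_order(start, requests):
    """Greedy shortest-seek-time-first service order."""
    pending, pos, order = list(requests), start, []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))  # closest pending cylinder
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53
```

For SCAN and C-SCAN, the service order simply includes the end-of-disk cylinders (199, and 0 for C-SCAN) as waypoints.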

6. Disk Scheduling Calculation (5000 Cylinders):

Current head: 143. Previous request: 125 (so the head is moving towards higher cylinder numbers). Pending requests: 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130
● FCFS: 143->86->1470->913->1774->948->1509->1022->1750->130. Total:
57+1384+557+861+826+561+487+728+1620 = 7081
● SSTF: 143->130->86->913->948->1022->1470->1509->1750->1774. Total:
13+44+827+35+74+448+39+241+24 = 1745
● SCAN: (Moving towards higher cylinder numbers)
143->913->948->1022->1470->1509->1750->1774->4999(end)->130->86. Total:
(4999-143) + (4999-86) = 4856+4913 = 9769
● LOOK: (Moving towards higher cylinder numbers)
143->913->948->1022->1470->1509->1750->1774->130->86. Total:
(1774-143) + (1774-86) = 1631+1688 = 3319

7. List the common file types along with their extensions and functions.

(This is a broad question, here are some examples)


●​ .txt: Plain text files.
●​ .pdf: Portable Document Format, for formatted documents.
●​ .exe: Executable files (Windows).
●​ .jpg/.png/.gif: Image files.
●​ .mp3/.wav: Audio files.
●​ .mp4/.avi: Video files.
●​ .doc/.docx: Microsoft Word documents.
●​ .xls/.xlsx: Microsoft Excel spreadsheets.
●​ .ppt/.pptx: Microsoft PowerPoint presentations.

8. Explain in detail file attributes.

File attributes are metadata associated with a file. Examples:


●​ Name: The file's name.
●​ Type: The file's type (e.g., text, executable).
●​ Size: The file's size in bytes.
●​ Location: The file's location on the disk.
●​ Creation time: The time the file was created.
●​ Last modification time: The time the file was last modified.
●​ Last access time: The time the file was last accessed.
●​ Protection: Access permissions (e.g., read, write, execute).

9. What are the operations possible on a file?


●​ Create: Create a new file.
●​ Write: Write data to a file.
●​ Read: Read data from a file.
●​ Reposition (Seek): Move the file pointer to a specific location in the file.
●​ Delete: Delete a file.
●​ Truncate: Erase the contents of a file.
● Open: Prepare a file for use.
● Close: Release the file when it is no longer in use.


Module - 04
1. Explain paging, segmentation, and the concept of swapping. Discuss different page
replacement algorithms (LRU, FIFO, Optimal).
●​ Paging: Divides the logical address space of a process into fixed-size units called pages and
the physical memory into frames of the same size. A page table maps logical pages to physical
frames.
●​ Segmentation: Divides the logical address space of a process into variable-size units called
segments. Each segment represents a logical unit of the program (e.g., code, data, stack). A
segment table maps logical segments to physical memory addresses.
●​ Swapping: Moves entire processes between main memory and secondary storage (e.g., disk).
This allows the system to run more processes than can fit in memory at once.

Page Replacement Algorithms: These algorithms decide which page to replace when a page
fault occurs and no free frames are available.
●​ LRU (Least Recently Used): Replaces the page that has not been used for the longest time.
●​ FIFO (First-In, First-Out): Replaces the page that has been in memory for the longest time.
●​ Optimal (OPT): Replaces the page that will not be used for the longest time in the future
(ideal but not practically implementable).

2. Explain different memory allocation algorithms (First-fit, Best-fit, Worst-fit) and discuss
internal and external fragmentation.

●​ First-fit: Allocates the first free partition that is large enough.


●​ Best-fit: Allocates the smallest free partition that is large enough.
●​ Worst-fit: Allocates the largest free partition.

Fragmentation:
●​ Internal Fragmentation: Occurs within a partition when the allocated memory is larger than
the requested memory. This happens in fixed partitioning schemes.
●​ External Fragmentation: Occurs when there is enough total free memory to satisfy a request,
but the free memory is not contiguous. This happens in variable partitioning schemes.

3. Explain shared pages and hashed page tables.


●​ Shared Pages: Multiple processes can share the same physical page in memory. This is
commonly used for shared libraries or code segments. It reduces memory usage and improves
performance.
●​ Hashed Page Tables: Use a hash function to map virtual page numbers to entries in the page
table. This is used for large address spaces to reduce the size of the page table.

4. What is paging? Explain with neat diagrams- paging and segmentation.


●​ Paging: (Diagram: Show logical address space divided into pages, physical address space
divided into frames, and a page table mapping pages to frames.)
●​ Segmentation: (Diagram: Show logical address space divided into segments of different sizes,
and a segment table mapping segments to physical memory addresses with base and limit
registers.)

5. Explain the Translation Lookaside Buffer (TLB) with a neat diagram.

The TLB is a cache that stores recent translations from virtual to physical addresses. It speeds up
memory access by avoiding the need to access the page table in main memory for every memory
access.​


6. Explain (i) inverted page tables with a diagram (ii) Thrashing (iii) Demand paging.

●​ (i) Inverted Page Tables: Instead of having a page table for each process, there is one page
table for the entire physical memory. Each entry in the table corresponds to a physical frame
and stores the virtual address of the page residing in that frame. (Diagram: Show the inverted
page table with entries indexed by frame number and containing process ID and page number.)
●​ (ii) Thrashing: Occurs when a process does not have enough frames allocated to it, leading to
excessive page faults. The process spends most of its time swapping pages in and out of
memory, resulting in very low CPU utilization.
●​ (iii) Demand Paging: Loads pages into memory only when they are needed (on demand).
This reduces memory usage and I/O overhead.

7. With a diagram, discuss the steps involved in handling a page fault.

(Diagram: Show the CPU, MMU, TLB, page table, and secondary storage.)
1.​ The CPU generates a logical address.
2.​ The MMU checks the TLB for the corresponding physical address.
3.​ If there's a TLB miss, the MMU accesses the page table in main memory.
4.​ If the page is not in memory (page fault), the MMU traps to the operating system.
5.​ The OS finds a free frame or uses a page replacement algorithm to select a victim frame.
6.​ The OS reads the required page from secondary storage into the frame.
7.​ The page table and TLB are updated.
8.​ The faulting instruction is restarted.

8. Explain the different page replacement algorithms. (This is the same as question 1 above; refer to that answer.)

9. Consider the following page reference string 2,3,2,1,5,2,4,5,3,2,5,2. How many page faults
would occur in case of a) LRU b) FIFO c) Optimal page replacement algorithms assuming
3 frames. Note that initially all frames are empty.

Here's the page fault analysis (frame contents are shown after each reference; "-" means the frame is still empty):

a) LRU (Least Recently Used):

Reference   Frame 1   Frame 2   Frame 3   Page Fault?
    2          2         -         -      Yes
    3          2         3         -      Yes
    2          2         3         -      No
    1          2         3         1      Yes
    5          2         5         1      Yes (evict 3)
    2          2         5         1      No
    4          2         5         4      Yes (evict 1)
    5          2         5         4      No
    3          3         5         4      Yes (evict 2)
    2          3         5         2      Yes (evict 4)
    5          3         5         2      No
    2          3         5         2      No

Total Page Faults (LRU): 7

b) FIFO (First-In, First-Out):

Reference   Frame 1   Frame 2   Frame 3   Page Fault?
    2          2         -         -      Yes
    3          2         3         -      Yes
    2          2         3         -      No
    1          2         3         1      Yes
    5          5         3         1      Yes (evict 2, oldest)
    2          5         2         1      Yes (evict 3)
    4          5         2         4      Yes (evict 1)
    5          5         2         4      No
    3          3         2         4      Yes (evict 5)
    2          3         2         4      No
    5          3         5         4      Yes (evict 2)
    2          3         5         2      Yes (evict 4)

Total Page Faults (FIFO): 9

c) Optimal (OPT):

Reference   Frame 1   Frame 2   Frame 3   Page Fault?
    2          2         -         -      Yes
    3          2         3         -      Yes
    2          2         3         -      No
    1          2         3         1      Yes
    5          2         3         5      Yes (evict 1: never used again)
    2          2         3         5      No
    4          4         3         5      Yes (evict 2: next use is farthest)
    5          4         3         5      No
    3          4         3         5      No
    2          2         3         5      Yes (evict 4: never used again)
    5          2         3         5      No
    2          2         3         5      No

Total Page Faults (OPT): 6
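The three policies can also be simulated directly with a small sketch (eviction ties in OPT are broken arbitrarily, which does not change the fault count for this reference string):

```python
def count_faults(refs, frames, policy):
    """Simulate FIFO / LRU / OPT page replacement; return the number of page faults."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":              # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:            # memory full: evict a victim
            if policy in ("FIFO", "LRU"):
                memory.pop(0)                # oldest inserted / least recently used
            else:                            # OPT: evict the page used farthest ahead
                future = refs[i + 1:]
                victim = max(memory, key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
                memory.remove(victim)
        memory.append(page)
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
```

Note that FIFO and LRU share the same list mechanics; the only difference is that LRU moves a page to the back of the list on every hit, while FIFO never reorders.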


Module - 03

1. Explain semaphores and their usage in solving the dining philosophers and bounded
buffer problems.

A semaphore is an integer variable that, apart from initialization, is accessed only through two
atomic operations:
●​ wait() (or P()): Decrements the semaphore value. If the value becomes negative, the process
executing wait() is blocked until the value becomes non-negative.
●​ signal() (or V()): Increments the semaphore value. If there are processes blocked on the
semaphore, one of them is unblocked.

Dining Philosophers Problem:

Five philosophers are sitting at a circular table with a bowl of spaghetti in front of each and a
single fork between each pair of philosophers. To eat, a philosopher needs two forks – the one on
their left and the one on their right. The problem is to design a protocol that allows the
philosophers to eat without deadlock or starvation.

Semaphore Solution:

#define N 5                          // Number of philosophers

semaphore fork[N];                   // One semaphore per fork, each initialized to 1

void philosopher(int i) {
    while (true) {
        think();
        wait(fork[i]);               // Pick up left fork
        wait(fork[(i + 1) % N]);     // Pick up right fork
        eat();
        signal(fork[(i + 1) % N]);   // Put down right fork
        signal(fork[i]);             // Put down left fork
    }
}

This solution can lead to deadlock if all philosophers pick up their left forks simultaneously. A
better solution is to allow only four philosophers to be at the table at the same time, or to use an
asymmetry in picking up forks (e.g., philosopher 0 picks up the right fork first).

Bounded Buffer Problem (Producer-Consumer Problem):

A producer process generates items and places them in a buffer. A consumer process takes items
from the buffer. The buffer has a fixed size. The problem is to synchronize the producer and
consumer so that the producer doesn't try to add to a full buffer and the consumer doesn't try to
take from an empty buffer.

Semaphore Solution:

semaphore mutex = 1;   // Mutual exclusion for buffer access
semaphore empty = N;   // Number of empty slots in the buffer (initially N)
semaphore full = 0;    // Number of full slots in the buffer (initially 0)

void producer() {
    while (true) {
        produce_item();
        wait(empty);       // Wait for an empty slot
        wait(mutex);       // Acquire mutex for buffer access
        insert_item();
        signal(mutex);     // Release mutex
        signal(full);      // Signal that a slot is now full
    }
}

void consumer() {
    while (true) {
        wait(full);        // Wait for a full slot
        wait(mutex);       // Acquire mutex for buffer access
        remove_item();
        signal(mutex);     // Release mutex
        signal(empty);     // Signal that a slot is now empty
        consume_item();
    }
}
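The pseudocode maps directly onto Python's threading primitives; the sketch below runs one producer and one consumer over a 5-slot buffer (the buffer size and item count are arbitrary choices for the demonstration):

```python
import threading
from collections import deque

N = 5                                  # buffer capacity
ITEMS = 100                            # number of items to transfer (arbitrary)

buffer = deque()
mutex = threading.Semaphore(1)         # mutual exclusion for buffer access
empty = threading.Semaphore(N)         # counts empty slots, initially N
full = threading.Semaphore(0)          # counts full slots, initially 0
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()                # wait for an empty slot
        with mutex:                    # acquire mutex for buffer access
            buffer.append(item)
        full.release()                 # one more full slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()                 # wait for a full slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # one more empty slot

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because `empty` and `full` count slots while `mutex` guards the buffer itself, the producer can never overrun a full buffer and the consumer can never read from an empty one.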

2. What are deadlocks? Explain the necessary conditions for deadlock occurrence. Discuss
deadlock prevention methods, deadlock avoidance (Banker's algorithm), and deadlock
detection and recovery.

A deadlock is a situation where two or more processes are blocked indefinitely, waiting for each
other to release resources.

Necessary Conditions for Deadlock (Coffman Conditions):


●​ Mutual Exclusion: Resources are allocated in a mutually exclusive manner (only one process
can hold a resource at a time).
●​ Hold and Wait: A process holding at least one resource is waiting to acquire additional
resources held by other processes.
●​ No Preemption: Resources cannot be forcibly taken away from a process holding them.
●​ Circular Wait: There exists a set {P1, P2, ..., Pn} of waiting processes such that P1 is waiting
for a resource held by P2, P2 is waiting for a resource held by P3, ..., Pn is waiting for a
resource held by P1.

Deadlock Prevention: Prevent one or more of the Coffman conditions from occurring.
Examples:
●​ Eliminate Mutual Exclusion: Not always possible (e.g., printers).
●​ Eliminate Hold and Wait: Require processes to request all their resources at once.
●​ Allow Preemption: Allow the OS to take resources away from a process.
●​ Eliminate Circular Wait: Impose a total ordering on resources and require processes to
request resources in increasing order.

Deadlock Avoidance (Banker's Algorithm): The OS has knowledge of the maximum resource
requirements of each process. It uses this information to make decisions about resource
allocation that avoid entering a deadlock state.
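The heart of the Banker's algorithm is its safety check: repeatedly find a process whose remaining need can be met from the available resources, let it finish, and reclaim its allocation. The sketch below uses a common textbook instance of the matrices (not data from this question):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: True iff a safe sequence exists."""
    work = list(available)
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
    return all(finish)

# A common textbook instance: 5 processes, 3 resource types.
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
```

A request is granted only if pretending to grant it still leaves the system in a safe state according to `is_safe`.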

Deadlock Detection and Recovery: Allow deadlocks to occur and then detect and recover from
them.
●​ Detection: Use algorithms to detect cycles in the resource allocation graph.
●​ Recovery:
○​ Process Termination: Abort one or more deadlocked processes.
○​ Resource Preemption: Take resources away from one or more deadlocked processes.

3. Explain in brief race conditions.

A race condition occurs when multiple processes or threads access and manipulate shared data
concurrently, and the final outcome depends on the particular order in which the accesses take
place. This can lead to unpredictable and incorrect results.

4. Define the term critical section. What are the requirements for critical section problems?

A critical section is a section of code where a process accesses shared resources.

Requirements for Critical Section Problems:


●​ Mutual Exclusion: Only one process can be in its critical section at a time.
●​ Progress: If no process is in its critical section and some processes want to enter their critical
sections, only those processes that are not in their remainder sections can participate in
deciding which will enter its critical section next, and this selection cannot be postponed
indefinitely.
●​ Bounded Waiting: There must be a limit on the amount of time a process has to wait to enter
its critical section.

5. Give the solution for the readers/writers problem using semaphores.

The readers/writers problem involves multiple processes that want to access a shared resource
(e.g., a file). Some processes are readers (only read the data), and some are writers (modify the
data). Multiple readers can access the resource concurrently, but only one writer can access it at a
time, and no readers can be present while a writer is writing.

Semaphore Solution:

semaphore mutex = 1;   // Mutual exclusion for readcount
semaphore wrt = 1;     // Mutual exclusion for writers
int readcount = 0;     // Number of readers currently accessing the resource

void writer() {
    wait(wrt);         // Acquire write lock
    // Write to the resource
    signal(wrt);       // Release write lock
}

void reader() {
    wait(mutex);       // Acquire mutex to modify readcount
    readcount++;
    if (readcount == 1) {
        wait(wrt);     // First reader acquires the write lock (blocks writers)
    }
    signal(mutex);     // Release mutex

    // Read from the resource

    wait(mutex);       // Acquire mutex to modify readcount
    readcount--;
    if (readcount == 0) {
        signal(wrt);   // Last reader releases the write lock
    }
    signal(mutex);     // Release mutex
}

6. Explain Peterson's algorithm.

Peterson's algorithm is a classic software-based solution to the critical section problem for two
processes.

int flag[2];  // flag[i] = true means process i wants to enter the critical section
int turn;     // Indicates whose turn it is to enter the critical section

void process_i(int i) {            // i is either 0 or 1
    int j = 1 - i;                 // j is the other process
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);  // Busy wait
    // Critical section
    flag[i] = false;
    // Remainder section
}
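Because Peterson's algorithm is so small, its mutual-exclusion property can be verified exhaustively by enumerating every possible interleaving of the two processes' atomic steps. This is a model-checking sketch written for illustration, not part of the algorithm itself:

```python
from collections import deque

# Program counters: 0 = set flag, 1 = set turn, 2 = spin on the guard,
# 3 = inside the critical section (the next step leaves it and clears the flag).
def step(state, i):
    flags, turn, pcs = state
    flags, pcs = list(flags), list(pcs)
    j = 1 - i
    pc = pcs[i]
    if pc == 0:
        flags[i] = True; pcs[i] = 1
    elif pc == 1:
        turn = j; pcs[i] = 2
    elif pc == 2:
        if not flags[j] or turn == i:
            pcs[i] = 3                 # guard open: enter the critical section
    else:                              # pc == 3: leave the critical section
        flags[i] = False; pcs[i] = 0
    return (tuple(flags), turn, tuple(pcs))

def explore():
    """Breadth-first search over every interleaving of the two processes."""
    start = ((False, False), 0, (0, 0))
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for i in (0, 1):
            nxt = step(state, i)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen
```

If the search ever reached a state with both program counters at 3, mutual exclusion would be violated; the exhaustive exploration shows it never does, while each process individually can still reach its critical section.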

7. Explain the resource allocation graph.

A resource allocation graph is a directed graph used to model resource allocation in a system. It
consists of:
●​ Vertices: Represent processes (circles) and resources (rectangles).
●​ Edges:
○​ Request edge: Directed edge from a process to a resource, indicating that the process is
requesting that resource.
○​ Assignment edge: Directed edge from a resource to a process, indicating that the resource
has been allocated to that process.
A cycle in the resource allocation graph indicates a potential deadlock.

8. State the dining philosopher problem and give the solution for the same using monitors.

Monitor Solution:

A monitor is a high-level synchronization construct that provides mutual exclusion and condition
variables.

monitor DiningPhilosophers {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();          // Wait until both forks are free
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);           // Left neighbour may now be able to eat
        test((i + 1) % 5);           // Right neighbour may now be able to eat
    }

    void test(int i) {
        if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY &&
            state[(i + 1) % 5] != EATING) {
            state[i] = EATING;
            self[i].signal();
        }
    }
}

A philosopher calls pickup(i) before eating and putdown(i) afterwards. Since only one philosopher can be active inside the monitor at a time, a philosopher starts eating only when neither neighbour is eating, so deadlock cannot occur (starvation is still possible).

Module - 02

1. What is a process? Explain the different states of a process with a state diagram. Also,
explain the structure of the Process Control Block (PCB).

A process is a program in execution. It's a dynamic entity, unlike a program, which is a static set
of instructions.

Process States and State Diagram:

A process can be in one of several states:


●​ New: The process is being created.
●​ Ready: The process is waiting to be assigned to a processor.
● Running: The process is currently being executed by the CPU.
●​ Blocked (Waiting): The process is waiting for an event to occur (e.g., I/O completion).
●​ Terminated (Completed): The process has finished execution.

State Diagram:

         admitted          dispatched              exit
New --------------> Ready --------------> Running --------------> Terminated
                      ^  ^                  |  |
                      |  |  time slice over |  |
                      |  +------------------+  |
                      |                        |  I/O or event wait
                      +------- Blocked <-------+
                        event occurs

●​ New -> Ready: Process is admitted to the system.


●​ Ready -> Running: Scheduler selects the process for execution.
●​ Running -> Blocked: Process requests I/O or some other event that requires waiting.
●​ Running -> Ready: Time slice expires (preemptive scheduling) or a higher-priority process
becomes ready.
●​ Blocked -> Ready: The event the process was waiting for occurs.
●​ Running -> Terminated: Process completes its execution.

Process Control Block (PCB):

The PCB is a data structure that stores information about each process. It's essential for the OS to
manage processes effectively. Key components of a PCB include:
●​ Process ID (PID): A unique identifier for the process.
●​ Process State: The current state of the process (new, ready, running, blocked, terminated).
●​ Program Counter (PC): The address of the next instruction to be executed.
●​ CPU Registers: The values of the CPU registers for the process.
●​ Memory Management Information: Information about the memory allocated to the process
(e.g., base and limit registers, page tables).
●​ I/O Status Information: Information about I/O devices allocated to the process, open files,
etc.
●​ Scheduling Information: Process priority, scheduling queue pointers, etc.
●​ Accounting Information: CPU time used, time limits, etc.

2. Explain the different CPU scheduling algorithms and calculate the average waiting time
and average turnaround time for the given set of processes.

Process   Arrival Time   Burst Time
  P1           0             4
  P2           1             8
  P3           2             5
  P4           3             9

a) FCFS (First-Come, First-Served):

● Gantt Chart: | P1(0-4) | P2(4-12) | P3(12-17) | P4(17-26) |
● Waiting Times: P1=0, P2=3, P3=10, P4=14
● Turnaround Times: P1=4, P2=11, P3=15, P4=23
● Avg. Waiting Time: (0+3+10+14)/4 = 6.75
● Avg. Turnaround Time: (4+11+15+23)/4 = 13.25

b) SJF (Shortest Job First) - Non-Preemptive:

● Gantt Chart: | P1(0-4) | P3(4-9) | P2(9-17) | P4(17-26) |
● Waiting Times: P1=0, P3=2, P2=8, P4=14
● Turnaround Times: P1=4, P3=7, P2=16, P4=23
● Avg. Waiting Time: (0+2+8+14)/4 = 6
● Avg. Turnaround Time: (4+7+16+23)/4 = 12.5

c) SJF (Shortest Job First) - Preemptive (Shortest Remaining Time First - SRTF):
● P1 is never preempted: its remaining time (3 at t=1, 2 at t=2, 1 at t=3) is always shorter than each newly arrived burst, so the schedule is identical to non-preemptive SJF.
● Gantt Chart: | P1(0-4) | P3(4-9) | P2(9-17) | P4(17-26) |
● Waiting Times: P1=0, P3=2, P2=8, P4=14
● Turnaround Times: P1=4, P3=7, P2=16, P4=23
● Avg. Waiting Time: (0+2+8+14)/4 = 6
● Avg. Turnaround Time: (4+7+16+23)/4 = 12.5

d) Priority (Assuming lower number = higher priority. If priorities are not explicitly given,
this cannot be calculated.)
●​ Cannot be calculated without priority values.

e) Round Robin (Time Quantum = 1; assuming newly arrived processes join the ready queue ahead of the preempted one):

● Gantt Chart: | P1 | P2 | P1 | P3 | P2 | P4 | P1 | P3 | P2 | P4 | P1 | P3 | P2 | P4 | P3 | P2 | P4 | P3 | P2 | P4 | P2 | P4 | P2 | P4(23-26) |
● Completion Times: P1=11, P2=23, P3=18, P4=26
● Waiting Times: P1=7, P2=14, P3=11, P4=14
● Turnaround Times: P1=11, P2=22, P3=16, P4=23
● Avg. Waiting Time: (7+14+11+14)/4 = 11.5
● Avg. Turnaround Time: (11+22+16+23)/4 = 18
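The FCFS and non-preemptive SJF figures can be computed with a short simulation over (arrival, burst) pairs, which is a handy way to check this kind of exam calculation:

```python
def fcfs(procs):
    """procs: list of (arrival, burst) pairs in arrival order.
    Returns (average waiting time, average turnaround time)."""
    time, waits, tats = 0, [], []
    for arrival, burst in procs:
        start = max(time, arrival)       # CPU may be idle until the process arrives
        waits.append(start - arrival)
        time = start + burst
        tats.append(time - arrival)
    return sum(waits) / len(procs), sum(tats) / len(procs)

def sjf(procs):
    """Non-preemptive SJF over (arrival, burst) pairs."""
    pending = list(procs)
    time, waits, tats = 0, [], []
    while pending:
        ready = [p for p in pending if p[0] <= time]
        if not ready:                    # idle until the next arrival
            time = min(p[0] for p in pending)
            continue
        arrival, burst = min(ready, key=lambda p: p[1])  # shortest ready burst
        pending.remove((arrival, burst))
        waits.append(time - arrival)
        time += burst
        tats.append(time - arrival)
    return sum(waits) / len(procs), sum(tats) / len(procs)

procs = [(0, 4), (1, 8), (2, 5), (3, 9)]
```

Waiting time falls out as (start - arrival) in FCFS and (dispatch time - arrival) in SJF, since neither policy preempts a running process.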

3. Discuss (i) multilevel queue scheduling and multilevel feedback queue scheduling. (ii)
Non-preemptive and preemptive scheduling.

i) Multilevel Queue Scheduling:

Processes are divided into different queues based on their properties (e.g., interactive, batch,
system). Each queue has its own scheduling algorithm. For example:
●​ System processes: Highest priority, often scheduled with Round Robin.
●​ Interactive processes: High priority, often scheduled with Round Robin.
●​ Batch processes: Lower priority, often scheduled with FCFS.

Problems: Strict hierarchy can lead to starvation of lower-priority queues.


Multilevel Feedback Queue Scheduling:

An improvement over multilevel queues. Processes can move between queues based on their
behavior. For example:
●​ A process that uses too much CPU time can be moved to a lower-priority queue.
●​ A process that waits too long in a lower-priority queue can be moved to a higher-priority
queue (aging).

This approach provides more flexibility and prevents starvation.

ii) Non-Preemptive and Preemptive Scheduling (Revisited):


●​ Non-Preemptive: Once a process is allocated the CPU, it runs until it completes or
voluntarily releases the CPU. Simple to implement but can lead to long waiting times for short
processes if a long process is running. Examples: FCFS, Non-preemptive SJF.
●​ Preemptive: The OS can interrupt a running process and allocate the CPU to another process.
More complex to implement but provides better responsiveness and fairness. Examples:
Round Robin, Preemptive SJF (SRTF), Priority scheduling with preemption.

Key differences are responsiveness and overhead. Preemptive scheduling has higher overhead
due to context switching but provides better responsiveness for interactive systems.
Non-preemptive is simpler but less responsive.

4. Explain the short-term, medium-term, and long-term schedulers.


●​ Short-term scheduler (CPU scheduler): Selects a process from the ready queue and
allocates the CPU to it. It is invoked very frequently (milliseconds) to ensure efficient CPU
utilization.
●​ Medium-term scheduler: Swaps processes in and out of main memory. It reduces the degree
of multiprogramming to improve system performance or to free up memory.
●​ Long-term scheduler (Job scheduler): Selects processes from a job pool and loads them into
memory for execution. It controls the degree of multiprogramming by determining which jobs
are admitted to the system.

5. Explain the multithread model. What are the benefits of multithreaded programming?

A thread is a lightweight unit of execution within a process. Multithreading allows multiple threads to run concurrently within a single process, sharing the same code, data, and resources.

Benefits of Multithreaded Programming:


●​ Responsiveness: A program can remain responsive even if part of it is blocked or performing
a long operation.
●​ Resource Sharing: Threads share the same memory space, making it easier to share data
between them.
●​ Economy: Creating and switching between threads is less expensive than creating and
switching between processes.
●​ Utilization of Multiprocessor Architectures: Threads can run in parallel on different
processors, increasing performance.

6. Compare user-level threads and kernel-level threads.

Feature          User-Level Threads                        Kernel-Level Threads
Management       Managed by a user-level thread library    Managed by the operating system kernel
Switching        Fast (no kernel involvement)              Slower (requires kernel involvement)
Blocking         If one thread blocks, the entire          If one thread blocks, other
                 process blocks                            threads can continue
Implementation   Easy to implement                         More complex to implement
Examples         POSIX Pthreads (user-space                Windows threads, Linux
                 implementations), Java threads            threads

7. Explain non-preemptive and preemptive scheduling.


●​ Non-preemptive scheduling: Once a process is allocated the CPU, it runs until it completes
or voluntarily releases the CPU (e.g., by performing I/O).
●​ Preemptive scheduling: The OS can interrupt a running process and allocate the CPU to
another process based on scheduling criteria (e.g., higher priority).

8. Scheduling Calculations (FCFS, SJF, Priority, Round Robin)

Burst times: P1=10 ms, P2=8 ms, P3=2 ms, P4=4 ms; all processes arrive at time 0.

1) FCFS (First-Come, First-Served):


●​ Gantt Chart: | P1(10) | P2(8) | P3(2) | P4(4) |
●​ Waiting Times: P1=0, P2=10, P3=18, P4=20
●​ Turnaround Times: P1=10, P2=18, P3=20, P4=24
●​ Average Waiting Time: (0+10+18+20)/4 = 12 ms
●​ Average Turnaround Time: (10+18+20+24)/4 = 18 ms

2) SJF (Shortest Job First - Non-Preemptive):


●​ Gantt Chart: | P3(2) | P4(4) | P2(8) | P1(10) |
●​ Waiting Times: P3=0, P4=2, P2=6, P1=14
●​ Turnaround Times: P3=2, P4=6, P2=14, P1=24
●​ Average Waiting Time: (0+2+6+14)/4 = 5.5 ms
●​ Average Turnaround Time: (2+6+14+24)/4 = 11.5 ms

3) Priority (Non-Preemptive): (Lower number = Higher Priority)


●​ Gantt Chart: | P2(8) | P4(4) | P1(10) | P3(2) |
●​ Waiting Times: P2=0, P4=8, P1=12, P3=22
●​ Turnaround Times: P2=8, P4=12, P1=22, P3=24
●​ Average Waiting Time: (0+8+12+22)/4 = 10.5 ms
●​ Average Turnaround Time: (8+12+22+24)/4 = 16.5 ms

4) Round Robin (Time Quantum = 2 ms):

● Gantt Chart: | P1(0-2) | P2(2-4) | P3(4-6) | P4(6-8) | P1(8-10) | P2(10-12) | P4(12-14) | P1(14-16) | P2(16-18) | P1(18-20) | P2(20-22) | P1(22-24) |
● Waiting Times: P1=14, P2=14, P3=4, P4=10
● Turnaround Times: P1=24, P2=22, P3=6, P4=14
● Average Waiting Time: (14+14+4+10)/4 = 10.5 ms
● Average Turnaround Time: (24+22+6+14)/4 = 16.5 ms
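When every process arrives at time 0, Round Robin reduces to a simple circular queue, which makes the quantum-2 figures easy to check with a sketch:

```python
from collections import deque

def round_robin(bursts, quantum):
    """All processes arrive at t = 0. Returns (avg waiting, avg turnaround)."""
    remaining = list(bursts)
    completion = [0] * len(bursts)
    queue = deque(range(len(bursts)))    # ready queue of process indices
    time = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i]) # run one quantum (or less, if finishing)
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)              # re-enter at the tail of the queue
        else:
            completion[i] = time
    n = len(bursts)
    tats = completion                    # arrival is 0, so turnaround = completion
    waits = [t - b for t, b in zip(tats, bursts)]
    return sum(waits) / n, sum(tats) / n
```

Waiting time here is simply turnaround minus burst, since each process's total service time equals its burst.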

9. Scheduling with Given Processes

Here's the analysis for processes P1-P5 (burst times 10, 1, 2, 1, 5 ms; priority order P2 > P5 > P1 > P3 > P4, as implied by the Priority Gantt chart; all arriving at time 0):

1) Gantt Charts:
● FCFS: | P1(10) | P2(1) | P3(2) | P4(1) | P5(5) |
● SJF: | P2(1) | P4(1) | P3(2) | P5(5) | P1(10) |
● Priority: | P2(1) | P5(5) | P1(10) | P3(2) | P4(1) |
● Round Robin (Quantum=1): | P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 | P1 | P1 | P1 | P1 |

2) Turnaround Times:

Algorithm     P1    P2    P3    P4    P5
FCFS          10    11    13    14    19
SJF           19     1     4     2     9
Priority      16     1    18    19     6
Round Robin   19     2     7     4    14

3) Waiting Times:

Algorithm     P1    P2    P3    P4    P5
FCFS           0    10    11    13    14
SJF            9     0     2     1     4
Priority       6     0    16    18     1
Round Robin    9     1     5     3     9

Module - 01

1. What is an operating system? Explain the different operating system services and discuss
the layered and microkernel approaches.

An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. It acts as an intermediary between the user and the computer hardware.

Operating System Services:


● User Interface (UI): Provides a way for users to interact with the system (e.g., command-line interface, graphical user interface).
● Program Execution: Loads programs into memory and executes them.
● I/O Operations: Handles input/output operations between programs and peripheral devices.
● File System Manipulation: Manages files and directories, allowing users to create, delete, read, and write files.
● Communications: Facilitates communication between processes on the same or different computers.
● Error Detection: Detects and handles errors that occur during system operation.
● Resource Allocation: Allocates system resources (e.g., CPU time, memory, I/O devices) to different processes.
● Accounting: Keeps track of resource usage for billing or performance monitoring.
● Security: Provides security features such as user authentication and access control.

Layered Approach: The OS is organized into layers, with each layer providing services to the layer above it and using services from the layer below it. This simplifies design and implementation but can reduce efficiency due to overhead between layers.

Microkernel Approach: The OS has a small kernel that provides only essential services (e.g.,
inter-process communication, basic memory management). Other services (e.g., file systems,
device drivers) are implemented as user-level processes. This improves modularity and
robustness but can also reduce performance due to increased inter-process communication.

2. Define the essential properties of the following types of operating systems:


●​ a) Time-sharing: Allows multiple users to share a computer simultaneously by rapidly
switching the CPU between them. Key properties: Multitasking, interactive, fast response
times.
●​ b) Distributed: Runs on a network of computers, allowing them to work together as a single
system. Key properties: Resource sharing, high availability, fault tolerance.
●​ c) Real-time: Designed for applications with strict time constraints, where actions must occur
within precise deadlines. Key properties: Deterministic response times, reliability,
responsiveness to external events.
●​ d) Multiprogramming: Allows multiple programs to reside in memory at the same time,
increasing CPU utilization by overlapping I/O and CPU operations. Key properties: Increased
CPU utilization, improved throughput.

3. Explain the multiprocessor system and differentiate between symmetric and asymmetric
multiprocessing.

A multiprocessor system uses multiple CPUs within a single computer system. This allows for
parallel processing and increased performance.
●​ Symmetric Multiprocessing (SMP): All CPUs are treated equally and can run any process.
They share the same memory and I/O bus.
●​ Asymmetric Multiprocessing (AMP): One CPU is designated as the master processor, and
the others are slave processors. The master processor assigns tasks to the slave processors.
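In an SMP system, every CPU the kernel reports is available to run any ready process. A minimal sketch of inspecting this, using only Python's standard `os` module (illustrative, not part of any OS course syllabus):

```python
import os

def schedulable_cpus():
    """Number of CPUs the kernel exposes; under SMP, any of these
    can be scheduled to run any process."""
    return os.cpu_count() or 1  # cpu_count() may return None on exotic platforms

if __name__ == "__main__":
    print(f"CPUs visible to the scheduler: {schedulable_cpus()}")
```

On an AMP system, by contrast, only the master processor would make such scheduling decisions, so user code would not normally observe the slave processors this way.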

4. What are the main purposes of an operating system? Explain the five major activities of
an operating system concerning process management.
The main purposes of an OS are:
●​ Convenience: Makes the computer easier to use.
●​ Efficiency: Manages resources efficiently.
●​ Ability to Evolve: Allows for the introduction of new features without disrupting existing
services.

Five Major Activities of OS Concerning Process Management:


●​ Creating and deleting processes: Initiating and terminating processes.
●​ Suspending and resuming processes: Pausing and restarting process execution.
●​ Providing mechanisms for process synchronization: Coordinating the execution of multiple
processes that share resources.
●​ Providing mechanisms for process communication: Enabling processes to exchange data.
●​ Providing mechanisms for deadlock handling: Preventing or resolving situations where
processes are blocked indefinitely waiting for each other.
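The first, second, and fourth activities can be sketched with Python's `subprocess` module, which is a portable stand-in for the underlying process-creation and wait services the OS provides (a sketch, not the kernel mechanism itself):

```python
import subprocess
import sys

def create_and_reap(code: str):
    """Create a child process, exchange data with it over a pipe,
    then wait for it to terminate and collect its exit status."""
    child = subprocess.Popen(
        [sys.executable, "-c", code],       # process creation: OS builds a new process
        stdout=subprocess.PIPE, text=True,
    )
    out, _ = child.communicate()            # process communication via a pipe
    return child.returncode, out.strip()    # returncode is set once the child terminates

rc, message = create_and_reap("print('child finished')")
```

An exit status of 0 confirms the child was created, ran, and was reaped normally; synchronization primitives (semaphores, monitors) and deadlock handling sit below this level and are not shown here.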

5. What is the difference between hard real-time and soft real-time systems?
●​ Hard Real-Time: Missing a deadline results in a complete system failure. Strict time
constraints must be met without exception.
●​ Soft Real-Time: Missing a deadline results in degraded performance but not a complete
system failure. Time constraints are important but not absolutely critical.

6. Explain the distinguishing features of real-time systems and multiprocessor systems.


●​ Real-time systems: Focus on meeting strict time deadlines. Deterministic behavior is
crucial.
●​ Multiprocessor systems: Focus on increasing processing power through parallel execution.

7. What is a virtual machine? With a neat diagram, explain the working of a virtual
machine. What are the benefits of a virtual machine?

A virtual machine (VM) is a software emulation of a computer system. It runs on top of a
physical machine (the host) and provides a virtualized hardware environment to a guest
operating system.

(Diagram: A simple representation)

+------------------------+------------------------+
|     Applications 1     |     Applications 2     |
+------------------------+------------------------+
|       Guest OS 1       |       Guest OS 2       |
+------------------------+------------------------+
|   Virtual Hardware 1   |   Virtual Hardware 2   |
+------------------------+------------------------+
|      Virtual Machine Manager (Hypervisor)       |
+-------------------------------------------------+
|             Host Operating System               |
+-------------------------------------------------+
|        Physical Hardware (Host Machine)         |
+-------------------------------------------------+

Benefits of a Virtual Machine:


●​ Resource Consolidation: Multiple VMs can run on a single physical machine, improving
hardware utilization.
●​ Isolation: VMs are isolated from each other, preventing problems in one VM from affecting
others.
●​ Portability: VMs can be easily moved between different physical machines.
●​ Testing and Development: VMs provide a safe environment for testing software and
configurations.

8. What is the purpose of system calls?

System calls provide an interface between user-level programs and the operating system kernel.
They allow programs to request services from the OS, such as file I/O, memory allocation, and
process management.
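For example, the functions in Python's `os` module are thin wrappers over POSIX-style system calls. The sketch below performs one open/write/lseek/read/close cycle; the `tempfile` helper is only a convenience for picking a file name and is not part of the system-call interface:

```python
import os
import tempfile

def syscall_roundtrip(data: bytes) -> bytes:
    """Write bytes to a file and read them back using thin wrappers
    over the kernel's file I/O system calls."""
    fd, path = tempfile.mkstemp()         # issues open(2) with O_CREAT under the hood
    try:
        os.write(fd, data)                # write(2): kernel copies data into the file
        os.lseek(fd, 0, os.SEEK_SET)      # lseek(2): move the file offset back to 0
        return os.read(fd, len(data))     # read(2): kernel copies the bytes back out
    finally:
        os.close(fd)                      # close(2): release the file descriptor
        os.unlink(path)                   # unlink(2): remove the temporary file
```

Each call traps into the kernel, which performs the privileged work (touching the disk, updating file tables) on the program's behalf.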

9. Explain the function of memory management.

Memory management is the function of the OS that manages the computer's memory. Its
responsibilities include:
●​ Allocating and deallocating memory: Assigning memory to processes and reclaiming it
when they are finished.
●​ Keeping track of memory usage: Monitoring how much memory is being used by different
processes.
●​ Swapping: Moving processes between main memory and secondary storage to increase the
degree of multiprogramming.
●​ Memory protection: Preventing processes from accessing memory belonging to other
processes.
●​ Virtual memory: Providing an illusion of larger memory than physically available.
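A small illustration of the allocate/use/deallocate cycle: Python's `mmap` module asks the kernel for an anonymous, zero-filled memory region, which is exactly the kind of request the memory manager services (a sketch of the concept, not a treatment of paging or protection):

```python
import mmap

def anonymous_mapping_demo() -> bytes:
    """Request a page-aligned anonymous memory region from the OS,
    write into it, and release it back to the kernel."""
    region = mmap.mmap(-1, mmap.PAGESIZE)  # allocation: kernel maps fresh zeroed pages
    region[:5] = b"hello"                  # use: first write faults the page in
    data = bytes(region[:5])
    region.close()                         # deallocation: pages returned to the OS
    return data
```

Because the mapping is demand-paged, the kernel need not back the region with physical frames until it is actually touched, which is the same mechanism that makes virtual memory appear larger than physical memory.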
