OS Question Answer

1. Define a Process and Explain the Different States of a Process
A process is an instance of a program in execution, which includes the program code,
current activity (represented by the program counter), contents of the processor registers,
and the process stack. It is the fundamental unit of work in an operating system.
Different States of a Process:
1. New: The process is being created.
2. Ready: The process is waiting to be assigned to a processor. It has all necessary
resources but is not currently executing.
3. Running: The process is currently being executed on the CPU.
4. Waiting (Blocked): The process is waiting for some event to occur (like I/O
completion) or for resources to become available.
5. Terminated: The process has finished execution and is being removed from the
system.
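As a purely illustrative sketch (the structure and names below are hypothetical and not taken from any particular operating system), the five states above could be recorded in a process control block along these lines:

#include <stdio.h>

/* Hypothetical process states mirroring the five states listed above. */
typedef enum {
    STATE_NEW,        /* being created                      */
    STATE_READY,      /* waiting to be assigned a processor */
    STATE_RUNNING,    /* instructions are being executed    */
    STATE_WAITING,    /* blocked on an event such as I/O    */
    STATE_TERMINATED  /* finished execution                 */
} ProcState;

/* A minimal, illustrative process control block (PCB). */
typedef struct {
    int           pid;    /* process identifier    */
    ProcState     state;  /* current state         */
    unsigned long pc;     /* saved program counter */
} PCB;

int main(void) {
    PCB p = { .pid = 1, .state = STATE_NEW, .pc = 0 };
    p.state = STATE_READY;    /* admitted:   New   -> Ready   */
    p.state = STATE_RUNNING;  /* dispatched: Ready -> Running */
    printf("pid %d is in state %d\n", p.pid, (int)p.state);
    return 0;
}

A real PCB also holds register contents, memory-management information, accounting data, and I/O status, but the state field is what the transitions above update.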
2. Explain Long Term Scheduler, Short Term Scheduler, and Medium Term Scheduler
1. Long Term Scheduler:
o Also known as the job scheduler, it determines which processes are admitted
to the system for processing. It controls the degree of multiprogramming by
managing the number of processes in memory. Long-term scheduling occurs relatively infrequently; it selects processes from the job queue and loads them into memory for execution.
2. Short Term Scheduler:
o Also known as the CPU scheduler, it selects from among the processes that are
in the ready state and allocates CPU time to one of them. This scheduler is
invoked frequently (every few milliseconds) and is responsible for ensuring
efficient CPU utilization and quick response times.
3. Medium Term Scheduler:
o It temporarily removes processes from main memory and places them in
secondary storage (swapping out) to improve the process mix and optimize the
CPU and memory utilization. By swapping processes out and later back in, the medium-term scheduler adjusts the degree of multiprogramming and relieves memory pressure.
3. a) Define Context Switching and Co-operating Process
1. Context Switching:
o It is the process of storing the state of a currently running process and restoring
the state of a previously paused process. This allows multiple processes to
share a single CPU. During a context switch, the OS saves the context of the
current process and loads the context of the next scheduled process.
2. Co-operating Process:
o A co-operating process is one that can be affected by or can affect the
execution of another process. These processes share data and require
synchronization mechanisms (like semaphores or mutexes) to manage access
to shared resources, preventing race conditions.
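As a rough, hypothetical sketch of context switching (a real kernel saves far more state and does this in architecture-specific assembly), the operation amounts to copying the CPU context into the outgoing process's PCB and loading the incoming process's saved context:

/* Hypothetical CPU context and PCB used only for illustration. */
typedef struct {
    unsigned long pc;        /* program counter   */
    unsigned long sp;        /* stack pointer     */
    unsigned long regs[16];  /* general registers */
} CpuContext;

typedef struct {
    int        pid;          /* process identifier             */
    CpuContext context;      /* saved context for this process */
} PCB;

/* Save the outgoing process's context, then load the incoming one's. */
void context_switch(PCB *current, PCB *next, CpuContext *cpu) {
    current->context = *cpu;  /* save the state of the process being paused     */
    *cpu = next->context;     /* restore the state of the process being resumed */
}

int main(void) {
    CpuContext cpu = {0};
    PCB a = { .pid = 1 }, b = { .pid = 2 };
    context_switch(&a, &b, &cpu);  /* switch the CPU from process 1 to process 2 */
    return 0;
}

The time spent doing this copying is pure overhead, which is why frequent context switches hurt performance.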
b) What are the Optimization Criteria for Scheduling in a System?
The optimization criteria for scheduling can include:
• CPU Utilization: Maximize the percentage of time the CPU is busy.
• Throughput: Maximize the number of processes that complete their execution in a
given time.
• Turnaround Time: Minimize the total time taken from process submission to
completion.
• Waiting Time: Minimize the total time a process spends waiting in the ready queue.
• Response Time: Minimize the time from submission to the first response for
interactive processes.
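For example (the burst times below are invented purely for illustration), turnaround and waiting time can be computed for a simple first-come-first-served schedule as follows:

#include <stdio.h>

/* Hypothetical FCFS example: three processes all arriving at time 0
   with made-up CPU bursts. Turnaround = completion - arrival;
   Waiting = turnaround - burst. */
int main(void) {
    int burst[] = {24, 3, 3};
    int n = 3, time = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int waiting = time;        /* time already spent in the ready queue */
        time += burst[i];          /* completion time under FCFS            */
        int turnaround = time;     /* arrival time is 0 for every process   */
        total_wait += waiting;
        total_tat  += turnaround;
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           total_wait / n, total_tat / n);
    return 0;
}

With these numbers the average waiting time is 17 and the average turnaround time is 27 time units; scheduling the two short bursts first would lower both, which is exactly the kind of difference these criteria are meant to capture.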
4. Briefly Explain the Scheduling Criteria for Comparing Different CPU Scheduling
Algorithms
The scheduling criteria for comparing CPU scheduling algorithms include:
• Fairness: Ensuring all processes receive an equitable share of the CPU.
• Efficiency: Minimizing CPU idle time and maximizing throughput.
• Response Time: Particularly important for interactive systems, aiming for quick
responses to user inputs.
• Turnaround Time: Total time taken from submission to completion; shorter is better.
• Waiting Time: Time processes spend in the ready queue; shorter times are preferred.
• Starvation avoidance: Ensuring that no process is indefinitely delayed or denied CPU access.
• Preemption: The ability of the scheduler to interrupt a running process to allow
another process to run.
7. Implementing a Solution to the Readers-Writers Problem with Semaphores
The Readers-Writers problem involves synchronizing access to a shared resource (like a
database) where multiple processes may read from it simultaneously, but only one process
can write to it at any time. Here’s a semaphore-based solution:
1. Shared variables:
o readCount: An integer counting the readers currently accessing the resource.
o mutex: A semaphore ensuring mutual exclusion when updating readCount.
o writeLock: A semaphore giving writers exclusive access (acquired by the first reader and released by the last).
2. Implementation:
semaphore mutex = 1;     // Protects readCount
semaphore writeLock = 1; // Allows only one writer at a time
int readCount = 0;       // Number of readers currently reading

void Reader() {
    wait(mutex);              // Enter critical section on readCount
    readCount++;              // One more reader
    if (readCount == 1)       // First reader
        wait(writeLock);      // Block writers
    signal(mutex);            // Exit critical section

    // Read from the shared resource

    wait(mutex);              // Enter critical section on readCount
    readCount--;              // One less reader
    if (readCount == 0)       // Last reader
        signal(writeLock);    // Allow writers again
    signal(mutex);            // Exit critical section
}

void Writer() {
    wait(writeLock);          // Exclusive access: blocks other writers and readers
    // Write to the shared resource
    signal(writeLock);        // Allow others to access the resource
}
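Note that this is the reader-preference variant of the solution: as long as new readers keep arriving, readCount never drops to zero and a waiting writer can starve. Writer-preference and fair variants add extra semaphores to prevent this.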
8. Semaphore and Mutex in Context to Process Synchronization
• Semaphore:
o A synchronization primitive that allows processes to control access to shared
resources. It can be binary (0 or 1) or counting (can have a value greater than
1). Operations include wait() (decrement) and signal() (increment). Semaphores
can be used to manage access by multiple processes.
• Mutex (Mutual Exclusion):
o A locking mechanism used to enforce limits on access to a resource when
multiple processes are executing. A mutex allows only one thread to access the
resource at a time. It ensures mutual exclusion by requiring processes to lock
the mutex before accessing the resource and unlocking it afterward.
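As an illustrative comparison that is not from the original text (it uses the standard POSIX threads and semaphore APIs), a counting semaphore can admit several threads at once while a mutex admits exactly one:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t slots;                 /* counting semaphore: up to 3 threads at once */
pthread_mutex_t lock;        /* mutex: exactly one thread at a time         */
int shared_counter = 0;

void *worker(void *arg) {
    (void)arg;
    sem_wait(&slots);             /* acquire one of the 3 slots             */
    pthread_mutex_lock(&lock);    /* exclusive access to the shared counter */
    shared_counter++;
    pthread_mutex_unlock(&lock);
    sem_post(&slots);             /* release the slot                       */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, 3);            /* counting semaphore initialised to 3 */
    pthread_mutex_init(&lock, NULL);
    for (int i = 0; i < 5; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
    printf("counter = %d\n", shared_counter);   /* prints 5 */
    sem_destroy(&slots);
    pthread_mutex_destroy(&lock);
    return 0;
}

A further practical difference is ownership: a mutex is intended to be unlocked by the same thread that locked it, whereas a semaphore may be signalled by any thread or process.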
9. Producer-Consumer Problem and its Solution Using Semaphores
The Producer-Consumer problem involves two types of processes: producers that generate
data and consumers that use it. A bounded buffer is shared between them.
Solution Using Semaphores:
1. Semaphores:
o empty: Counts the number of empty slots in the buffer.
o full: Counts the number of filled slots in the buffer.
o mutex: Ensures mutual exclusion while accessing the buffer.
2. Implementation:
semaphore empty = N;   // Number of empty slots in the buffer
semaphore full = 0;    // Number of filled slots in the buffer
semaphore mutex = 1;   // Protects access to the buffer
int buffer[N];         // Shared buffer
int in = 0;            // Next empty slot
int out = 0;           // Next filled slot

void Producer() {
    int item;
    while (true) {
        // Produce an item
        wait(empty);               // Decrement empty count
        wait(mutex);               // Enter critical section
        buffer[in] = item;         // Add item to buffer
        in = (in + 1) % N;         // Circular increment
        signal(mutex);             // Exit critical section
        signal(full);              // Increment full count
    }
}

void Consumer() {
    int item;
    while (true) {
        wait(full);                // Decrement full count
        wait(mutex);               // Enter critical section
        item = buffer[out];        // Remove item from buffer
        out = (out + 1) % N;       // Circular increment
        signal(mutex);             // Exit critical section
        signal(empty);             // Increment empty count
        // Consume the item
    }
}
Justification for Mutual Exclusion:
• The mutex semaphore ensures that only one process can access the buffer at a time,
preventing race conditions when accessing shared data. The empty and full
semaphores ensure that the producer does not overfill the buffer and the consumer
does not consume from an empty buffer.
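The order of the wait() calls also matters: the producer must wait on empty before mutex, and the consumer on full before mutex. If a process instead held mutex while blocked on empty or full, the other process could never enter its critical section to signal that semaphore, and the two would deadlock.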
10. Peterson’s Solution for Avoiding Race Condition
Peterson’s solution is a classical algorithm for achieving mutual exclusion for two processes.
It uses two shared variables:
• flag[]: An array where each process indicates its intention to enter the critical section.
• turn: Indicates whose turn it is to enter the critical section.
Implementation:
int flag[2] = {0, 0}; // Flags for two processes
int turn; // Variable to indicate whose turn

void Process0() {
    flag[0] = 1;                        // Indicate intent to enter critical section
    turn = 1;                           // Give the turn to Process 1
    while (flag[1] == 1 && turn == 1);  // Wait while Process 1 is interested and it is its turn

    // Critical section

    flag[0] = 0;                        // Exit critical section
}

void Process1() {
    flag[1] = 1;                        // Indicate intent to enter critical section
    turn = 0;                           // Give the turn to Process 0
    while (flag[0] == 1 && turn == 0);  // Wait while Process 0 is interested and it is its turn
    // Critical section
    flag[1] = 0;                        // Exit critical section
}
Justification:
• Peterson's solution guarantees mutual exclusion by ensuring that only one process
can enter the critical section at a time. The use of flag indicates intent, and the turn
variable helps manage access between the two processes, thus avoiding race
conditions.
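This reasoning assumes that the writes to flag and turn become visible to the other process in program order; on modern optimizing compilers and out-of-order processors, Peterson's solution additionally needs the shared variables to be atomic or separated by memory barriers.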
11. Monitors and Binary Semaphore
Monitors:
• A monitor is a higher-level synchronization construct that allows threads to have
mutually exclusive access to shared resources. It encapsulates the shared data and
the procedures that operate on that data, ensuring that only one thread can execute
any of the monitor’s procedures at a time. Monitors automatically manage locking
and unlocking, making synchronization easier to implement.
Binary Semaphore:
• A binary semaphore is a special type of semaphore that can take only two values: 0
and 1. It is used to control access to a single shared resource, functioning like a
mutex. If the value is 1, a process can enter the critical section by setting it to 0; if it’s
0, the process must wait.
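Monitors are a language-level feature (for example, Java's synchronized methods together with wait() and notify()). C has no monitor construct, but purely as an illustrative approximation the same pattern can be hand-built from a POSIX mutex and condition variable:

#include <pthread.h>

/* A hand-rolled "monitor" around a counter: the mutex supplies the mutual
   exclusion a monitor provides automatically, and the condition variable
   plays the role of a monitor condition. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    int             value;
} CounterMonitor;

void counter_init(CounterMonitor *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->nonzero, NULL);
    m->value = 0;
}

void counter_increment(CounterMonitor *m) {   /* "monitor procedure" */
    pthread_mutex_lock(&m->lock);
    m->value++;
    pthread_cond_signal(&m->nonzero);         /* wake one waiting decrementer */
    pthread_mutex_unlock(&m->lock);
}

void counter_decrement(CounterMonitor *m) {   /* waits until value > 0 */
    pthread_mutex_lock(&m->lock);
    while (m->value == 0)
        pthread_cond_wait(&m->nonzero, &m->lock);
    m->value--;
    pthread_mutex_unlock(&m->lock);
}

int main(void) {
    CounterMonitor m;
    counter_init(&m);
    counter_increment(&m);
    counter_decrement(&m);
    return 0;
}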
12. Deadlock and Resource Allocation Graph
Deadlock:
• A deadlock occurs when two or more processes are unable to proceed because each is waiting for a resource held by another. In a deadlock, a set of processes is blocked because each process holds at least one resource and waits for another resource held by a different process in the set.
Justification of the Statement:
• A cycle in a resource allocation graph is a necessary condition for deadlock but not a sufficient one when resource types have multiple instances. If some resource type in the cycle has other instances held by processes outside the cycle, one of those processes may finish and release its instance, allowing the processes in the cycle to proceed. Only when every resource type involved has a single instance does a cycle guarantee deadlock. Thus a cycle suggests the potential for deadlock, but it is not by itself a definitive indication of one.
13. Necessary Conditions for Deadlock to Occur
For a deadlock to occur, the following four conditions must hold simultaneously:
1. Mutual Exclusion: At least one resource must be held in a non-shareable mode. If
another process requests that resource, the requesting process must be delayed until
the resource is released.
2. Hold and Wait: A process holding at least one resource is waiting to acquire
additional resources that are currently being held by other processes.
3. No Preemption: Resources cannot be forcibly taken from a process holding them. A
resource can only be released voluntarily by the process after it has completed its
task.
4. Circular Wait: A set of processes is waiting for resources in such a way that each process waits for a resource held by the next process in the cycle.
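As a concrete illustration that is not part of the original answer, all four conditions arise together in the classic two-lock example below: each thread holds one mutex (mutual exclusion, hold and wait), neither lock can be taken away (no preemption), and each thread waits for the lock the other holds (circular wait):

#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);   /* holds A ...                            */
    pthread_mutex_lock(&B);   /* ... and waits for B                    */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *thread2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&B);   /* holds B ...                            */
    pthread_mutex_lock(&A);   /* ... and waits for A: possible deadlock */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);   /* may never return if the deadlock occurs */
    pthread_join(t2, NULL);
    return 0;
}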
14. Differentiate Between Starvation and Deadlock
• Deadlock: A set of processes is permanently blocked because each holds a resource and waits for a resource held by another process in the set; none of them can proceed without outside intervention.
• Starvation: A process waits indefinitely because the scheduler or resource allocator repeatedly favours other processes (for example, under strict priority scheduling), even though the system as a whole keeps making progress.
• Key difference: In a deadlock every process in the set is stuck and recovery or prevention techniques are needed, whereas in starvation only the neglected process suffers, and the problem can usually be solved by ageing or fair scheduling.
15. Deadlock Prevention Strategies


Deadlock prevention strategies aim to ensure that at least one of the four necessary
conditions for deadlock cannot hold. Here are the main strategies:
1. Eliminate Mutual Exclusion:
o Allow resources to be shared where possible, though this is not always feasible
for non-shareable resources like printers.
2. Eliminate Hold and Wait:
o Require processes to request all resources they need at once. If they cannot
obtain all required resources, they must release any currently held resources
and reattempt later.
3. Eliminate No Preemption:
o Allow preemption of resources. If a process is holding some resources and
requests additional resources that cannot be allocated, it must release its
current resources, allowing other processes to use them.
4. Eliminate Circular Wait:
o Impose a strict ordering of resource types. Each process can only request
resources in a predefined order, ensuring that there are no cycles in the
resource allocation graph.
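As an illustrative sketch of the fourth strategy (the lock names here are hypothetical), imposing a fixed global order in which every thread acquires its mutexes makes a circular wait impossible:

#include <pthread.h>

/* Fixed global ordering of resources: every thread locks A before B. */
pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);    /* every thread requests A first ...      */
    pthread_mutex_lock(&B);    /* ... and B second, so no cycle can form */
    /* use both resources */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Because no thread ever holds B while waiting for A, the circular-wait condition can never hold and this particular deadlock cannot occur.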
