Unit 3-1 Operating System

The document discusses CPU scheduling and deadlock management. It covers topics such as scheduling criteria, scheduling algorithms, deadlock characterization, and methods for handling deadlocks. It also includes examples and explanations of scheduling algorithms such as first-come first-served, shortest-job-first, and priority scheduling.

Unit 3 CPU Scheduling and Deadlock Management

• CPU Scheduling: Scheduling Criteria - Scheduling Algorithms.
• Deadlocks: Deadlock Characterization - Methods for
Handling Deadlocks - Deadlock Prevention – Deadlock
Avoidance - Deadlock Detection - Recovery from Deadlock.
• Case Study: Real Time CPU scheduling
CPU Scheduling
• CPU scheduling is the process of allowing one process to use the CPU while another process is delayed (on standby) due to the unavailability of a resource such as I/O, thus making full use of the CPU.
CPU Scheduling
• In a single-processor system, only one process can run at a time.
Others must wait until the CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running
at all times, to maximize CPU utilization.
• Several processes are kept in memory at one time. When one process
has to wait, the operating system takes the CPU away from that
process and gives the CPU to another process. This pattern continues.
Every time one process has to wait, another process can take over use
of the CPU.
CPU Burst Time
• CPU burst time, also referred to as "execution time", is the amount of CPU time the process requires to complete its execution.
• It is the amount of processing time required by a process to execute a specific task or unit of a job.
I/O Burst Time
• The amount of time a process spends on input/output before it next needs the CPU.
CPU –I/O Burst Cycle
• Process execution begins with a
CPU burst.
• That is followed by an I/O burst,
which is followed by another
CPU burst, then another I/O
burst, and so on.
• Eventually, the final CPU burst
ends with a system request to
terminate execution
CPU Scheduler(Short-Term Scheduler)
Whenever the CPU becomes idle, the operating system must select one
of the processes in the ready queue to be executed.
The selection process is carried out by the short-term scheduler, or CPU
scheduler.
CPU-scheduling decisions
CPU-scheduling decisions may take place under the following four
circumstances:
1. When a process switches from the running state to the waiting state
(for example, as the result of an I/O request or an invocation of
wait() for the termination of a child process)
2. When a process switches from the running state to the ready state
(for example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state
(for example, at completion of I/O)
4. When a process terminates
Preemptive Scheduling
• Preemptive scheduling is used when a process switches from the
running state to the ready state or from the waiting state to the ready
state.
• The resources (mainly CPU cycles) are allocated to the process for a
limited amount of time and then taken away, and the process is again
placed back in the ready queue if that process still has CPU burst time
remaining. That process stays in the ready queue till it gets its next
chance to execute.
Non-Preemptive Scheduling
• Non-preemptive scheduling is used when a process terminates, or a process switches from running to the waiting state.
• In this scheduling, once the resources (CPU cycles) are allocated to a
process, the process holds the CPU till it gets terminated or reaches a
waiting state.
Dispatcher
• The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
Scheduling Criteria
1. CPU Utilization
• CPU utilization is the percentage of time the CPU is busy processing a
task.
• Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).
2. Throughput
• The number of processes that are completed per time unit
3. Turnaround time
• Time it takes for a task or process to complete from the moment it is
submitted to the system until it is fully processed and ready for output.
• Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing
I/O.
4. Waiting time
• Waiting time is the amount of time a task or process waits in the ready
queue before it is processed by the CPU
5.Response time
• Response Time measures the time it takes for the first response to be
received after a task has been initiated.
Scheduling Algorithms
• First-Come, First-Served Scheduling
• Shortest-Job-First Scheduling
• Priority Scheduling
• Round-Robin Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling
First-Come, First-Served Scheduling
• The process that requests the CPU first is allocated the CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO
queue.
• When a process enters the ready queue, its PCB is linked onto the tail
of the queue. When the CPU is free, it is allocated to the process at
the head of the queue.
• The code for FCFS scheduling is simple to write and understand.
• The average waiting time under the FCFS policy is often quite long
Example
Consider three processes that arrive at time 0 with CPU burst times of P1 = 24, P2 = 3, and P3 = 3 milliseconds.
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, the Gantt chart is:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
The waiting time is
• 0 milliseconds for process P1,
• 24 milliseconds for process P2, and
• 27 milliseconds for process P3.
• The average waiting time is (0 + 24 + 27)/3 = 17 milliseconds
Example
If the processes arrive in the order P2, P3, P1, and are served in FCFS order, the Gantt chart is:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
The waiting time is
• 0 milliseconds for process P2,
• 3 milliseconds for process P3, and
• 6 milliseconds for process P1.
• The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds.
• The average waiting time under an FCFS policy is generally not
minimal and may vary substantially.
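Both FCFS examples can be reproduced with a minimal sketch (illustrative only; it assumes all processes arrive at time 0 in the order given, and the function name is hypothetical):

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process under FCFS, assuming all processes
    arrive at time 0 in the order given."""
    waits, clock = [], 0
    for burst in burst_times:
        waits.append(clock)  # a process waits until all earlier arrivals finish
        clock += burst
    return waits

# Order P1, P2, P3 with bursts 24, 3, 3 -> waits [0, 24, 27], average 17 ms
print(fcfs_waiting_times([24, 3, 3]))
# Order P2, P3, P1 -> waits [0, 3, 6], average 3 ms
print(fcfs_waiting_times([3, 3, 24]))
```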
Convoy effect
The situation in which all other processes wait for one big process to get off the CPU is called the convoy effect.
• FCFS scheduling algorithm is nonpreemptive.
• Once the CPU has been allocated to a process, that process keeps the
CPU until it releases the CPU, either by terminating or by requesting
I/O.
• The FCFS algorithm is thus particularly troublesome for time-sharing
systems, where it is important that each user get a share of the CPU
at regular intervals. It would be disastrous to allow one process to
keep the CPU for an extended period.
Shortest-Job-First Scheduling
• Shortest Job First (SJF) is a CPU scheduling algorithm in which the processor executes first the job that has the smallest execution time.
• In the Shortest Job First algorithm, the processes are scheduled according to their burst times.
Example
Consider four processes that arrive at time 0 with CPU burst times of P1 = 6, P2 = 8, P3 = 7, and P4 = 3 milliseconds. SJF schedules them as:
| P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
The waiting time is
• 3 milliseconds for process P1,
• 16 milliseconds for process P2,
• 9 milliseconds for process P3, and
• 0 milliseconds for process P4.
Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
Under the FCFS scheduling scheme, by contrast, the average waiting time would be 10.25 milliseconds.
• The SJF scheduling algorithm is provably optimal, in that it gives the
minimum average waiting time for a given set of processes.
• Moving a short process before a long one decreases the waiting time
of the short process more than it increases the waiting time of the
long process. Consequently, the average waiting time decreases.
• SJF algorithm can be either preemptive or nonpreemptive.
• The choice arises when a new process arrives at the ready queue
while a previous process is still executing.
• The next CPU burst of the newly arrived process may be shorter than
what is left of the currently executing process.
• A preemptive SJF algorithm will preempt the currently executing
process, whereas a nonpreemptive SJF algorithm will allow the
currently running process to finish its CPU burst.
• Preemptive SJF scheduling is sometimes called shortest-remaining-
time-first scheduling.
Non-Preemptive SJF
In non-preemptive scheduling, once the CPU is allocated to a process, the process holds it until it reaches a waiting state or terminates.
Consider five processes with the following arrival and burst times (in ms): P1 (arrival 2, burst 6), P2 (5, 2), P3 (1, 8), P4 (0, 3), P5 (4, 4).
• Step 0) At time = 0, P4 arrives and starts execution.
• Step 1) At time = 1, process P3 arrives. But P4 still needs 2 execution units to complete, so it continues execution.
• Step 2) At time = 2, process P1 arrives and is added to the waiting queue. P4 continues execution.
• Step 3) At time = 3, process P4 finishes its execution. The burst times of P3 and P1 are compared; P1 is executed because its burst time is lower.
• Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 continues execution.
• Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 continues execution.
• Step 6) At time = 9, process P1 finishes its execution. The burst times of P3, P5, and P2 are compared; P2 is executed because its burst time is the lowest.
• Step 7) At time = 10, P2 is executing while P3 and P5 are in the waiting queue.
• Step 8) At time = 11, process P2 finishes its execution. The burst times of P3 and P5 are compared; P5 is executed because its burst time is lower.
• Step 9) At time = 15, process P5 finishes its execution.
• Step 10) At time = 23, process P3 finishes its execution.
Average waiting time
Wait time of
P4 = 0 − 0 = 0
P1 = 3 − 2 = 1
P2 = 9 − 5 = 4
P5 = 11 − 4 = 7
P3 = 15 − 1 = 14
Average waiting time = (0 + 1 + 4 + 7 + 14)/5 = 26/5 = 5.2 ms
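The walkthrough above can be checked with a small simulation (a sketch; it assumes the five processes P1 (arrival 2, burst 6), P2 (5, 2), P3 (1, 8), P4 (0, 3), P5 (4, 4) used in the steps, and the function name is illustrative):

```python
def sjf_nonpreemptive(procs):
    """procs: dict name -> (arrival, burst). Returns dict name -> waiting time.
    Among the processes that have arrived, the one with the shortest burst
    runs to completion before the next choice is made."""
    remaining = dict(procs)
    waits, clock = {}, 0
    while remaining:
        ready = {n: ab for n, ab in remaining.items() if ab[0] <= clock}
        if not ready:  # CPU idle until the next arrival
            clock = min(a for a, _ in remaining.values())
            continue
        name = min(ready, key=lambda n: ready[n][1])  # shortest burst first
        arrival, burst = remaining.pop(name)
        waits[name] = clock - arrival
        clock += burst
    return waits

procs = {"P1": (2, 6), "P2": (5, 2), "P3": (1, 8), "P4": (0, 3), "P5": (4, 4)}
w = sjf_nonpreemptive(procs)
print(w, sum(w.values()) / 5)  # waits P4=0, P1=1, P2=4, P5=7, P3=14; average 5.2
```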
• Class work Problem
Preemptive SJF
• In preemptive SJF scheduling, jobs are put into the ready queue as they arrive. The process with the shortest burst time begins execution. If a process with an even shorter burst time arrives, the currently running process is preempted, and the shorter job is allocated the CPU.
• The same five processes are used: P1 (arrival 2, burst 6), P2 (5, 2), P3 (1, 8), P4 (0, 3), P5 (4, 4).
• Step 0) At time = 0, P4 arrives and starts execution.
• Step 1) At time = 1, process P3 arrives. But P4 has a shorter remaining time, so it continues execution.
• Step 2) At time = 2, process P1 arrives with burst time 6. This is more than P4's remaining time, so P4 continues execution.
• Step 3) At time = 3, process P4 finishes its execution. The burst times of P3 and P1 are compared; P1 is executed because its burst time is lower.
• Step 4) At time = 4, process P5 arrives. The remaining times of P3, P5, and P1 are compared; P5 is executed because its remaining time is the lowest. P1 is preempted.
• Step 5) At time = 5, process P2 arrives. The remaining times of P1, P2, P3, and P5 are compared; P2 is executed because its remaining time is the least. P5 is preempted.
• Step 6) At time = 6, P2 is executing.
• Step 7) At time = 7, P2 finishes its execution. The remaining times of P1, P3, and P5 are compared; P5 is executed because its remaining time is the lowest.
• Step 8) At time = 10, P5 finishes its execution. The remaining times of P1 and P3 are compared; P1 is executed because its remaining time is lower.
• Step 9) At time = 15, P1 finishes its execution. P3 is the only process left; it starts execution.
• Step 10) At time = 23, P3 finishes its execution.
Average waiting time
Wait time of
P4 = 0 − 0 = 0
P1 = (3 − 2) + (10 − 4) = 7
P2 = 5 − 5 = 0
P5 = (4 − 4) + (7 − 5) = 2
P3 = 15 − 1 = 14
Average waiting time = (0 + 7 + 0 + 2 + 14)/5 = 23/5 = 4.6 ms
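The preemptive trace can be verified with a unit-time simulation (a sketch under the same assumed process set P1 (2, 6), P2 (5, 2), P3 (1, 8), P4 (0, 3), P5 (4, 4); the function name is illustrative):

```python
def srtf(procs):
    """Shortest-remaining-time-first (preemptive SJF), simulated one time
    unit at a time. procs: dict name -> (arrival, burst).
    Returns dict name -> waiting time."""
    remaining = {n: b for n, (a, b) in procs.items()}
    arrival = {n: a for n, (a, b) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        name = min(ready, key=lambda n: remaining[n])  # least remaining time runs
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = clock
    # waiting time = completion - arrival - burst
    return {n: finish[n] - a - b for n, (a, b) in procs.items()}

procs = {"P1": (2, 6), "P2": (5, 2), "P3": (1, 8), "P4": (0, 3), "P5": (4, 4)}
w = srtf(procs)
print(w, sum(w.values()) / 5)  # P4=0, P1=7, P2=0, P5=2, P3=14; average 4.6
```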


Priority Scheduling
• A priority is associated with each process, and the
CPU is allocated to the process with the highest
priority.
• Equal-priority processes are scheduled in FCFS order.
Example
Consider processes P1–P5, all arriving at time 0, with burst times 10, 1, 2, 1, 5 ms and priorities 3, 1, 4, 5, 2 (a smaller number indicates a higher priority). The Gantt chart is:
| P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |
The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds.
Internal or external Priorities
• Priorities can be defined either internally or externally. Internally
defined priorities use some measurable quantity or quantities to
compute the priority of a process. For example, time limits, memory
requirements, the number of open files, and the ratio of average I/O
burst to average CPU burst have been used in computing priorities.
• External priorities are set by criteria outside the operating system,
such as the importance of the process, the type and amount of funds
being paid for computer use, the department sponsoring the work,
and other, often political, factors
Drawback
• A major problem with priority scheduling algorithms is indefinite
blocking, or starvation.
• A priority scheduling algorithm can leave some low priority processes
waiting indefinitely.
• In a heavily loaded computer system, a steady stream of higher-
priority processes can prevent a low-priority process from ever getting
the CPU.
Solution
• A solution to the problem of indefinite blockage of low-priority
processes is aging.
• Aging involves gradually increasing the priority of processes that wait
in the system for a long time.
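As a toy illustration of aging, the sketch below periodically raises the priority of any process that has waited past a threshold (all names, the threshold, and the boost amount are hypothetical; a smaller number means a higher priority):

```python
def apply_aging(priorities, waited, threshold=10, boost=1):
    """Hypothetical aging rule: any process that has waited at least
    `threshold` time units has its priority number lowered (i.e., its
    priority raised). Priority numbers never go below 0."""
    return {
        name: max(0, prio - boost) if waited[name] >= threshold else prio
        for name, prio in priorities.items()
    }

prios = {"P1": 5, "P2": 3, "P3": 7}
waited = {"P1": 12, "P2": 2, "P3": 40}
print(apply_aging(prios, waited))  # P1 and P3 are boosted: {'P1': 4, 'P2': 3, 'P3': 6}
```

Called once per scheduling interval, this guarantees a starving process eventually reaches the top priority.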
Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is designed especially for
time sharing systems. It is similar to FCFS scheduling, but preemption
is added to enable the system to switch between processes. A small
unit of time, called a time quantum or time slice, is defined.
• A time quantum is generally from 10 to 100 milliseconds in length.
• To implement RR scheduling, we again treat the ready queue as a
FIFO queue of processes. New processes are added to the tail of the
ready queue. The CPU scheduler picks the first process from the ready
queue, sets a timer to interrupt after 1 time quantum, and dispatches
the process.
Example
Consider the same three processes (burst times P1 = 24, P2 = 3, and P3 = 3 milliseconds) with a time quantum of 4 milliseconds.
Process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2.
Process P2 does not need 4 milliseconds, so it quits before its time quantum expires. The CPU is then given to the next process, process P3.
Gantt chart: | P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–30) |
The average waiting time for this schedule:
• P1 waits for 6 milliseconds (10 - 4),
• P2 waits for 4 milliseconds, and
• P3 waits for 7 milliseconds.
• Thus, the average waiting time is 17/3 = 5.66 milliseconds.
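The round-robin schedule above can be sketched with a FIFO queue (illustrative only; it assumes all processes arrive at time 0 in the order given):

```python
from collections import deque

def round_robin_waits(bursts, quantum):
    """Round-robin with all processes arriving at time 0 in the given order.
    bursts: dict name -> burst time. Returns dict name -> waiting time."""
    queue = deque(bursts)        # ready queue (FIFO of process names)
    remaining = dict(bursts)
    clock, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queue.append(name)   # preempted: back to the tail of the queue
    # waiting time = completion - burst (all arrivals at time 0)
    return {n: finish[n] - bursts[n] for n in bursts}

w = round_robin_waits({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(w, sum(w.values()) / 3)  # P1=6, P2=4, P3=7; average 17/3 = 5.66 ms
```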
Multilevel Queue Scheduling
• A multilevel queue scheduling algorithm partitions the ready queue
into several separate queues.
• The processes are permanently assigned to one queue, generally
based on some property of the process, such as memory size, process
priority, or process type.
• Each queue has its own scheduling algorithm.
• For example, separate queues might be used for foreground and
background processes.
• The foreground queue might be scheduled by an RR algorithm, while
the background queue is scheduled by an FCFS algorithm
Example of a multilevel queue scheduling algorithm with five queues,
listed below in order of priority:
• 1. System processes
• 2. Interactive processes
• 3. Interactive editing processes
• 4. Batch processes
• 5. Student processes
Multilevel Feedback Queue Scheduling
• The multilevel feedback queue scheduling algorithm, in contrast,
allows a process to move between queues. The idea is to separate
processes according to the characteristics of their CPU bursts.
• If a process uses too much CPU time, it will be moved to a lower-
priority queue.
• This scheme leaves I/O-bound and interactive processes in the higher-
priority queues.
• In addition, a process that waits too long in a lower-priority queue
may be moved to a higher-priority queue. This form of aging prevents
starvation.
• Multilevel feedback queue scheduler with three queues, numbered
from 0 to 2.
• The scheduler first executes all processes in queue 0. Only when
queue 0 is empty will it execute processes in queue 1.
• Similarly, processes in queue 2 will be executed only if queues 0 and 1
are empty. A process that arrives for queue 1 will preempt a process
in queue 2.
• A process in queue 1 will in turn be preempted by a process arriving
for queue 0.
• A process entering the ready queue is put in queue 0.
• A process in queue 0 is given a time quantum of 8 milliseconds. If it
does not finish within this time, it is moved to the tail of queue 1.
• If queue 0 is empty, the process at the head of queue 1 is given a
quantum of 16 milliseconds.
• If it does not complete, it is preempted and is put into queue 2.
Processes in queue 2 are run on an FCFS basis but are run only when
queues 0 and 1 are empty.
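The demotion rule for this three-queue scheduler can be sketched by tracing a single CPU-bound process through the queues (a simplified illustration with no competing processes; the function name is hypothetical):

```python
def mlfq_segments(burst, quanta=(8, 16)):
    """Trace of one process through the three-queue feedback scheduler
    described above: quantum 8 in queue 0, quantum 16 in queue 1, then
    FCFS in queue 2. Returns a list of (queue, time_run) segments."""
    segments, left = [], burst
    for level, q in enumerate(quanta):
        if left == 0:
            break
        run = min(q, left)
        segments.append((level, run))
        left -= run          # unfinished work causes demotion to the next queue
    if left > 0:             # still unfinished: runs to completion in queue 2
        segments.append((2, left))
    return segments

print(mlfq_segments(30))  # [(0, 8), (1, 16), (2, 6)]
print(mlfq_segments(5))   # [(0, 5)] - finishes within its first quantum
```

A short (interactive) burst finishes in queue 0 and keeps its high priority; only long CPU bursts sink to queue 2.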
Deadlocks
• Deadlock is a situation where a set of processes are blocked because
each process is holding a resource and waiting for another resource
acquired by some other process.
Deadlock Characterization
• In a deadlock, processes never finish executing, and system resources
are tied up, preventing other jobs from starting.
• Features that characterize deadlocks are (Conditions for Deadlock)
1. Mutual Exclusion
2. Hold and wait
3. No pre-emption
4. Circular Wait
• Mutual Exclusion: Only one process can use a resource at any given
time i.e. the resources are non-sharable.
• Hold and wait: A process is holding at least one resource and is waiting to acquire additional resources held by other processes.
• No preemption: A resource cannot be forcibly taken from a process; it is released only voluntarily by the process holding it, after that process has finished with it.
• Circular Wait: A set of processes are waiting for each other in a
circular fashion.
Methods for Handling Deadlocks
1. Deadlock prevention
2. Deadlock avoidance
3. Deadlock detection and recovery
Deadlock Prevention
• It is important to prevent a deadlock before it can
occur. So, the system checks each transaction before it
is executed to make sure it does not lead to deadlock.
• Some deadlock prevention schemes that use timestamps in
order to make sure that a deadlock does not occur are given
as follows
1. Wait - Die Scheme
2. Wound - Wait Scheme
Wait - Die Scheme
• In the wait - die scheme, if a transaction T1 requests for a
resource that is held by transaction T2, one of the following
two scenarios may occur −
• TS(T1) < TS(T2) - If T1 is older than T2 i.e T1 came in the system
earlier than T2, then it is allowed to wait for the resource which
will be free when T2 has completed its execution.
• TS(T1) > TS(T2) - If T1 is younger than T2 i.e T1 came in the
system after T2, then T1 is killed. It is restarted later with the same
timestamp.
Wound - Wait Scheme
• In the wound - wait scheme, if a transaction T1 requests for a
resource that is held by transaction T2, one of the following
two possibilities may occur −
• TS(T1) < TS(T2) - If T1 is older than T2 i.e T1 came in the system
earlier than T2, then it is allowed to roll back T2 or wound T2.
Then T1 takes the resource and completes its execution. T2 is later
restarted with the same timestamp.
• TS(T1) > TS(T2) - If T1 is younger than T2 i.e T1 came in the
system after T2, then it is allowed to wait for the resource which
will be free when T2 has completed its execution.
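The two timestamp rules can be summarized as a pair of decision functions (a sketch; the return strings are illustrative labels, and a smaller timestamp means an older transaction):

```python
def wait_die(ts_requester, ts_holder):
    """Wait-die: an older requester waits; a younger requester dies
    (is killed and restarted later with the same timestamp)."""
    return "wait" if ts_requester < ts_holder else "die"

def wound_wait(ts_requester, ts_holder):
    """Wound-wait: an older requester wounds (rolls back) the holder;
    a younger requester waits."""
    return "wound" if ts_requester < ts_holder else "wait"

print(wait_die(1, 2), wait_die(2, 1))      # wait die
print(wound_wait(1, 2), wound_wait(2, 1))  # wound wait
```

In both schemes only one "direction" ever blocks, so no cycle of waiting transactions (and hence no deadlock) can form.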
Deadlock Avoidance
• It is better to avoid a deadlock rather than take
measures after the deadlock has occurred.
• 1. Resource Allocation Graph Algorithm - single
instance of a resource type.
• 2. Banker’s Algorithm – several instances of a
resource type.
A state is safe if the system can allocate resources to each process in some order and still avoid a deadlock.
Example

• Consider a system with twelve magnetic tape drives and three processes:
P0, P1, and P2.
• Process P0 requires ten tape drives,
• process P1 may need as many as four tape drives, and
• process P2 may need up to nine tape drives.
• Suppose that, at time t0, process P0 is holding five tape drives, process P1
is holding two tape drives, and process P2 is holding two tape drives.
(Thus, there are three free tape drives.)
• At time t0, the system is in a safe state. The sequence <P1, P0, P2>
satisfies the safety condition.
• Process P1 can immediately be allocated all its tape drives and then
return them (the system will then have five available tape drives);
then process P0 can get all its tape drives and return them (the
system will then have ten available tape drives); and finally process P2
can get all its tape drives and return them (the system will then have
all twelve tape drives available).
Resource allocation graph algorithm
• If no cycle exists, then the allocation of the resource will leave the system in a safe state.
• If a cycle is found, then the allocation will put the system in an unsafe state.
• Suppose that P2 requests R2. Although R2 is currently free, we cannot
allocate it to P2, since this action will create a cycle in the graph.
• A cycle, as mentioned, indicates that the system is in an unsafe state.
If P1 requests R2, and P2 requests R1, then a deadlock will occur

Resource-allocation graph for deadlock avoidance


unsafe state
Banker’s Algorithm
• Suppose there are n account holders in a bank and the total sum of their money is S. When a person applies for a loan, the bank first checks whether, after granting the loan, it could still satisfy every account holder's maximum possible demand; only then is the loan sanctioned. This ensures that if all the account holders come to withdraw their money, the bank can honor the withdrawals.
• Similarly, When a new process enters the system, it must declare the
maximum number of instances of each resource type that it may need.
This number may not exceed the total number of resources in the
system.
1. Available. A vector of length m indicates the number of available
resources of each type.
2. Max. An n × m matrix defines the maximum demand of each
process.
3. Allocation. An n × m matrix defines the number of resources of
each type currently allocated to each process.
4. Need. An n × m matrix indicates the remaining resource need of
each process.

n is the number of processes in the system and m is the number of resource types
Banker’s Algorithm(con..)
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that both
• Finish[i] == false
• Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
• Finish[i] = true
• Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Banker’s Algorithm(con..)
Resource-Request Algorithm
1. If Requesti ≤ Needi , go to step 2. Otherwise, raise an error condition,
since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since
the resources are not available.
3. Have the system pretend to have allocated the requested resources
to process Pi by modifying the state as follows:
• Available = Available − Requesti
• Allocationi = Allocationi + Requesti
• Needi = Needi − Requesti
Example
Step 1: Need = Max − Allocation
Safe sequence:
1. For process P0, Need = (3, 2, 1) and Available = (2, 1, 0).
Need ≤ Available is false, so the system moves to the next process.
2. For process P1, Need = (1, 1, 0) and Available = (2, 1, 0).
Need ≤ Available is true, so the request of P1 is granted.
Available = Available + Allocation = (2, 1, 0) + (2, 1, 2) = (4, 2, 2) (new Available)
3. For process P2, Need = (5, 0, 1) and Available = (4, 2, 2).
Need ≤ Available is false, so the system moves to the next process.
4. For process P3, Need = (7, 3, 3) and Available = (4, 2, 2).
Need ≤ Available is false, so the system moves to the next process.
5. For process P4, Need = (0, 0, 0) and Available = (4, 2, 2).
Need ≤ Available is true, so the request of P4 is granted.
Available = Available + Allocation = (4, 2, 2) + (1, 1, 2) = (5, 3, 4) (new Available)
6. Checking process P2 again: Need = (5, 0, 1), Available = (5, 3, 4).
Need ≤ Available is true, so the request of P2 is granted.
Available = Available + Allocation = (5, 3, 4) + (4, 0, 1) = (9, 3, 5) (new Available)
7. Checking process P3 again: Need = (7, 3, 3), Available = (9, 3, 5).
Need ≤ Available is true, so the request of P3 is granted.
Available = Available + Allocation = (9, 3, 5) + (0, 2, 0) = (9, 5, 5)
8. Checking process P0 again: Need = (3, 2, 1), Available = (9, 5, 5).
Need ≤ Available is true, so the request of P0 is granted.
• Safe sequence: <P1, P4, P2, P3, P0>
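The safety algorithm can be sketched directly from steps 1–4 (illustrative only; P0's Allocation row was not recoverable from the slides, so a placeholder of zeros is assumed, and because this scan always restarts from P0 it discovers a different but equally valid safe sequence, <P1, P4, P0, P2, P3>):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: returns (safe?, completion sequence).
    Repeatedly scans processes in index order, granting any whose
    Need <= Work, then releasing its allocation back into Work."""
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]  # Pi releases
                finish[i] = True
                sequence.append(f"P{i}")
                progressed = True
        if not progressed:
            return False, sequence  # no runnable process remains: unsafe
    return True, sequence

# Values from the example above; P0's allocation is a placeholder (not shown
# on the slides) - it does not affect whether a safe sequence exists.
allocation = [[0, 0, 0], [2, 1, 2], [4, 0, 1], [0, 2, 0], [1, 1, 2]]
need       = [[3, 2, 1], [1, 1, 0], [5, 0, 1], [7, 3, 3], [0, 0, 0]]
print(is_safe([2, 1, 0], allocation, need))  # (True, ['P1', 'P4', 'P0', 'P2', 'P3'])
```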
Deadlock Detection
• An algorithm that examines the state of the system to determine
whether a deadlock has occurred
• An algorithm to recover from the deadlock
Deadlock Detection (con..)
Single Instance of Each Resource Type
• If all resources have only a single instance, then we can define a
deadlock detection algorithm that uses a variant of the resource-
allocation graph, called a wait-for graph.
• A deadlock exists in the system if and only if the wait-for graph
contains a cycle.
• To detect deadlocks, the system needs to maintain the wait for graph
and periodically invoke an algorithm that searches for a cycle in the
graph.
[Figure: a resource-allocation graph and its corresponding wait-for graph]
Deadlock Detection (con..)
Several Instances of Each Resource Type
• The wait-for graph scheme is not applicable to a resource-allocation
system with multiple instances of each resource type.
• The algorithm employs several time-varying data structures that are
similar to those used in the banker’s algorithm
• Available. A vector of length m indicates the number of available
resources of each type.
• Allocation. An n × m matrix defines the number of resources of each
type currently allocated to each process.
• Request. An n × m matrix indicates the current request of each
process.
Deadlock Detection(con..)
Several Instances of Each Resource Type
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that both
• Finish[i] == false
• Requesti ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
• Finish[i] = true
• Go to step 2.
4. If Finish[i] == false for some i, the system is in a deadlocked state; moreover, each process Pi with Finish[i] == false is deadlocked. If Finish[i] == true for all i, the system is not deadlocked.
For example, consider five processes P0–P4 and three resource types with
Available = [0, 0, 0],
Allocation: P0 = [0, 1, 0], P1 = [2, 0, 0], P2 = [3, 0, 3], P3 = [2, 1, 1], P4 = [0, 0, 2],
Request: P0 = [0, 0, 0], P1 = [2, 0, 2], P2 = [0, 0, 0], P3 = [1, 0, 0], P4 = [0, 0, 2].
1. Initially, Work = [0, 0, 0] and Finish = [false, false, false, false, false].
2. i = 0 is selected, since Finish[0] = false and [0, 0, 0] ≤ [0, 0, 0].
3. Work = [0, 0, 0] + [0, 1, 0] = [0, 1, 0] and Finish = [true, false, false, false, false].
4. i = 2 is selected, since Finish[2] = false and [0, 0, 0] ≤ [0, 1, 0].
5. Work = [0, 1, 0] + [3, 0, 3] = [3, 1, 3] and Finish = [true, false, true, false, false].
6. i = 1 is selected, since Finish[1] = false and [2, 0, 2] ≤ [3, 1, 3].
7. Work = [3, 1, 3] + [2, 0, 0] = [5, 1, 3] and Finish = [true, true, true, false, false].
8. i = 3 is selected, since Finish[3] = false and [1, 0, 0] ≤ [5, 1, 3].
9. Work = [5, 1, 3] + [2, 1, 1] = [7, 2, 4] and Finish = [true, true, true, true, false].
10. i = 4 is selected, since Finish[4] = false and [0, 0, 2] ≤ [7, 2, 4].
Work = [7, 2, 4] + [0, 0, 2] = [7, 2, 6] and Finish = [true, true, true, true, true].
Since Finish[i] = true for all i, the system is not in a deadlocked state.
Completion sequence: <P0, P2, P1, P3, P4>
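The detection algorithm and this trace can be reproduced with a short sketch (the matrices are the ones used in the steps above; the function name is illustrative):

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm for multiple resource instances.
    Returns the list of deadlocked processes (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]  # Pi completes
                finish[i] = True
                progressed = True
    return [f"P{i}" for i in range(n) if not finish[i]]

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(detect_deadlock([0, 0, 0], allocation, request))  # [] - no deadlock

# After P2 requests one more instance of type C:
request[2] = [0, 0, 1]
print(detect_deadlock([0, 0, 0], allocation, request))  # ['P1', 'P2', 'P3', 'P4']
```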
• Suppose now that process P2 makes one additional request for an instance of type C, so its Request row becomes [0, 0, 1].
• The system is now deadlocked: the number of available resources is not sufficient to fulfill the requests of the other processes.
• Thus, a deadlock exists, consisting of processes P1, P2, P3, and P4.
Recovery from Deadlock
• When a detection algorithm determines that a deadlock exists,
several alternatives are available.
• One possibility is to inform the operator that a deadlock has occurred
and to let the operator deal with the deadlock manually.
• Another possibility is to let the system recover from the deadlock
automatically.
• There are two options for breaking a deadlock.
1. One is simply to abort one or more processes to break the circular
wait.
2. The other is to preempt some resources from one or more of the
deadlocked processes.
Recovery from Deadlock(Con…)
Process Termination
To eliminate deadlocks by aborting a process, we use one of two
methods
1. Abort all deadlocked processes. This method clearly will break the
deadlock cycle, but at great expense. The deadlocked processes
may have computed for a long time, and the results of these partial
computations must be discarded and probably will have to be
recomputed later.
2. Abort one process at a time until the deadlock cycle is eliminated.
This method incurs considerable overhead, since after each process
is aborted, a deadlock-detection algorithm must be invoked to
determine whether any processes are still deadlocked.
Recovery from Deadlock(Con…)
Resource Preemption
• To eliminate deadlocks using resource preemption, we successively
preempt some resources from processes and give these resources to
other processes until the deadlock cycle is broken.
• If preemption is required to deal with deadlocks, then three issues
need to be addressed:
1. Selecting a victim. Which resources and which processes are to be
preempted?
2. Rollback. If we preempt a resource from a process, what should be
done with that process? We must roll back the process to some safe
state and restart it from that state.
Recovery from Deadlock(Con…)
3. Starvation. How do we ensure that starvation will not occur? That is,
how can we guarantee that resources will not always be preempted
from the same process?
we must ensure that a process can be picked as a victim only a (small)
finite number of times.