Deadlocks PPT 3

The document discusses process synchronization in operating systems, focusing on cooperating processes that can affect each other, leading to potential data inconsistency. It explains concepts such as the critical section problem, semaphores, and the producer-consumer problem, emphasizing the need for synchronization to prevent race conditions and deadlocks. Solutions like Peterson's algorithm and semaphore types are introduced to manage access to shared resources effectively.

Operating Systems

Course Instructor
Engr. Iraj Shah
Process Synchronization
•A cooperating process is one that can affect or be affected by other processes executing in the system. There are multiple processes executing in the system; if the execution of one process can affect other processes and vice versa, we say that they are cooperating processes.
Cooperating processes can either directly share a logical address space (that is, both code and data), or be allowed to share data only through files or messages.

• These are the two ways sharing can take place between cooperating processes.
• Problem: The main problem here is that concurrent access to shared data may result in data inconsistency. Cooperating processes may share a region of data, so they are able to access and manipulate that shared data. If processes try to manipulate the data concurrently, the result may be data inconsistency. If two or more processes manipulate the same data at the same time, that can easily lead to data inconsistency. This is where we need process synchronization.
• Example: Producer-consumer problem: A producer produces information that is consumed by a consumer process. For example, a compiler may produce assembly code that is consumed by an assembler. An assembler in turn may produce object modules which are consumed by the loader. One solution to this problem is shared memory. To allow the producer and consumer processes to run concurrently, we must have a buffer that can be filled by the producer and emptied by the consumer. A producer can produce one item while the consumer is consuming another item.
• The producer and consumer must be synchronized so that the consumer does not try to consume an item that has not yet been produced. We also discussed two types of buffer:
1. Bounded Buffer: Places a practical limit on the size of the buffer. The consumer must wait if the buffer is empty, and the producer must also wait if the buffer is full, because the buffer is bounded.
2. Unbounded Buffer: Places no limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items and does not have to wait.
Bounded Buffer for Process Synchronization
• We know that we need to keep track of items that are present in the buffer because we need to know when the
buffer is full and we also need to know when the buffer is empty.
• Let’s say that we use a variable that counts the number of items present in the buffer. When something is added by the producer the variable is incremented, and when something is consumed by the consumer the variable is decremented.
• Let’s say the name of the variable is counter, and in the beginning we initialize it to 0. The counter is incremented every time we add a new item to the buffer (“counter++”), and decremented every time we remove an item from the buffer (“counter--”).
• counter = 0
• counter++
• counter--
• Example: Suppose that the value of counter is currently 5, and the producer and consumer execute the statements “counter++” and “counter--” concurrently, at the same time. That is, when the value of counter was 5, the producer wanted to increment it by 1 and at the same time the consumer wanted to decrement it by 1. What happens in this case? Following the execution of these two statements, the value of counter may be 4, 5 or 6.
• Say counter was 5 and counter-- completed first: 5 is decremented by 1 and becomes 4. Or say counter++ completed first: 5 is incremented by 1 and becomes 6.
• The correct value of counter should be 5, because the initial value was 5; when one item was produced it became 6, and when it was consumed it became 5 again. But will we get the correct result of 5 if the producer and the consumer execute concurrently? To understand this, we first need to see how these statements work at the machine level.
• “counter++” may be implemented in machine language as:
register1 = counter
register1 = register1 + 1
counter = register1
T0: Producer executes register1 = counter      {register1 = 5}
T1: Producer executes register1 = register1 + 1 {register1 = 6}
T2: Consumer executes register2 = counter      {register2 = 5}
T3: Consumer executes register2 = register2 - 1 {register2 = 4}
T4: Producer executes counter = register1      {counter = 6}
T5: Consumer executes counter = register2      {counter = 4}

• Now coming back to the producer and consumer working concurrently.


• So here both the producer and the consumer execute concurrently, and whichever gets hold of the CPU first executes. We should have gotten 5 as the final counter value, but in the table we got 6 or 4. We arrived at this incorrect state because we allowed the counter to be manipulated by the producer and the consumer concurrently. If we had allowed one of them to complete before letting the other do its work, we would not have arrived at this wrong result.
• So a situation like this where several processes access and manipulate the data concurrently and the outcome of
the execution depends on the particular order in which the access takes place is called Race Condition.
• To overcome this we need process synchronization.
The Critical Section Problem
•Consider a system consisting of n processes {P0, P1, …, Pn-1}.
•Each process has a segment of code called a Critical Section, in which the process may be changing common
variables, updating a table, writing a file and so on. That means whenever a process is accessing shared memory
and making some manipulations in that memory, then the code that is responsible for doing that particular
operation is known as critical section of the code.
•When a process is executing in its critical section, no other process is allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time. In other words, when one process is accessing or manipulating the shared data, no other process is allowed to access or manipulate that shared data.
•So the critical section problem is to design a protocol that the processes can use to cooperate in such a way that they are synchronized with each other.
•There are certain rules that must be followed by the processes whenever they are entering or operating in their critical sections.
1. Each process must request permission to enter its critical section.
2. The section of the code implementing this request is the entry section.
3. When the critical section is complete, it is followed by an exit section.
4. The remaining code is the remainder section.
• The following diagram shows the code structure of critical section.
• Here we have a do-while loop inside which we have the critical section and
the remainder section. First comes the entry section, then the critical
section, then the exit section that follows the critical section. The
remaining portion is called the remainder section. This is the general
structure of a typical process.
• A solution to the critical section problem must satisfy the following three
requirements:
1. Mutual Exclusion: If process Pi is executing in its critical section then no
other processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some
processes wish to enter their critical section, then only those processes
that are not executing in their remainder sections can participate in the
decision making process. And what is the decision? We want to take the
decision of which among the processes that wish to enter the critical
section can actually enter the critical section. And this decision should not
be postponed.
3. Bounded Waiting: There is a limit on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Peterson’s Solution
•A classic software-based solution to the critical section problem. It may not work correctly on modern computer architectures.
•However, it provides a good algorithmic description of solving the critical section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress and bounded waiting.
•This solution is restricted to two processes that alternate execution between their critical sections and remainder sections. Let’s call the processes Pi and Pj.
• It requires two data items to be shared between the two processes:
1. int turn: Indicates whose turn it is to enter the critical section. If turn == i, then it is process Pi’s turn to enter; if turn == j, it is process Pj’s turn to enter the critical section.
2. boolean flag[2]: An array of two Boolean variables, each of which can be either true or false. flag[i] indicates whether process Pi is ready to enter its critical section, and likewise flag[j] for Pj.
• When a process wants to enter its critical section, it sets its flag to true. Say Pj wants to enter; it sets flag[j] = true, indicating that Pj is ready to enter its critical section.
• When Pj wants to enter its critical section, it also sets the turn variable to the other process, i.e. turn = i. Why? This is how Peterson’s solution works.
• It is like a humble algorithm. We call it a humble algorithm because when a process wants to enter its critical section, instead of entering immediately it gives the turn to the other process. (Think of it as two friends politely letting each other board a bus first.)
int turn: indicates whose turn it is to enter its critical section.
boolean flag[2]: indicates whether a process is ready to enter its critical section.

• Let’s say Pi wants to enter its critical section. It sets flag[i] = true, which means Pi is ready to enter its critical section. Then it sets turn = j, since, as discussed above, this is a humble algorithm: even though Pi wants to enter its critical section, it sets turn = j, effectively saying “I will wait.”
• Next comes the while loop with the condition flag[j] == true and turn == j. This is the most important part. The two conditions are joined by the && operator, so the loop condition is true only when both are true; if either one is false, the loop breaks.
• While both conditions are true, the process keeps spinning in the loop. As soon as one of them turns false, the loop breaks and Pi executes its critical section. After its execution is complete, Pi sets flag[i] = false and then executes its remainder section.
• Symmetrically, if process Pj wants to enter its critical section, it sets flag[j] = true. Again it is humble and sets turn = i. Now if flag[i] == true and turn == i, control remains in Pj’s while loop, so Pj does not enter the critical section but keeps looping until one of the conditions becomes false.
• After one of the conditions turns false, Pj executes its critical section, and after its completion the remainder section is executed.
• Let’s say that both processes want to enter the critical section at the same time. Pi executes its first statement, setting flag[i] = true, which says that Pi wants to enter its critical section. Pj also sets flag[j] = true, meaning it too wants to execute in its critical section. Pi is then humble and sets turn = j; Pj is also humble and sets turn = i. The last value written is the final value of turn: it was set to j by process Pi but then set to i by process Pj, so the final value of turn is i.
• Now in Pi the loop condition is flag[j] == true and turn == j. Since the final value of turn is i, this condition is false in Pi; one part of the conjunction is false, so the whole condition is false. The while loop breaks, and Pi enters and executes its critical section. Once Pi has completed its critical section, it sets flag[i] = false; the loop condition in Pj then turns false, Pj’s loop exits, and Pj starts executing in its critical section.
Semaphores
•Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, which are used for process synchronization.
•Wait is denoted by ‘P’, which means ‘to test’.
•Signal is denoted by ‘V’, which means ‘to increment’.
Wait(): When a process performs the wait operation on semaphore S, it checks whether S is less than or equal to zero. If S is less than or equal to zero, the while loop has only a semicolon as its body, meaning there are no operations inside it; as long as the condition is true, control stays stuck spinning in this loop.
•Once S is no longer less than or equal to zero, the loop breaks and the value of S is decremented. That is what happens in the wait operation.
•The variable S takes care of which process may enter the critical section. If S is less than or equal to zero, it means some process is already using the shared resource or is already executing in the critical section. At that time no other process can be allowed to enter the critical section or use the resource.
•So a process checks the condition; if it is true, the process is stuck there and is not allowed to enter the critical section.
Signal(): It just increments the value of S. It is called when the process that was using the semaphore to enter its critical section or use a resource completes its operation and comes out of it.
Types of Semaphores
1. Binary Semaphore: The value of the semaphore can range only between 0 and 1. On some systems these are called mutex locks, as they are locks that provide mutual exclusion.
•Now, if the value is zero, that means the shared resource is being used by some other process (or some process is already in its critical section) and the currently requesting process has to wait.
•The value 1 means it is free, so the requesting process may either enter its critical section or use the shared resource it wants.
2. Counting Semaphore: Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
•Say we have several processes P1, P2 and P3 sharing a particular resource, and the shared resource has multiple instances R1 and R2, so multiple processes can use it: P1 can use R1 while P2 uses R2. But the number of instances is limited; the resource does not have infinitely many instances.
•To control that, we use a counting semaphore.
Disadvantages
•Requires Busy Waiting: While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This wastes CPU cycles that other processes might use productively.
•Solution: We can overcome this by modifying the wait() and signal() operations. Rather than engaging in busy waiting, the process can block itself: instead of spinning in the while loop, the process is placed in a waiting queue and its state is changed to the waiting state. A later signal() removes a process from the queue and wakes it up.
Deadlocks
•A deadlock is a situation where two or more processes sharing the same resources are effectively preventing each other from accessing those resources.
•In the diagram we can see that resource R1 is already allocated to P1, but P1 is requesting resource R2. R2 has already been allocated to P2, but P2 is requesting R1. This state is called a deadlock. Here P1 holds R1 and P2 holds R2. Until R1 is released by P1, P2 cannot get R1, and similarly, until R2 is released by P2, P1 cannot get R2.
Conditions for Deadlock (Deadlock Characterization)
•A deadlock may occur when there is:
1. Mutual Exclusion: Only one process can use a resource at a time. If another process requests that resource, it must be delayed until the resource has been released.

2. Hold and Wait: A process is holding one resource and waiting for another resource, without releasing the one it already holds. In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.

3. No Preemption: A resource cannot be forcibly preempted from a process; a process can only release a resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when Process 1 relinquishes it voluntarily after its execution is complete.
4. Circular Wait: A process is waiting for a resource held by a second process, which is waiting for a resource held by a third process, and so on, until the last process is waiting for a resource held by the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.
Methods for Handling Deadlock

1. Deadlock Ignorance: The ostrich algorithm means that deadlock is simply ignored and it is assumed that it will never occur. This is done because in some systems the cost of handling deadlocks is much higher than simply tolerating them, as they occur very rarely. So it is simply assumed that a deadlock will never occur, and the system is rebooted if one happens by chance. Windows and Linux use this method, and it is one of the most widely used. Example: when your system hangs, you simply shut down or restart your computer; this is deadlock ignorance. It may occur only once in 5 or 10 years. The advantage of ignoring deadlocks is that the complexity of the OS is not increased and performance (mainly speed) is not affected.
2. Deadlock Prevention: Prevent a deadlock before it can occur. The system checks each transaction before it is executed to make sure it cannot lead to deadlock. If there is even a slight possibility that a transaction may lead to deadlock, it is never allowed to execute.
3. Deadlock Avoidance: It is better to avoid a deadlock than to take measures after it has occurred. For this purpose we use the Banker’s Algorithm.
4. Deadlock Detection and Recovery: We first detect whether there is a deadlock, and when a deadlock is detected, we try to recover the system. Recovery includes:
•Killing the process or processes
•Resource preemption
Banker’s Algorithm (Deadlock Avoidance)
•We classify the state of the system as safe or unsafe.
•Safe means a deadlock will not occur and the processes can all execute successfully. Unsafe means a deadlock may occur and the processes may not all complete.
•The same algorithm is also used as a deadlock detection method.
Example #01: Banker’s Algorithm
Suppose you have five processes requesting resources A, B and C. The maximum need of each process is shown in the table below, and the total number of resources is A = 10, B = 5 and C = 7. Find the remaining need and the availability, and determine whether there will be any deadlock in the system.

• Allocation means how many resources have already been allocated to each process.
• Max need means how many resources each process may need in total. This is why we call it deadlock avoidance: each process tells the system at the start how many resources it will be needing.
• Available means how many resources remain after allocation: Available resources = Total − Allocated resources.
• Remaining need = Max need − Allocation.

Process   Allocation (A B C)   Max Need (A B C)   Remaining Need (A B C)
P1        0 1 0                7 5 3              7 4 3
P2        2 0 0                3 2 2              1 2 2
P3        3 0 2                9 0 2              6 0 0
P4        2 1 1                4 2 2              2 1 1
P5        0 0 2                5 3 3              5 3 1

• Summing the allocations, 7 units of A, 2 of B and 5 of C are already allocated. So the available A resources are 10 − 7 = 3, the available B resources are 5 − 2 = 3, and the available C resources are 7 − 5 = 2, giving an initial availability of (3 3 2). Similarly, the remaining need of P1 is 7 − 0 = 7 for A, 5 − 1 = 4 for B and 3 − 0 = 3 for C. The rest of the table is completed by the same method.
• Now we check whether the remaining need of each process can be fulfilled from the current availability. For P1 the remaining need is 7 4 3 but the availability is only 3 3 2, so P1 cannot execute yet and we move to P2.
• P2’s remaining need is 1 2 2 and the availability is 3 3 2, so P2 executes successfully and terminates, releasing its allocated resources (2 0 0). These released resources are added to the current availability.
• The current availability is now 5 3 2. P3’s remaining need (6 0 0) cannot be fulfilled, as the availability is less than the need, so we move towards P4.
• P4’s remaining need (2 1 1) fits within 5 3 2, so P4 executes and releases its allocation; the availability becomes 7 4 3. Now P5 can execute, as its remaining need (5 3 1) is less than the availability. It terminates and releases its allocated resources (0 0 2).
• The available resources become 7 4 5. Now P1 executes (remaining need 7 4 3), terminates, and releases its allocation (0 1 0), so the current availability becomes 7 5 5.
• At last P3 executes, as its remaining need (6 0 0) is less than the availability. It also terminates and releases its resources (3 0 2).
• The final availability is 10 5 7, i.e. all resources are free again. All the processes execute successfully, so there is no deadlock in the system, and the sequence of execution is P2 → P4 → P5 → P1 → P3. If the remaining need of some process could never be fulfilled, the system would be in deadlock.
Example 2
Find whether there is any deadlock in the system. Determine the sequence of execution of the processes and the current availability.

Process   Allocation (E F G)   Max Need (E F G)   Available (E F G)   Remaining Need (E F G)
P0        1 0 1                4 3 1              3 3 0
P1        1 1 2                2 1 4
P2        1 0 3                1 3 3
P3        2 0 0                5 4 1
Example 3
Find whether there is any deadlock in the system. Determine the sequence of execution of the processes and the current availability.

P    Allocation (A B C D)   Max Need (A B C D)   Available (A B C D)   Remaining Need (A B C D)
P0   0 0 1 2                0 0 1 2              1 5 2 0
P1   1 0 0 0                1 7 5 0
P2   1 3 5 4                2 3 5 6
P3   0 6 3 2                0 6 5 2
P4   0 0 1 4                0 6 5 2
Example 4
Find whether there is any deadlock in the system. Determine the sequence of execution of the processes and the current availability.

P    Allocation (A B C D)   Max Need (A B C D)   Available (A B C D)   Remaining Need (A B C D)
P0   0 1 1 0                0 2 1 0              1 3 1 0
P1   1 4 4 1                1 6 5 2
P2   1 3 6 5                2 3 6 6
P3   0 6 3 2                0 6 5 2
P4   0 0 1 4                0 6 5 6
Example 5
Find whether there is any deadlock in the system. Determine the sequence of execution of the processes and the current availability.

Process   Allocation (A B C)   Max Need (A B C)   Available (A B C)   Remaining Need (A B C)
P1        1 0 1                2 1 1              2 1 1
P2        2 1 2                5 4 4
P3        3 0 0                3 1 1
P4        1 0 1                1 1 1
Resource Allocation Graph (Single Instance)
•Another efficient and convenient way to represent the state of the system. State here means whether or not there is a deadlock.
•Every graph has two things: edges and vertices.
•There are two types of vertices: process vertices and resource vertices.
• All the processes running in our system are represented by vertices, generally drawn as circles, and all the resources present in our system are represented by vertices drawn as rectangles.
• Resources are of two types: single instance and multi-instance. We can have a resource of which there is only one in our system, e.g. the CPU or the monitor, while others have multiple instances, like registers or printers.
• Edges represent two things: assign edges and request edges. An assign edge connects a resource to a process; if the edge/arrow points towards the process, it means the resource has been allocated to that process. If the edge/arrow points towards the resource, it means the process is requesting that resource.
•We have two processes, P1 and P2.
•This is a single-instance graph, which means there is only one instance each of R1 and R2. So we can suppose that R1 is a CPU and R2 is a monitor.
•P1 has taken R1 but is requesting R2, because its edge points towards the resource. P2 has taken R2, as that edge points from the resource towards the process.
• But P2 is requesting R1, as its arrow points towards R1.
• Now why do we represent it like this? The main reason is that we want to check whether there is a deadlock.
• Let’s examine it. R1 is taken by P1, R2 is requested by P1, R2 is taken by P2, and R1 is requested by P2. So there is a circular wait, which is one of the necessary conditions for deadlock. P1 says, “I have R1 but I also need R2 to execute,” so P1 will not execute.
• Similarly, P2 holds R2 and needs R1 for its execution, so P2 will also not execute. So there will be a deadlock in the system. There is also another method to determine whether there is a deadlock, which is used for more complex graphs.
• In the second method, we first check how many resources P1 and P2 are holding and write them in the Allocation column: P1 holds R1 and P2 holds R2, so we write 1 under R1 for P1 and under R2 for P2. P1 is requesting R2 and P2 is requesting R1, so we write 1 under R2 for P1 and under R1 for P2 in the Request column.

Process   Allocation (R1 R2)   Request (R1 R2)
P1        1 0                  0 1
P2        0 1                  1 0

• Availability = (0, 0), so neither request can ever be satisfied: there will be a deadlock.
Example 2
Find whether there will be any deadlock in the system. What will be the final availability of the system? Is the graph shown in the figure cyclic or acyclic?

Process   Allocation (R1 R2)   Request (R1 R2)
P1        1 0                  0 0
P2        0 1                  0 0
P3        0 0                  1 1

• The initial availability is (0 0) and the request of P1 for R1 and R2 is also (0 0). Hence P1 executes and terminates successfully, releasing its allocated resources, which are added to the current availability.
• Now the availability is (1 0) and the request of P2 is (0 0). Hence P2 executes and terminates successfully, releasing its allocated resources.
• The availability is now (1 1) and the request of P3 is (1 1). Hence P3 executes and terminates successfully, releasing its allocated resources; the final availability is (1 1).
• So there will be no deadlock in the system.
• Note: In the single-instance case, if the RAG contains a cycle (circular wait), there is always a deadlock; if it is acyclic, there is no deadlock. With multi-instance resources, a cycle means a deadlock may or may not exist.
Example 3: Multi-Instance RAG
Find whether there will be any deadlock in the system. What will be the final availability of the system? Is the graph shown in the figure cyclic or acyclic?

Process   Allocation (R1 R2)   Request (R1 R2)
P1        1 0                  0 1
P2        0 1                  1 0
P3        0 1                  0 0

• With an initial availability of (0 0), P3 (request 0 0) executes and terminates successfully, releasing its allocated resources.
• P1 then executes and terminates successfully, releasing its allocated resources.
• At last P2 executes and terminates successfully, releasing its allocated resources.
• The final availability will be (1 2), and there will be no deadlock in the system.
Example 4: Multi-Instance RAG
Find whether there will be any deadlock in the system. What will be the final availability of the system? Is the graph shown in the figure cyclic or acyclic?
(The figure shows processes P0–P4 and resources R1, R2 and R3; complete the table below from the graph.)

Process   Allocation (R1 R2 R3)   Availability (R1 R2 R3)   Request (R1 R2 R3)
P0
P1
P2
P3
P4