
Unit-3 (9 hours)

Process Synchronization: Synchronization Tools, Background, The Critical-Section Problem,


Peterson’s Solution, Hardware Support for Synchronization, Mutex Locks, Semaphores, Monitors,
Synchronization Examples: Classic Problems of Synchronization.

Deadlocks: System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock
Prevention, Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.

VINUTHA M S, DEPT OF CSE,Dr AIT 1


RACE CONDITION
Let's understand the producer and consumer code. Before starting an explanation of the code, first understand a few terms used in it:

1. "in", used in the producer code, represents the next empty buffer slot.
2. "out", used in the consumer code, represents the first filled buffer slot.
3. count keeps track of the number of elements in the buffer.
4. The update of count is expanded into three lines of code, shown as a block in both the producer and consumer code.

In the producer code:

• Rp is a register that holds the value of m[count]
• Rp is incremented (as an element has been added to the buffer)
• the incremented value of Rp is stored back to m[count]

Similarly, in the consumer code:

• Rc is a register that holds the value of m[count]
• Rc is decremented (as an element has been removed from the buffer)
• the decremented value of Rc is stored back to m[count]



• As we can see from the figure, the buffer has 8 slots in total, of which the first 5 are filled; in = 5 (pointing to the next empty slot) and out = 0 (pointing to the first filled slot).
• Let's start with the producer, which wants to produce an element "F". According to the code it enters the producer() function; while(1) is always true, and itemP = F is to be inserted into the buffer. Before that, while(count == n); evaluates to false (the buffer is not full).
• Buffer[in] = itemP → Buffer[5] = F (F is inserted now).
• in = (in + 1) mod n → (5 + 1) mod 8 → 6, therefore in = 6 (next empty slot).
• After the insertion of F, out = 0 but in = 6.
• Since count = count + 1; is divided into three parts:
• Load Rp, m[count] → copies the count value, which is 5, to register Rp.
• Increment Rp → increments Rp to 6.
• Suppose that just after the increment and before the execution of the third line (store m[count], Rp), a context switch occurs and execution jumps to the consumer code.
Consumer code:
• Now the consumer, which wants to consume the first element "A", enters the consumer() function; while(1) is always true, and while(count == 0); evaluates to false (count is still 5, which is not 0).
• itemC = Buffer[out] → itemC = A (since out is 0).
• out = (out + 1) mod n → (0 + 1) mod 8 → 1, therefore out = 1 (the first filled position).
• A is removed; after the removal, out = 1 and in = 6.
• Since count = count - 1; is divided into three parts:
• Load Rc, m[count] → copies the count value, which is 5, to register Rc.
• Decrement Rc → decrements Rc to 4.
• store m[count], Rc → count = 4.
• Now the current value of count is 4.
• Suppose that after this, a context switch occurs back to the leftover part of the producer code.
• Since the context switch in the producer occurred after the increment and before the execution of the third line (store m[count], Rp), we resume from there; Rp still holds the incremented value 6.
• Hence store m[count], Rp → count = 6.
• Now the current value of count is 6, which is wrong, as the buffer holds only 5 elements. This condition is known as a race condition, and the scenario is the classic producer-consumer problem.



• Situations such as the one just described occur frequently
in operating systems as different parts of the system
manipulate resources.
• We would arrive at this incorrect state because we
allowed both processes to manipulate the variable count
concurrently.
• A situation like this, where several processes access and
manipulate the same data concurrently and the outcome
of the execution depends on the particular order in which
the access takes place, is called a race condition.
• To guard against the race condition above, we need to ensure that only one process at a time can manipulate the variable count. To make such a guarantee, we require that the processes be synchronized in some way.



Hardware support for Synchronization



Mutex Locks
• The hardware-based solutions to the critical-section problem presented above are complicated and generally inaccessible to application programmers.
• Instead, operating-system designers build higher-level software tools to solve the critical-section problem.
• The simplest of these tools is the mutex lock. (In fact, the term mutex is short for mutual exclusion.) We use the mutex lock to protect critical sections and thus prevent race conditions.
• That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section.
• The acquire() function acquires the lock, and the release() function releases the lock, as illustrated in the figure.
• A mutex lock has a boolean variable available whose value indicates whether the lock is available.
• If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable. A process that attempts to acquire an unavailable lock is blocked until the lock is released.



Advantages of Mutex:
• A mutex is a simple lock obtained before entering a critical section and released on leaving it.
• Since only one thread is in its critical section at any given time, there are no race conditions, and data always remains consistent.

Disadvantages of Mutex:
• If a thread obtains a lock and goes to sleep or is preempted, other threads may be unable to move forward. This may lead to starvation.
• A mutex can't be locked or unlocked from a context other than the one that acquired it.
• Only one thread may be in the critical section at a time.
• The usual implementation may lead to busy waiting, which wastes CPU time.



Semaphores
Mutex locks, as we mentioned earlier, are generally considered the simplest of synchronization tools. In this section, we
examine a more robust tool that can behave similarly to a mutex lock but can also provide more sophisticated ways for
processes to synchronize their activities.

Advantages of Semaphores
• They allow processes into the critical section one by one and provide strict mutual exclusion (in the case of binary semaphores).
• No resources are wasted on busy waiting, since processor time is not spent repeatedly checking whether a condition is fulfilled before a process may enter the critical section.
• The implementation/code of semaphores is written in the machine-independent code section of the microkernel, and hence semaphores are machine independent.

Disadvantages of Semaphores
• Semaphores are slightly complicated, and the wait and signal operations must be implemented in such a manner that deadlocks are prevented.
• The usage of semaphores may cause priority inversion, where high-priority processes get access to the critical section only after low-priority processes.

• Advantages of Monitor: Monitors have the advantage of making parallel programming easier and less error-prone than techniques such as semaphores.
• Disadvantages of Monitor: Monitors have to be implemented as part of the programming language. The compiler must generate code for them, which gives it the additional burden of knowing what operating-system facilities are available to control access to critical sections in concurrent processes. Only some languages support monitors.



Classical Problems of Synchronization
1. Bounded Buffer Problem
• The bounded buffer problem is a classic problem in operating system design and concurrent programming. It is also known as the producer-consumer problem. The problem involves two types of processes, producers and consumers, which share a fixed-size buffer.
• The goal of producers is to produce data and store it in the buffer, while the consumers' task is to retrieve data from the buffer and process it. The problem arises when producers and consumers run concurrently and access the buffer simultaneously, which may lead to race conditions and synchronization issues.
• To solve the bounded buffer problem, synchronization mechanisms such as semaphores, mutexes, and condition variables are used. One common solution is to use a counting semaphore to keep track of the number of available slots in the buffer.
• When a producer wants to produce data, it first waits on the counting semaphore. If there are available slots in the buffer, the producer writes to the buffer, and the semaphore is decremented. If the buffer is full, the producer is blocked until a consumer consumes some data and the semaphore is incremented.
• Similarly, when a consumer wants to consume data, it waits on the counting semaphore. If there is data in the buffer, the consumer reads it, and the semaphore is incremented. If the buffer is empty, the consumer is blocked until a producer produces some data and the semaphore is decremented.
• By using these synchronization mechanisms, we can ensure that producers and consumers access the buffer in a mutually exclusive manner and avoid race conditions and synchronization issues.



The Dining Philosophers
• The Dining Philosophers Problem is a classic synchronization problem in computer science and operating systems. The problem was first presented by Edsger Dijkstra in 1965.
• The problem is as follows: five philosophers sit at a round table. Each philosopher has a plate of spaghetti in front of them and a fork on each side of the plate. The philosophers alternate between thinking and eating. In order to eat, a philosopher must have both adjacent forks. However, only one fork can be picked up at a time, and a philosopher cannot start eating until they have both forks.
• The problem arises when all of the philosophers pick up their left fork at the same time and then all try to pick up their right fork at the same time. This results in a deadlock, where no philosopher can start eating.
• The Dining Philosophers Problem can be solved using various synchronization techniques, such as semaphores or mutexes. One solution uses a global resource, such as a waiter, to arbitrate access to the forks. Another solution allows only a certain number of philosophers to pick up their left fork at the same time, thus avoiding the deadlock.
• The Dining Philosophers Problem is a fundamental problem in concurrent programming and illustrates the importance of synchronization and deadlock avoidance in operating systems.



The readers-writers problem
• The readers-writers problem is a classical problem of process synchronization. It relates to a data set, such as a file, that is shared between more than one process at a time. Among these processes, some are readers, which can only read the data set and do not perform any updates, and some are writers, which can both read and write the data set.
• The readers-writers problem is about managing synchronization among the various reader and writer processes so that no problems occur with the data set, i.e. no inconsistency is generated.
• Let's understand with an example: if two or more readers want to access the file at the same point in time, there is no problem. However, in other situations, such as when two writers or one reader and one writer want to access the file at the same point in time, problems may occur. Hence the task is to design the code such that if one reader is reading, no writer is allowed to update at the same time; similarly, if one writer is writing, no reader is allowed to read the file at that time; and if one writer is updating a file, other writers should not be allowed to update it at the same time.
• Multiple readers, however, can access the object at the same time.
• The table below summarizes which combinations of reading and writing are possible.
• The solution for readers and writers can be implemented using binary semaphores.
Deadlocks: System Model
• In a multiprogramming environment, several threads may compete for a finite number of resources. A thread requests resources; if the resources are not available at that time, the thread enters a waiting state.
• Sometimes, a waiting thread can never again change state, because the resources it has requested are held by other waiting threads. This situation is called a deadlock.
• Consider the example of two trains approaching each other on a single track: once they are in front of each other, neither train can move.
• A similar situation occurs in operating systems when two or more processes hold some resources and wait for resources held by the other(s). For example, in the diagram below, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.
• A system consists of a finite number of resources to be distributed among a number of competing threads. The resources may be partitioned into several types (or classes), each consisting of some number of identical instances.
• CPU cycles, files, and I/O devices (such as network interfaces and DVD drives) are examples of resource types. If a thread requests an instance of a resource type, the allocation of any instance of that type should satisfy the request. If it does not, then the instances are not identical, and the resource type classes have not been defined properly.
• A thread must request a resource before using it and must release the resource after using it. A thread may request as many resources as it requires to carry out its designated task. Obviously, the number of resources requested may not exceed the total number of resources available in the system; in other words, a thread cannot request two network interfaces if the system has only one.
• Under the normal mode of operation, a thread may utilize a resource only in the following sequence:
1. Request: The thread requests the resource. If the request cannot be granted immediately (for example, if a mutex lock is currently held by another thread), then the requesting thread must wait until it can acquire the resource.
2. Use: The thread can operate on the resource (for example, if the resource is a mutex lock, the thread can access its critical section).
3. Release: The thread releases the resource.



Deadlock Characterization
A deadlock happens in an operating system when two or more processes need some resource held by another process in order to complete their execution. A deadlock situation can arise if the following four conditions hold simultaneously in a system.

1. Mutual Exclusion
• At least one resource must be held in a non-sharable mode; that is, only one thread at a time can use the resource. If another thread requests that resource, the requesting thread must be delayed until the resource has been released.
• There should be a resource that can only be held by one process at a time. In the diagram below, there is a single instance of Resource 1 and it is held by Process 1 only.
• The mutual exclusion condition prevents two or more processes from accessing the same resource at a time.

2. Hold and Wait
• A thread must be holding at least one resource and waiting to acquire additional resources that are currently being held by other threads.
• A process can hold multiple resources and still request more resources from other processes which are holding them. In the diagram below, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.



3. No Preemption
• Resources cannot be preempted; that is, a resource can be released only voluntarily by the thread holding it, after that thread has completed its task.
• A resource cannot be taken from a process by force; a process can only release a resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when Process 1 relinquishes it voluntarily after its execution is complete.

4. Circular Wait
• A set {T0, T1, ..., Tn} of waiting threads must exist such that T0 is waiting for a resource held by T1, T1 is waiting for a resource held by T2, ..., Tn−1 is waiting for a resource held by Tn, and Tn is waiting for a resource held by T0.
• A process is waiting for the resource held by a second process, which is waiting for the resource held by a third process, and so on, until the last process is waiting for a resource held by the first process; this forms a circular chain. For example, Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.



Resource-Allocation Graph
• Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E.
• The set of vertices V is partitioned into two different types of nodes:
 T = {T1, T2, ..., Tn}, the set consisting of all the active threads in the system, and
 R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
• A directed edge from thread Ti to resource type Rj is denoted by Ti → Rj; it signifies that thread Ti has requested an instance of resource type Rj and is currently waiting for that resource. A directed edge from resource type Rj to thread Ti is denoted by Rj → Ti; it signifies that an instance of resource type Rj has been allocated to thread Ti.
• A directed edge Ti → Rj is called a request edge; a directed edge Rj → Ti is called an assignment edge.
• Pictorially, we represent each thread Ti as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the rectangle.
• Note that a request edge points only to the rectangle Rj, whereas an assignment edge must also designate one of the dots in the rectangle.
• When thread Ti requests an instance of resource type Rj, a request edge is inserted in the resource-allocation graph. When this request can be fulfilled, the request edge is instantaneously transformed into an assignment edge. When the thread no longer needs access to the resource, it releases the resource, and the assignment edge is deleted.



The resource-allocation graph shown in the figure depicts the following situation.
The sets T, R, and E:
• T = {T1, T2, T3}
• R = {R1, R2, R3, R4}
• E = {T1 → R1, T2 → R3, R1 → T2, R2 → T2, R2 → T1, R3 → T3}
Resource instances:
 One instance of resource type R1
 Two instances of resource type R2
 One instance of resource type R3
 Three instances of resource type R4
Thread states:
 Thread T1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
 Thread T2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
 Thread T3 is holding an instance of R3.
Given the definition of a resource-allocation graph, it can be shown that if the graph contains no cycles, then no thread in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.



Suppose that thread T3 requests an instance of resource type R2. Since no resource instance is currently available, we add a request edge T3 → R2 to the graph.
At this point, two minimal cycles exist in the system:
• T1 → R1 → T2 → R3 → T3 → R2 → T1
• T2 → R3 → T3 → R2 → T2
Threads T1, T2, and T3 are deadlocked. Thread T2 is waiting for the resource R3, which is held by thread T3. Thread T3 is waiting for either thread T1 or thread T2 to release resource R2. In addition, thread T1 is waiting for thread T2 to release resource R1.

Now consider the resource-allocation graph below. In this example, we also have a cycle:
T1 → R1 → T3 → R2 → T1
However, there is no deadlock. Observe that thread T4 may release its instance of resource type R2. That resource can then be allocated to T3, breaking the cycle.

In summary, if a resource-allocation graph does not have a cycle, then the system is not in a deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked state.



Methods for Handling Deadlocks
1. Deadlock Ignorance
• Deadlock ignorance is the most widely used approach of all the mechanisms, and is used by many operating systems, mainly for end-user systems. In this approach, the operating system assumes that deadlock never occurs and simply ignores it. This approach is best suited to a single end-user system where the user uses the system only for browsing and other normal tasks.
• There is always a trade-off between correctness and performance. Operating systems like Windows and Linux mainly focus on performance, and the performance of a system decreases if it runs a deadlock-handling mechanism all the time; if a deadlock happens only 1 time out of 100, it is unnecessary to run the mechanism continuously.
• In these systems, the user simply restarts the computer in the case of a deadlock. Windows and Linux mainly use this approach.

2. Deadlock Prevention
• Deadlock happens only when mutual exclusion, hold and wait, no preemption, and circular wait hold simultaneously. If it is possible to violate one of the four conditions at all times, then deadlock can never occur in the system.
• The idea behind the approach is very simple: we have to defeat one of the four conditions, though there can be a big argument about its physical implementation in the system.

3. Deadlock Avoidance
• In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. The process continues as long as the system is in a safe state. Once the system would move to an unsafe state, the OS has to backtrack one step.
• In simple words, the OS reviews each allocation so that the allocation doesn't cause a deadlock in the system.

4. Deadlock Detection and Recovery
• This approach lets processes fall into deadlock and then periodically checks whether a deadlock has occurred in the system. If it has, some recovery method is applied to the system to get rid of the deadlock.
Deadlock Prevention
• Deadlock prevention is a technique used to avoid situations where multiple processes or threads are blocked and unable to proceed because each is waiting for the others to release resources needed to complete its task.
• The main goal of deadlock prevention is to ensure that resources are used efficiently and that situations where a deadlock may occur are avoided. By implementing appropriate prevention techniques, computer systems can improve their efficiency, reduce delays, and avoid the risk of system failure caused by deadlocks.
• Deadlock prevention techniques may include the resource-allocation graph, preemptive scheduling, detection and recovery, or a combination of these methods. We can prevent a deadlock by eliminating any of the four necessary conditions.

Eliminate Mutual Exclusion: It is not possible to defeat mutual exclusion, because some resources, such as a tape drive or printer, are inherently non-sharable. From the resource point of view, mutual exclusion means that a resource can never be used by more than one process simultaneously, which is fair enough, but it is the main reason behind deadlock: if a resource could be used by more than one process at the same time, processes would never have to wait for it. If we could make resources behave in a non-mutually-exclusive manner, deadlock could be prevented.

Eliminate Hold and Wait: Allocate all required resources to a process before the start of its execution; this eliminates the hold-and-wait condition, but it leads to low device utilization. For example, if a process requires a printer only at a later time and we allocate the printer before the start of its execution, the printer remains blocked until the process has completed. The process must then make a new request for resources after releasing its current set. This solution may lead to starvation.


• Eliminate No Preemption:
 Preempt resources from a process when they are required by other, higher-priority processes. Deadlock arises from the fact that a process can't be stopped once it starts.
 However, if we take a resource away from the process that is causing the deadlock, we can prevent the deadlock.
 If a thread is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the thread must wait), then all resources the thread is currently holding are preempted; in other words, these resources are implicitly released.
 The preempted resources are added to the list of resources for which the thread is waiting.

• Eliminate Circular Wait:
 To defeat circular wait, we can assign a priority number to each resource.
 A process can't request a resource with a lower priority number than one it already holds.
 This ensures that no set of processes can form a cycle of requests over resources held by one another. A process must request resources in increasing order of numbering.
 For example, if process P1 has been allocated resource R5, then a later request by P1 for R4 or R3 (lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.



Deadlock Avoidance
• An alternative method for avoiding deadlocks is to require additional information about how resources are to be requested.
• For example, in a system with resources R1 and R2, the system might need to know that thread P will request first R1 and then R2 before releasing both resources, whereas thread Q will request R2 and then R1.
• With this knowledge of the complete sequence of requests and releases for each thread, the system can decide for each request whether or not the thread should wait in order to avoid a possible future deadlock.
• To make this decision for each request, the system must consider the resources currently available, the resources currently allocated to each thread, and the future requests and releases of each thread.
• The simplest and most useful model requires that each thread declare the maximum number of resources of each type that it may need.
• A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the threads.

Safe State
A state is safe if the system can allocate resources to each thread (up to its maximum) in some order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence.

To illustrate, consider a system with twelve resources and three threads: T1, T2, and T3. Thread T1 requires ten resources, thread T2 may need as many as four, and thread T3 may need up to nine. Suppose that, at time t0, thread T1 is holding five resources, thread T2 is holding two, and thread T3 is holding two.

Step 1 (FREE = 12 − 9 currently allocated = 3):

Ti | Max Need | Current | Remaining
T1 | 10       | 5       | 5
T2 | 4        | 2       | 2
T3 | 9        | 2       | 7

Assign 2 to T2 (FREE = 1); T2 reaches its maximum of 4 and completes, releasing 4 (FREE = 1 + 4 = 5).

Step 2 (FREE = 5):

Ti | Max Need | Current
T1 | 10       | 5
T2 | 4        | 0
T3 | 9        | 2

Assign 5 to T1; T1 reaches its maximum of 10 and completes, releasing 10 (FREE = 10).

Step 3 (FREE = 10):

Ti | Max Need | Current
T1 | 10       | 0
T2 | 4        | 0
T3 | 9        | 2

Assign 7 to T3; T3 reaches its maximum of 9 and completes.

The sequence <T2, T1, T3> therefore results in a safe state.
Now suppose instead that, from the same starting state (FREE = 3), we allocate 1 resource to T3:

Ti | Max Need | Current | Remaining
T1 | 10       | 5       | 5
T2 | 4        | 2       | 2
T3 | 9        | 3       | 6

FREE = 2. Allocate 2 to T2; T2 reaches its maximum and completes, releasing 4 (FREE = 4):

Ti | Max Need | Current | Remaining
T1 | 10       | 5       | 5
T2 | 4        | 0       | 0
T3 | 9        | 3       | 6

Now T1 requires 5 more but only 4 are available, and T3 requires 6 more but only 4 are available. Neither T1 nor T3 can execute, so this allocation results in an unsafe state.



Resource-Allocation-Graph Algorithm
• Example of a single-instance RAG:

Process | Allocated Resources | Requested Resources
        | R1   R2   R3        | R1   R2   R3
P1      | 0    1    0         | 1    0    0
P2      | 1    0    0         | 0    0    1
P3      | 0    0    1         | 0    1    0


Example of a multiple-instance RAG:

Process | Allocated Resources | Requested Resources
        | R1   R2   R3        | R1   R2   R3
P1      | 0    1    0         | 1    0    0
P2      | 1    0    0         | 0    0    1
P3      | 0    0    1         | 0    1    0
P4      | 0    0    1         | 0    0    0


Banker’s algorithm
The resource-allocation-graph algorithm is not applicable to a resource-allocation system with multiple instances of each resource type. The deadlock-avoidance algorithm that we describe next is applicable to such a system but is less efficient than the resource-allocation-graph scheme. This algorithm is commonly known as the banker’s algorithm. The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocated its available cash in such a way that it could no longer satisfy the needs of all its customers.

When a new thread enters the system, it must declare the maximum number of instances of each resource type that it may need. This number may not exceed the total number of resources in the system. When a user requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state. If it will, the resources are allocated; otherwise, the thread must wait until some other thread releases enough resources.

We need the following data structures, where n is the number of threads in the system and m is the number of resource types:
 Available: A vector of length m indicates the number of available resources of each type. If Available[j] equals k, then k instances of resource type Rj are available.
 Max: An n × m matrix defines the maximum demand of each thread. If Max[i][j] equals k, then thread Ti may request at most k instances of resource type Rj.
 Allocation: An n × m matrix defines the number of resources of each type currently allocated to each thread. If Allocation[i][j] equals k, then thread Ti is currently allocated k instances of resource type Rj.
 Need: An n × m matrix indicates the remaining resource need of each thread. If Need[i][j] equals k, then thread Ti may need k more instances of resource type Rj to complete its task. Note that Need[i][j] equals Max[i][j] − Allocation[i][j].
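The safety check over these data structures can be sketched in a few lines of Python. The matrix values below are a common textbook instance chosen for illustration, not the figures from these slides:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return (safe?, a safe sequence if one exists)."""
    n, m = len(allocation), len(available)
    work = list(available)          # Work := Available
    finish = [False] * n            # Finish[i] := false for all i
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            # Find a thread i with Finish[i] == false and Need_i <= Work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):  # Ti is assumed to finish and release everything
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence

# Illustrative instance: 5 threads, 3 resource types (A, B, C).
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)   # → True [1, 3, 4, 0, 2]
```

Need is derived as Max − Allocation, exactly as defined above; the returned sequence is one safe ordering of the threads.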

Problems 1 and 2: worked examples of the banker’s algorithm, shown as figures on the original slides.



Deadlock Detection
• In this approach, the OS doesn't apply any mechanism to avoid or prevent deadlocks; the system assumes that a deadlock will eventually occur. The OS therefore periodically checks the system for deadlock, and if it finds one, it recovers the system using recovery techniques.
• The purpose of a deadlock-detection algorithm is to identify and resolve deadlocks in a computer system. It does so by identifying the occurrence of a deadlock, determining the processes and resources involved, taking corrective action to break the deadlock, and restoring normal system operation.
• The algorithm plays a crucial role in ensuring the stability and reliability of the system by preventing deadlocks from causing the system to freeze or crash.

Single Instance of Each Resource Type
If all resources have only a single instance, then we can define a deadlock-detection algorithm that uses a variant of the resource-allocation graph, called a wait-for graph. We obtain this graph from the resource-allocation graph by removing the resource nodes and collapsing the appropriate edges. The figure presents a resource-allocation graph and the corresponding wait-for graph. As before, a deadlock exists in the system if and only if the wait-for graph contains a cycle. In the example, a cycle exists among threads T1, T2, T3, and T4, so a deadlock exists.

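The collapsing step is mechanical: for each pair of edges Ti → Rq and Rq → Tj in the resource-allocation graph, the wait-for graph gets an edge Ti → Tj. A minimal Python sketch; the four-thread cycle below mirrors the T1–T4 example, but the specific resource names are assumptions:

```python
def wait_for_graph(request_edges, assignment_edges):
    """Collapse a single-instance RAG into a wait-for graph.
    request_edges:    thread -> resource it is waiting for
    assignment_edges: resource -> thread currently holding it
    """
    return {t: assignment_edges[r]
            for t, r in request_edges.items() if r in assignment_edges}

def has_cycle(graph):
    """Each node has at most one outgoing edge, so just follow the chain."""
    for start in graph:
        seen, node = set(), start
        while node in graph:
            if node in seen:
                return True
            seen.add(node)
            node = graph[node]
    return False

# Assumed edges producing the T1 -> T2 -> T3 -> T4 -> T1 cycle:
requests    = {"T1": "R1", "T2": "R2", "T3": "R3", "T4": "R4"}
assignments = {"R1": "T2", "R2": "T3", "R3": "T4", "R4": "T1"}
wfg = wait_for_graph(requests, assignments)
print(wfg, has_cycle(wfg))   # → {'T1': 'T2', 'T2': 'T3', 'T3': 'T4', 'T4': 'T1'} True
```

A graph with no closed chain, such as {"A": "B"}, reports no cycle and hence no deadlock.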


• Several Instances of a Resource Type
The wait-for-graph scheme is not applicable to a resource-allocation system with multiple instances of each resource type. We turn now to a deadlock-detection algorithm that is applicable to such a system. The algorithm employs several time-varying data structures that are similar to those used in the banker’s algorithm.

We claim that the system is not in a deadlocked state. Indeed, if we execute our algorithm, we will find that the sequence results in Finish[i] == true for all i.

Suppose now that thread T2 makes one additional request for an instance of type C, and the Request matrix is modified accordingly. We claim that the system is now deadlocked: although we can reclaim the resources held by thread T0, the number of available resources is not sufficient to fulfill the requests of the other threads. Thus, a deadlock exists, consisting of threads T1, T2, T3, and T4.
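This detection algorithm is the safety algorithm with Need replaced by Request, and with threads holding nothing marked finished immediately. A sketch; the matrices below are the widely used textbook instance for this scenario, which may differ from the slide's figure:

```python
def detect_deadlock(available, allocation, request):
    """Return the indices of deadlocked threads ([] if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A thread holding no resources cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # optimistically let Ti finish
                    work[j] += allocation[i][j]
                finish[i] = True
                changed = True
    return [i for i in range(n) if not finish[i]]

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(detect_deadlock([0, 0, 0], allocation, request))   # → [] (no deadlock)

request[2] = [0, 0, 1]      # T2 requests one more instance of type C
print(detect_deadlock([0, 0, 0], allocation, request))   # → [1, 2, 3, 4]
```

After T2's extra request, exactly the threads T1 through T4 can never be marked finished, matching the deadlock claimed in the text.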



Recovery from Deadlock

Process Termination
To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods:

1. Abort all the deadlocked processes: Aborting all the processes will certainly break the deadlock, but at great expense. The deadlocked processes may have computed for a long time, and the results of those partial computations must be discarded and probably recomputed later.

2. Abort one process at a time until the deadlock is eliminated: Abort one deadlocked process at a time, until the deadlock cycle is eliminated from the system. This method may incur considerable overhead, because after aborting each process, we have to run a deadlock-detection algorithm to check whether any processes are still deadlocked.

Advantages of Process Termination
• It is a simple method for breaking a deadlock.
• It ensures that the deadlock will be resolved quickly, as all processes involved in the deadlock are terminated simultaneously.
• It frees up resources that were being used by the deadlocked processes, making those resources available for other processes.

Disadvantages of Process Termination
• It can result in the loss of data and other resources that were being used by the terminated processes.
• It may cause further problems in the system if the terminated processes were critical to the system’s operation.
• It may result in a waste of resources, as the terminated processes may have already completed a significant amount of work before being terminated.

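The abort-one-at-a-time method is a detect-and-abort loop. A sketch over a thread → thread wait-for map; the cycle-finding stub and the pick-the-first-thread victim policy are simplifying assumptions:

```python
def find_cycle(wait_for):
    """Stub detector: return one cycle in a thread -> thread wait-for map."""
    for start in list(wait_for):
        seen, node = [], start
        while node in wait_for:
            if node in seen:
                return seen[seen.index(node):]   # the cycle itself
            seen.append(node)
            node = wait_for[node]
    return []

def recover(wait_for):
    """Abort one process at a time, re-running detection after each abort."""
    aborted = []
    while (cycle := find_cycle(wait_for)):
        victim = cycle[0]                   # assumed victim-selection policy
        aborted.append(victim)
        wait_for.pop(victim, None)          # the victim no longer waits
        for t, holder in list(wait_for.items()):
            if holder == victim:            # its resources are freed, so waiters proceed
                wait_for.pop(t)
    return aborted

waits = {"T1": "T2", "T2": "T3", "T3": "T1", "T4": "T1"}
print(recover(waits))   # → ['T1']
```

Note that detection runs again after every abort, exactly the overhead the text describes.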


Resource Preemption
To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. This method raises three issues:

Selecting a victim: We must determine which resources and which processes are to be preempted, in an order that minimizes cost.

Rollback: We must determine what should be done with the process from which resources are preempted. One simple idea is total rollback: abort the process and restart it.

Starvation: It may happen that the same process is always picked as a victim. As a result, that process will never complete its designated task. This situation is called starvation and must be avoided. One solution is that a process may be picked as a victim only a finite number of times.

Advantages of Resource Preemption
1. It can help break a deadlock without terminating any processes, thus preserving data and resources.
2. It is more efficient than process termination, as it targets only the resources that are causing the deadlock.
3. It can potentially avoid the need to restart the system.

Disadvantages of Resource Preemption
1. It may lead to increased overhead, due to the need to determine which resources and processes should be preempted.
2. It may cause further problems if the preempted resources were critical to the system’s operation.
3. It may delay the completion of processes whose resources are frequently preempted.





Deadlocks
In a multiprogramming environment, several threads may compete for a finite number of resources. A thread requests resources; if the resources are not available at that time, the thread enters a waiting state. Sometimes, a waiting thread can never again change state, because the resources it has requested are held by other waiting threads. This situation is called a deadlock.

Consider the example of two trains coming toward each other on a single track: neither train can move once they are in front of each other. A similar situation occurs in operating systems when two or more processes hold some resources and wait for resources held by the other(s). For example, in the diagram below, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.

System Model
• A system consists of a finite number of resources to be distributed among a number of competing threads. The resources may be partitioned into several types (or classes), each consisting of some number of identical instances.
• CPU cycles, files, and I/O devices (such as network interfaces and DVD drives) are examples of resource types. If a thread requests an instance of a resource type, the allocation of any instance of the type should satisfy the request. If it does not, then the instances are not identical, and the resource-type classes have not been defined properly.
• A thread must request a resource before using it and must release the resource after using it. A thread may request as many resources as it requires to carry out its designated task. Obviously, the number of resources requested may not exceed the total number of resources available in the system; in other words, a thread cannot request two network interfaces if the system has only one.
• Under the normal mode of operation, a thread may utilize a resource in only the following sequence:
1. Request: The thread requests the resource. If the request cannot be granted immediately (for example, if a mutex lock is currently held by another thread), then the requesting thread must wait until it can acquire the resource.
2. Use: The thread can operate on the resource (for example, if the resource is a mutex lock, the thread can access its critical section).
3. Release: The thread releases the resource.

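The request–use–release sequence maps directly onto a mutex lock. A minimal sketch with Python's threading.Lock standing in for the resource:

```python
import threading

resource = threading.Lock()   # the resource: a single mutex lock
log = []

def worker(name):
    resource.acquire()        # 1. Request: wait until the resource is free
    try:
        log.append(name)      # 2. Use: operate inside the critical section
    finally:
        resource.release()    # 3. Release: hand the resource back

threads = [threading.Thread(target=worker, args=(f"T{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))            # → ['T0', 'T1', 'T2']
```

The try/finally guarantees the release step runs even if the use step fails, so the resource is never held forever by accident.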


Deadlock Characterization
A deadlock happens in an operating system when two or more processes need some resource held by another process in order to complete their execution. A deadlock situation can arise if the following four conditions hold simultaneously in a system.

1. Mutual Exclusion
• At least one resource must be held in a non-sharable mode; that is, only one thread at a time can use the resource. If another thread requests that resource, the requesting thread must be delayed until the resource has been released.
• There should be a resource that can only be held by one process at a time. In the diagram below, there is a single instance of Resource 1 and it is held by Process 1 only.
• The mutual-exclusion condition prevents two or more processes from accessing the same resource at a time.

2. Hold and Wait
• A thread must be holding at least one resource and waiting to acquire additional resources that are currently being held by other threads.
• A process can hold multiple resources and still request more resources from other processes which are holding them. In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.



3. No Preemption
• Resources cannot be preempted; that is, a resource can be released only voluntarily by the thread holding it, after that thread has completed its task.
• A resource cannot be preempted from a process by force. A process can only release a resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1; it will only be released when Process 1 relinquishes it voluntarily after its execution is complete.

4. Circular Wait
• A set {T0, T1, ..., Tn} of waiting threads must exist such that T0 is waiting for a resource held by T1, T1 is waiting for a resource held by T2, ..., Tn−1 is waiting for a resource held by Tn, and Tn is waiting for a resource held by T0.
• A process is waiting for the resource held by a second process, which is waiting for the resource held by a third process, and so on, until the last process is waiting for a resource held by the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1; similarly, Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.
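The two-process loop just described can be reproduced with two locks taken in opposite orders. In this sketch a barrier forces both threads to hold their first lock before requesting the second, and timed acquires let the program terminate instead of hanging; the timeout value is an arbitrary assumption:

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()
barrier = threading.Barrier(2)   # both threads hold their first lock before continuing
results = {}

def p1():
    with r2:                                         # Process 1 holds Resource 2
        barrier.wait()
        results["p1"] = r1.acquire(timeout=0.2)      # ...and requests Resource 1
        barrier.wait()           # neither releases until both attempts finish

def p2():
    with r1:                                         # Process 2 holds Resource 1
        barrier.wait()
        results["p2"] = r2.acquire(timeout=0.2)      # ...and requests Resource 2
        barrier.wait()

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # both acquires time out: each process is waiting on the other
```

Without the timeouts, both acquires would block forever: exactly the circular wait that characterizes deadlock.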



Resource-Allocation Graph
• Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E.
• The set of vertices V is partitioned into two different types of nodes:
 T = {T1, T2, ..., Tn}, the set consisting of all the active threads in the system, and
 R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
• A directed edge from thread Ti to resource type Rj is denoted by Ti → Rj; it signifies that thread Ti has requested an instance of resource type Rj and is currently waiting for that resource. A directed edge from resource type Rj to thread Ti is denoted by Rj → Ti; it signifies that an instance of resource type Rj has been allocated to thread Ti.
• A directed edge Ti → Rj is called a request edge; a directed edge Rj → Ti is called an assignment edge.
• Pictorially, we represent each thread Ti as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the rectangle.
• Note that a request edge points only to the rectangle Rj, whereas an assignment edge must also designate one of the dots in the rectangle.
• When thread Ti requests an instance of resource type Rj, a request edge is inserted in the resource-allocation graph. When this request can be fulfilled, the request edge is instantaneously transformed into an assignment edge. When the thread no longer needs access to the resource, it releases the resource, and the assignment edge is deleted.



The resource-allocation graph shown in the figure depicts the following situation.
The sets T, R, and E:
• T = {T1, T2, T3}
• R = {R1, R2, R3, R4}
• E = {T1 → R1, T2 → R3, R1 → T2, R2 → T2, R2 → T1, R3 → T3}
Resource instances:
 One instance of resource type R1
 Two instances of resource type R2
 One instance of resource type R3
 Three instances of resource type R4
Thread states:
 Thread T1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
 Thread T2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
 Thread T3 is holding an instance of R3.

Given the definition of a resource-allocation graph, it can be shown that, if the graph contains no cycles, then no thread in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.



Suppose that thread T3 requests an instance of resource type R2. Since no resource instance is currently available, we add a request edge T3 → R2 to the graph.

At this point, two minimal cycles exist in the system:
• T1 → R1 → T2 → R3 → T3 → R2 → T1
• T2 → R3 → T3 → R2 → T2
Threads T1, T2, and T3 are deadlocked. Thread T2 is waiting for the resource R3, which is held by thread T3. Thread T3 is waiting for either thread T1 or thread T2 to release resource R2. In addition, thread T1 is waiting for thread T2 to release resource R1.

Now consider the resource-allocation graph below. In this example, we also have a cycle:
T1 → R1 → T3 → R2 → T1
However, there is no deadlock. Observe that thread T4 may release its instance of resource type R2. That resource can then be allocated to T3, breaking the cycle.

In summary, if a resource-allocation graph does not have a cycle, then the system is not in a deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked state.
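The cycle test on a resource-allocation graph is ordinary directed-graph cycle detection. A sketch over the edge set of the first (deadlocked) example:

```python
def rag_has_cycle(edges):
    """DFS cycle detection on a directed graph given as (src, dst) pairs."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(node):
        color[node] = GRAY                  # on the current DFS path
        for nxt in graph.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:                   # back edge: a cycle exists
                return True
            if c == WHITE and dfs(nxt):
                return True
        color[node] = BLACK                 # fully explored, no cycle through here
        return False
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in graph)

# Edge set of the first example after the request edge T3 -> R2 is added:
E = [("T1", "R1"), ("T2", "R3"), ("R1", "T2"), ("R2", "T2"),
     ("R2", "T1"), ("R3", "T3"), ("T3", "R2")]
print(rag_has_cycle(E))        # → True (e.g. T1 → R1 → T2 → R3 → T3 → R2 → T1)
print(rag_has_cycle(E[:-1]))   # → False: without T3 → R2 there is no cycle
```

Remember that for multi-instance resource types a cycle only indicates a possible deadlock, as the T4 example shows; the detection algorithm for that case is given earlier.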



Methods for Handling Deadlocks

1. Deadlock Ignorance
• Deadlock ignorance is the most widely used approach among all the mechanisms and is used by many operating systems, mainly for end-user systems. In this approach, the operating system assumes that deadlock never occurs; it simply ignores deadlock. This approach is best suited to a single end-user system where the user uses the system only for browsing and other everyday tasks.
• There is always a trade-off between correctness and performance. Operating systems like Windows and Linux mainly focus on performance, and the performance of the system decreases if it runs a deadlock-handling mechanism all the time. If deadlock happens only 1 time out of 100, it is unnecessary to pay that cost continuously.
• In these systems, the user simply restarts the computer in the case of a deadlock. Windows and Linux mainly use this approach.

2. Deadlock Prevention
• Deadlock happens only when mutual exclusion, hold and wait, no preemption, and circular wait hold simultaneously. If it is possible to violate one of the four conditions at all times, then deadlock can never occur in the system.
• The idea behind the approach is simple: we have to defeat one of the four conditions, although there can be a big argument about its physical implementation in the system.

3. Deadlock Avoidance
• In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. The process continues as long as the system stays in a safe state; once the system would move to an unsafe state, the OS has to backtrack one step.
• In simple words, the OS reviews each allocation so that the allocation doesn't cause a deadlock in the system.

4. Deadlock Detection and Recovery
• This approach lets processes fall into deadlock and then periodically checks whether a deadlock has occurred in the system. If it has, the OS applies recovery methods to the system to get rid of the deadlock.
Deadlock Prevention
• Deadlock prevention is a technique used to avoid situations where multiple processes or threads are blocked and unable to proceed because they are waiting for each other to release resources that they need to complete their tasks.
• The main goal of deadlock prevention is to ensure that resources are used efficiently and that situations where a deadlock may occur are avoided. By implementing appropriate prevention techniques, computer systems can improve their efficiency, reduce delays, and avoid the risk of system failure caused by deadlocks.
• Deadlock-prevention techniques may include resource-allocation graphs, preemptive scheduling, detection and recovery, or a combination of these methods. We can prevent a deadlock by eliminating any of the four conditions above.

Eliminate Mutual Exclusion: It is not possible to defeat mutual exclusion, because some resources, such as the tape drive and printer, are inherently non-sharable. Mutual exclusion, from the resource point of view, means that a resource can never be used by more than one process simultaneously, which is fair enough, but it is the main reason behind deadlock: if a resource could be used by more than one process at the same time, no process would ever wait for a resource. So if we could make resources stop behaving in a mutually exclusive manner, deadlock could be prevented.

Eliminate Hold and Wait: Allocate all required resources to the process before the start of its execution; this eliminates the hold-and-wait condition, but it leads to low device utilization. For example, if a process requires a printer only at a later time and we allocate the printer before the start of its execution, the printer will remain blocked until the process has completed its execution. Alternatively, a process may make a new request for resources only after releasing its current set of resources. This solution may lead to starvation.
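All-at-once allocation can be sketched as an atomic grant: a process receives every resource it declared up front or nothing at all. The resource names and counts below are illustrative, and a real allocator would guard the check-then-grant step with a lock:

```python
class Allocator:
    """Grants a process either all of its declared resources or none."""
    def __init__(self, available):
        self.available = dict(available)          # resource -> free instances

    def acquire_all(self, demand):
        # Check the whole demand first; a partial grant would reintroduce
        # hold and wait.
        if all(self.available.get(r, 0) >= k for r, k in demand.items()):
            for r, k in demand.items():
                self.available[r] -= k
            return True
        return False                              # wait while holding nothing

alloc = Allocator({"printer": 1, "disk": 2})      # illustrative resources
granted = alloc.acquire_all({"printer": 1, "disk": 1})   # → True
denied  = alloc.acquire_all({"disk": 2})                 # → False: only 1 disk left
```

A denied process holds nothing while it waits, so it can never be part of a hold-and-wait chain; the cost is that granted resources sit idle until the process actually needs them.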



• Eliminate No Preemption:
 Preempt resources from a process when they are required by other, higher-priority processes. Deadlock arises from the fact that a process cannot be forced to give up a resource once it holds it; if we can take the resource away from the process that is causing the deadlock, we can prevent the deadlock.
 If a thread is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the thread must wait), then all resources the thread is currently holding are preempted. In other words, these resources are implicitly released.
 The preempted resources are added to the list of resources for which the thread is waiting.
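The implicit-release rule can be sketched with non-blocking acquires: if any requested lock is unavailable, the thread gives up everything it holds and reports failure, to retry later. threading.Lock stands in for a preemptible resource here:

```python
import threading

def acquire_or_release_all(held, wanted):
    """Try to add the `wanted` locks to `held`; if any is unavailable,
    implicitly release everything this thread holds and report failure."""
    for lock in wanted:
        if not lock.acquire(blocking=False):   # cannot get it immediately
            for h in held:                     # preempt our own resources
                h.release()
            held.clear()
            return False
        held.append(lock)
    return True

a, b = threading.Lock(), threading.Lock()
held = []
ok1 = acquire_or_release_all(held, [a])   # a is free, so it is granted
b.acquire()                               # simulate another thread holding b
ok2 = acquire_or_release_all(held, [b])   # b unavailable, so a is released too
print(ok1, ok2, a.locked())               # → True False False
```

Because a waiting thread never keeps resources while it waits, the no-preemption condition cannot hold.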

• Eliminate Circular Wait:
 To violate circular wait, we can assign a priority number to each resource.
 A process cannot request a resource with a lower priority number than one it already holds.
 This ensures that no process can close a cycle of requests over resources held by other processes; processes must request resources in increasing order of numbering.
 For example, if process P1 has been allocated resource R5, a later request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.
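Resource ordering is commonly implemented by always acquiring locks in ascending order of their assigned number, regardless of the order in which the code happens to need them. A sketch, with the resource names R1–R5 assumed:

```python
import threading

# Assign each resource a fixed number: R1 < R2 < ... < R5.
resources = {name: threading.Lock() for name in ["R1", "R2", "R3", "R4", "R5"]}
order = {name: i for i, name in enumerate(sorted(resources))}

def acquire_in_order(names):
    """Acquire the named resources in ascending numbering only.
    If every thread follows this rule, a cycle of waits cannot form."""
    ordered = sorted(names, key=lambda n: order[n])
    for n in ordered:
        resources[n].acquire()
    return ordered

# A thread that needs R5 and R2 must still take R2 first:
seq = acquire_in_order({"R5", "R2"})
print(seq)   # → ['R2', 'R5']
```

In any would-be cycle some thread would have to wait for a resource numbered lower than one it already holds, which the ordering rule forbids, so circular wait is impossible.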

