Chapter 5: Deadlocks
• System Model
• Deadlock Characterization
• Methods for Handling Deadlocks
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection
• Recovery from Deadlock
• Combined Approach to Deadlock Handling
Operating System Concepts 1
The Deadlock Problem
• A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set.
• Example
– System has 2 tape drives.
– P1 and P2 each hold one tape drive and each needs another one.
• Example
– semaphores A and B, initialized to 1
P0 P1
wait(A); wait(B);
wait(B); wait(A);
Operating System Concepts 2
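A minimal sketch of the two-semaphore example above, using POSIX mutexes in C to stand in for the binary semaphores A and B; the thread names and the sleep used to force the unlucky interleaving are illustrative assumptions, not part of the slides.
/* Hypothetical illustration of the P0/P1 deadlock above.
 * Compile with: gcc deadlock.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *p0(void *arg) {
    pthread_mutex_lock(&A);      /* wait(A) */
    sleep(1);                    /* widen the window so P1 grabs B first */
    pthread_mutex_lock(&B);      /* wait(B): blocks forever */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *p1(void *arg) {
    pthread_mutex_lock(&B);      /* wait(B) */
    sleep(1);
    pthread_mutex_lock(&A);      /* wait(A): blocks forever */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);      /* never returns: circular wait */
    pthread_join(t1, NULL);
    printf("not reached\n");
    return 0;
}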
Bridge Crossing Example
• Traffic only in one direction.
• Each section of a bridge can be viewed as a resource.
• If a deadlock occurs, it can be resolved if one car backs up (preempt
resources and rollback).
• Several cars may have to be backed up if a deadlock occurs.
• Starvation is possible.
Operating System Concepts 3
System Model
• Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
• Each resource type Ri has Wi instances.
• Each process utilizes a resource as follows:
– request
– use
– release
Operating System Concepts 4
Deadlock Characterization
• Mutual exclusion: only one process at a time can use a resource.
• Hold and wait: a process holding at least one resource is waiting to
acquire additional resources held by other processes.
• No preemption: a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes
such that P0 is waiting for a resource that is held by
P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for
a resource that is held by Pn, and Pn is waiting for a resource that is
held by P0.
Operating System Concepts
Deadlock can arise if four conditions hold simultaneously.
5
Resource-Allocation Graph
• V is partitioned into two types:
– P = {P1, P2, …, Pn}, the set consisting of all the processes in the
system.
– R = {R1, R2, …, Rm}, the set consisting of all resource types in the
system.
• request edge – directed edge Pi → Rj
• assignment edge – directed edge Rj → Pi
Operating System Concepts
A set of vertices V and a set of edges E.
6
Resource-Allocation Graph (Cont.)
• Process
• Resource Type with 4 instances
• Pi requests instance of Rj
• Pi is holding an instance of Rj
Operating System Concepts
7
Example of a Resource Allocation
Graph
Operating System Concepts 8
Resource Allocation Graph With A
Deadlock
Operating System Concepts 9
Resource Allocation Graph With A Cycle But No Deadlock
Operating System Concepts 10
Basic Facts
• If graph contains no cycles ⇒ no deadlock.
• If graph contains a cycle ⇒
– if only one instance per resource type, then deadlock.
– if several instances per resource type, possibility of deadlock.
Operating System Concepts 11
Methods for Handling Deadlocks
• Ensure that the system will never enter a deadlock state.
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem and pretend that deadlocks never occur in the system; used by
most operating systems, including UNIX.
Operating System Concepts 12
Deadlock Prevention
• Mutual Exclusion – not required for sharable resources; must hold for
nonsharable resources.
• Hold and Wait – must guarantee that whenever a process requests a
resource, it does not hold any other resources.
– Require process to request and be allocated all its resources
before it begins execution, or allow process to request resources
only when the process has none.
– Low resource utilization; starvation possible.
Operating System Concepts
Restrain the ways requests can be made.
13
Deadlock Prevention (Cont.)
• No Preemption –
– If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held
are released.
– Preempted resources are added to the list of resources for which the process
is waiting.
– Process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting.
• Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration.
Operating System Concepts 14
Deadlock Avoidance
• Simplest and most useful model requires that each process declare
the maximum number of resources of each type that it may need.
• The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a circular-
wait condition.
• Resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.
Operating System Concepts
Requires that the system has some additional a priori information
available.
15
Safe State
• When a process requests an available resource, system must decide if immediate
allocation leaves the system in a safe state.
• System is in safe state if there exists a safe sequence of all processes.
• Sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request
can be satisfied by the currently available resources + resources held by all the Pj, with
j < i.
– If Pi's resource needs are not immediately available, then Pi can wait until all Pj
have finished.
– When Pj is finished, Pi can obtain needed resources, execute, return allocated
resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Operating System Concepts 16
Basic Facts
• If a system is in safe state ⇒ no deadlocks.
• If a system is in unsafe state ⇒ possibility of deadlock.
• Avoidance ⇒ ensure that a system will never enter an unsafe state.
Operating System Concepts 17
Safe, unsafe, and deadlock state spaces
Operating System Concepts 18
Resource-Allocation Graph Algorithm
• Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented
by a dashed line.
• Claim edge converts to request edge when a process requests a resource.
• When a resource is released by a process, assignment edge reconverts to a claim
edge.
• Resources must be claimed a priori in the system.
Operating System Concepts 19
Resource-Allocation Graph For Deadlock Avoidance
Operating System Concepts 20
Unsafe State In A Resource-Allocation
Graph
Operating System Concepts 21
Banker’s Algorithm
• Multiple instances.
• Each process must a priori claim maximum use.
• When a process requests a resource it may have to wait.
• When a process gets all its resources it must return them in a finite amount of
time.
Operating System Concepts 22
Data Structures for the Banker’s
Algorithm
• Available: Vector of length m. If available [j] = k, there are k instances
of resource type Rj available.
• Max: n x m matrix. If Max [i,j] = k, then process Pi may request at
most k instances of resource type Rj.
• Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently
allocated k instances of Rj.
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more
instances of Rj to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Operating System Concepts
Let n = number of processes, and m = number of resource types.
23
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work := Available
Finish[i] := false for i = 1, 2, …, n.
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi
Finish[i] := true
go to step 2.
4. If Finish [i] = true for all i, then the system is in a safe state.
Operating System Concepts 24
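A compact C sketch of the safety algorithm above; the sizes N and M and all array names are assumptions chosen for illustration, not part of the slides.
/* Safety algorithm sketch (assumed sizes: N processes, M resource types). */
#include <stdbool.h>

#define N 5
#define M 3

bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = available[j];      /* Work := Available */

    for (int done = 0; done < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;                                 /* Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;                             /* P_i can finish */
                progress = true;
                done++;
            }
        }
        if (!progress) break;         /* no candidate found: go to step 4 */
    }
    for (int i = 0; i < N; i++)
        if (!finish[i]) return false; /* some P_i cannot finish: unsafe */
    return true;                      /* all Finish[i] = true: safe state */
}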
Resource-Request Algorithm for
Process Pi
Requesti = request vector for process Pi. If Requesti [j] = k then process Pi wants k
instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources
are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as
follows:
Available := Available – Requesti;
Allocationi := Allocationi + Requesti;
Needi := Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi.
• If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.
Operating System Concepts 25
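A matching sketch of the resource-request algorithm for process Pi, continuing the safety-algorithm sketch above; it assumes the same N and M and the hypothetical is_safe() defined there.
/* Continues the safety-algorithm sketch above (same assumed N, M, is_safe). */
#include <stdbool.h>

bool is_safe(int available[M], int allocation[N][M], int need[N][M]);

bool request_resources(int i, const int request[M], int available[M],
                       int allocation[N][M], int need[N][M]) {
    /* Step 1: Request_i must not exceed Need_i (the maximum claim). */
    for (int j = 0; j < M; j++)
        if (request[j] > need[i][j]) return false;
    /* Step 2: Request_i must not exceed Available, otherwise P_i waits. */
    for (int j = 0; j < M; j++)
        if (request[j] > available[j]) return false;
    /* Step 3: pretend to allocate. */
    for (int j = 0; j < M; j++) {
        available[j]     -= request[j];
        allocation[i][j] += request[j];
        need[i][j]       -= request[j];
    }
    if (is_safe(available, allocation, need))
        return true;                 /* safe: the allocation stands */
    /* Unsafe: restore the old state; P_i must wait. */
    for (int j = 0; j < M; j++) {
        available[j]     += request[j];
        allocation[i][j] -= request[j];
        need[i][j]       += request[j];
    }
    return false;
}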
Example of Banker’s Algorithm
• 5 processes P0 through P4; 3 resource types A (10 instances),
B (5 instances), and C (7 instances).
• Snapshot at time T0:
Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
Operating System Concepts 26
Example (Cont.)
• The content of the matrix Need is defined to be Max – Allocation.
Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
• The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies
safety criteria.
Operating System Concepts 27
Example (Cont.): P1 request (1,0,2)
• Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true.
Allocation Need Available
A B C A B C A B C
P0 0 1 0 7 4 3 2 3 0
P1 3 0 2 0 2 0
P2 3 0 2 6 0 0
P3 2 1 1 0 1 1
P4 0 0 2 4 3 1
• Executing safety algorithm shows that sequence <P1, P3, P4, P0, P2>
satisfies safety requirement.
• Can request for (3,3,0) by P4 be granted?
• Can request for (0,2,0) by P0 be granted?
Operating System Concepts 28
Deadlock Detection
• Allow system to enter deadlock state
• Detection algorithm
• Recovery scheme
Operating System Concepts 29
Single Instance of Each Resource Type
• Maintain wait-for graph
– Nodes are processes.
– Pi  Pj if Pi is waiting for Pj.
• Periodically invoke an algorithm that searches for a cycle in the graph.
• An algorithm to detect a cycle in a graph requires an order of n² operations, where
n is the number of vertices in the graph.
Operating System Concepts 30
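A small C sketch of the cycle search described above, using depth-first search over an adjacency matrix; the matrix representation and the NPROC limit are illustrative assumptions.
/* Cycle detection in a wait-for graph: wait_for[i][j] != 0 means P_i waits
 * for P_j. Three-color DFS; O(n^2) with an adjacency matrix. */
#include <stdbool.h>

#define NPROC 8
enum { WHITE, GRAY, BLACK };

static bool dfs(int u, int n, const int wait_for[NPROC][NPROC], int color[]) {
    color[u] = GRAY;                        /* on the current DFS path */
    for (int v = 0; v < n; v++) {
        if (!wait_for[u][v]) continue;
        if (color[v] == GRAY) return true;  /* back edge: cycle => deadlock */
        if (color[v] == WHITE && dfs(v, n, wait_for, color)) return true;
    }
    color[u] = BLACK;                       /* fully explored */
    return false;
}

bool has_deadlock(int n, const int wait_for[NPROC][NPROC]) {
    int color[NPROC] = { WHITE };
    for (int i = 0; i < n; i++)
        if (color[i] == WHITE && dfs(i, n, wait_for, color))
            return true;
    return false;
}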
Resource-Allocation Graph And Wait-for Graph
Operating System Concepts
Resource-Allocation Graph Corresponding wait-for graph
31
Several Instances of a Resource Type
• Available: A vector of length m indicates the number of available
resources of each type.
• Allocation: An n x m matrix defines the number of resources of each
type currently allocated to each process.
• Request: An n x m matrix indicates the current request of each
process. If Request[i,j] = k, then process Pi is requesting k more
instances of resource type Rj.
Operating System Concepts 32
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work := Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
Finish[i] := false; otherwise, Finish[i] := true.
2. Find an index i such that both:
(a) Finish[i] = false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
Operating System Concepts 33
Detection Algorithm (Cont.)
3. Work := Work + Allocationi
Finish[i] := true
go to step 2.
4. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlock
state. Moreover, if Finish[i] = false, then Pi is deadlocked.
Operating System Concepts
Algorithm requires an order of m × n² operations to detect whether
the system is in a deadlocked state.
34
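For comparison, a C sketch of the detection algorithm above; it differs from the earlier safety sketch mainly in how Finish is initialized and in testing Requesti ≤ Work rather than Needi ≤ Work. The sizes and names are again assumptions for illustration.
/* Deadlock-detection sketch. */
#include <stdbool.h>

#define N 5   /* processes, assumed */
#define M 3   /* resource types, assumed */

int detect_deadlock(int available[M], int allocation[N][M],
                    int request[N][M], bool deadlocked[N]) {
    int work[M];
    bool finish[N];
    for (int j = 0; j < M; j++) work[j] = available[j];   /* Work := Available */
    for (int i = 0; i < N; i++) {
        bool holds_nothing = true;
        for (int j = 0; j < M; j++)
            if (allocation[i][j] != 0) { holds_nothing = false; break; }
        finish[i] = holds_nothing;      /* Finish[i] := (Allocation_i == 0) */
    }
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;           /* Request_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    int count = 0;
    for (int i = 0; i < N; i++) {
        deadlocked[i] = !finish[i];     /* Finish[i] = false => P_i deadlocked */
        if (deadlocked[i]) count++;
    }
    return count;                       /* number of deadlocked processes */
}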
Example of Detection Algorithm
• Five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances).
• Snapshot at time T0:
Allocation Request Available
A B C A B C A B C
P0 0 1 0 0 0 0 0 0 0
P1 2 0 0 2 0 2
P2 3 0 3 0 0 0
P3 2 1 1 1 0 0
P4 0 0 2 0 0 2
• Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
Operating System Concepts 35
Example (Cont.)
• P2 requests an additional instance of type C.
Request
A B C
P0 0 0 0
P1 2 0 1
P2 0 0 1
P3 1 0 0
P4 0 0 2
• State of system?
– Can reclaim resources held by process P0, but insufficient resources to fulfill
other processes' requests.
– Deadlock exists, consisting of processes P1, P2, P3, and P4.
Operating System Concepts 36
Detection-Algorithm Usage
• When, and how often, to invoke depends on:
– How often a deadlock is likely to occur?
– How many processes will need to be rolled back?
• one for each disjoint cycle
• If detection algorithm is invoked arbitrarily, there may be many cycles in the
resource graph and so we would not be able to tell which of the many deadlocked
processes “caused” the deadlock.
Operating System Concepts 37
Recovery from Deadlock: Process Termination
• Abort all deadlocked processes.
• Abort one process at a time until the deadlock cycle is eliminated.
• In which order should we choose to abort?
– Priority of the process.
– How long process has computed, and how much longer to completion.
– Resources the process has used.
– Resources process needs to complete.
– How many processes will need to be terminated.
– Is process interactive or batch?
Operating System Concepts 38
Recovery from Deadlock: Resource Preemption
• Selecting a victim – minimize cost.
• Rollback – return to some safe state, restart the process from that state.
• Starvation – same process may always be picked as victim, include number of
rollback in cost factor.
Operating System Concepts 39
Combined Approach to Deadlock
Handling
• Combine the three basic approaches
– prevention
– avoidance
– detection
allowing the use of the optimal approach for each class of resources in the system.
• Partition resources into hierarchically ordered classes.
• Use most appropriate technique for handling deadlocks within each class.
Operating System Concepts 40
What is a Thread?
• A thread is a path of execution within a
process. A process can contain multiple
threads.
• Why Multithreading?
A thread is also known as a lightweight process.
The idea is to achieve parallelism by dividing a
process into multiple threads.
• For example, in a browser, multiple tabs can be
different threads. MS Word uses multiple
threads: one thread to format the text, another
thread to process inputs, etc.
Operating System Concepts 41
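A minimal POSIX-threads sketch of a process containing multiple threads; the two task functions are made-up placeholders for illustration, not an API of any particular program.
/* Minimal multithreading sketch with POSIX threads.
 * Compile with: gcc threads.c -pthread */
#include <pthread.h>
#include <stdio.h>

static void *format_text(void *arg)   { printf("formatting text...\n"); return NULL; }
static void *process_input(void *arg) { printf("handling input...\n");  return NULL; }

int main(void) {
    pthread_t t1, t2;
    /* Both threads run within the same process and share its address space. */
    pthread_create(&t1, NULL, format_text, NULL);
    pthread_create(&t2, NULL, process_input, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}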
Process vs Thread?
• The primary difference is that threads within
the same process run in a shared memory
space, while processes run in separate
memory spaces.
Threads are not independent of one another
like processes are, and as a result threads
share with other threads their code section,
data section, and OS resources (like open files
and signals). But, like a process, a thread has its
own program counter (PC), register set, and
stack space. Operating System Concepts 42
Advantages of Thread over Process
• 1. Responsiveness: If a process is divided into multiple
threads and one thread completes its execution, its
output can be returned immediately.
• 2. Faster context switch: Context switch time between
threads is lower compared to process context switch.
Process context switching requires more overhead from
the CPU.
• 3. Effective utilization of multiprocessor system: If we
have multiple threads in a single process, then we can
schedule multiple threads on multiple processors. This
will make process execution faster.
Operating System Concepts 43
Advantages of Thread over Process
• 4. Resource sharing: Resources like code, data, and files
can be shared among all threads within a process.
Note: stack and registers can’t be shared among the
threads. Each thread has its own stack and registers.
• 5. Communication: Communication between multiple
threads is easier, as the threads share a common address
space, while for processes we have to follow some specific
inter-process communication technique.
• 6. Enhanced throughput of the system: If a process is
divided into multiple threads, and each thread function
is considered as one job, then the number of jobs
completed per unit of time is increased, thus increasing
the throughput of the system.
Operating System Concepts 44
Benefits of Threads
• Enhanced system throughput: The number of jobs
completed per unit time increases when the process
is divided into numerous threads, and each thread is
viewed as a job. As a result, the system’s throughput
likewise increases.
• Effective use of a Multiprocessor system: You can
schedule multiple threads in multiple processors
when you have many threads in a single process.
• Faster context switch: The thread context switching
time is shorter than the process context switching
time. The process context switch adds to the CPU’s
workload.
Operating System Concepts 45
Benefits of Threads
• Responsiveness: When a process is divided into many
threads and one of them completes its execution, its
output can be returned immediately, keeping the process responsive.
• Communication: Multiple-thread communication is
straightforward because the threads use the same
address space, while communication between two
processes is limited to a few exclusive communication
mechanisms.
• Resource sharing: Code, data, and files, for example,
can be shared among all threads in a process. Note that
threads cannot share the stack or registers. Each thread
has its own stack and registers.
Operating System Concepts 46
Why Do We Need Thread?
 Creating a new thread in a current process
requires significantly less time than
creating a new process.
 Threads can share common data directly, without
needing explicit inter-process communication.
 When working with threads, context
switching is faster.
 Terminating a thread requires less time
than terminating a process.
Operating System Concepts 47
Types of Threads
• There are two types of threads.
User Level Thread
Kernel Level Thread
Operating System Concepts 48
User-Level Thread
• User-level threads are not visible to the
operating system. User threads are simple to
implement and are implemented by the user.
• The entire process is blocked if one user-level
thread performs a blocking operation.
• The kernel is completely unaware
of user-level threads.
• User-level threads are managed by the kernel as
single-threaded processes.
Operating System Concepts 49
Pros and Cons of ULT
• Pros
1. User threads are easier to implement than kernel threads.
2. Threads at the user level can be used in operating systems that do
not allow threads at the kernel level.
3. User-level thread operations are more effective and efficient
because they need no kernel involvement.
4. Context switching takes less time than with kernel threads.
5. It does not necessitate any changes to the operating system.
6. The representation of user-level threads is relatively
straightforward: the user-level process's address space contains
the registers, stack, PC, and small thread control blocks.
7. Threads may be easily created, switched, and synchronized
without kernel intervention.
• Cons
1. Threads at the user level are not coordinated with the kernel.
2. The entire process is halted if one thread causes a page fault.
Operating System Concepts 51
Kernel-Level Thread
• Kernel threads are recognized and managed by the operating system.
• With kernel-level threads, each thread has its own thread
control block, and each process has its own process control
block in the system.
• The operating system implements the kernel-level
thread. The kernel is aware of all threads and controls
them.
• The kernel provides system calls for creating and
managing threads from user space.
• Kernel threads are more complex to build than user
threads.
• The kernel thread's context switch time is longer.
• If one kernel thread performs a blocking operation,
the execution of another thread can continue.
Operating System Concepts 52
Pros and Cons of KLT
• Pros
1. The kernel is fully aware of all kernel-level
threads.
2. The scheduler may decide to devote extra CPU time to
processes that have a large number of threads.
3. Applications that block frequently should
use kernel-level threads.
• Cons
1. All threads are managed and scheduled by the kernel,
which adds overhead.
2. Kernel threads are more complex to build than user
threads.
3. Kernel-level threads are slower than user-level threads.
Operating System Concepts 53
Interprocess Synchronization
• Background
• The Critical-Section Problem
• Synchronization Hardware
• Semaphores
• Classical Problems of Synchronization
• Critical Regions
• Monitors
• Synchronization in Solaris 2
• Atomic Transactions
Operating System Concepts 54
Background
• Concurrent access to shared data may result in data inconsistency.
• Maintaining data consistency requires mechanisms to ensure the orderly execution
of cooperating processes.
• Shared-memory solution to the bounded-buffer problem (Chapter 4) allows at most n
– 1 items in the buffer at the same time. A solution where all n buffers are used is not
simple.
– Suppose that we modify the producer-consumer code by adding a variable
counter, initialized to 0 and incremented each time a new item is added to the
buffer.
Operating System Concepts 55
What is Process Synchronization in OS?
• An operating system is software that manages
all applications on a device and helps keep
the computer running smoothly.
• Because of this, the operating system has
to perform many tasks, sometimes
simultaneously.
• This isn't usually a problem unless these
simultaneously occurring processes use a
common resource.
Operating System Concepts 56
Example of Process Synchronization in OS
• For example, consider a bank that stores the account
balance of each customer in the same database.
• Now suppose you initially have x rupees in your account.
Now, you take out some amount of money from your
bank account, and at the same time, someone tries to
look at the amount of money stored in your account.
• As you are taking out some money from your account,
after the transaction, the total balance left will be lower
than x.
• But, the transaction takes time, and hence the person
reads x as your account balance which leads to
inconsistent data.
• If, in some way, we could make sure that only one process
accesses the account at a time, we could ensure consistent data.
Operating System Concepts 57
Bounded-Buffer
• Shared data
type item = … ;
var buffer: array [0..n-1] of item;
in, out: 0..n-1;
counter: 0..n;
in, out, counter := 0;
• Producer process
repeat
…
produce an item in nextp
…
while counter = n do no-op;
buffer [in] := nextp;
in := (in + 1) mod n;
counter := counter + 1;
until false;
Operating System Concepts 58
Bounded-Buffer (Cont.)
• Consumer process
repeat
while counter = 0 do no-op;
nextc := buffer [out];
out := (out + 1) mod n;
counter := counter – 1;
…
consume the item in nextc
…
until false;
• The statements:
– counter := counter + 1;
– counter := counter - 1;
must be executed atomically.
Operating System Concepts 59
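To see why, expand each statement into its typical machine-level steps (register1 and register2 are illustrative CPU registers, not names from the slides). Interleaving the two expansions can lose an update:
counter := counter + 1 is implemented as:        counter := counter - 1 is implemented as:
register1 := counter                             register2 := counter
register1 := register1 + 1                       register2 := register2 - 1
counter := register1                             counter := register2

Starting with counter = 5, one possible interleaving is:
T0: producer  register1 := counter           { register1 = 5 }
T1: producer  register1 := register1 + 1     { register1 = 6 }
T2: consumer  register2 := counter           { register2 = 5 }
T3: consumer  register2 := register2 - 1     { register2 = 4 }
T4: producer  counter := register1           { counter = 6 }
T5: consumer  counter := register2           { counter = 4, but the correct value is 5 }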
The Critical-Section Problem
• n processes all competing to use some shared data
• Each process has a code segment, called critical section, in which the
shared data is accessed.
• Problem – ensure that when one process is executing in its critical
section, no other process is allowed to execute in its critical section.
• Structure of process Pi
repeat
entry section
critical section
exit section
remainder section
until false;
Operating System Concepts 60
Solution to Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the
processes that will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes.
Operating System Concepts 61
Initial Attempts to Solve Problem
• Only 2 processes, P0 and P1
• General structure of process Pi (other process Pj)
repeat
entry section
critical section
exit section
remainder section
until false;
• Processes may share some common variables to synchronize their actions.
Operating System Concepts 62
Peterson’s Solution
• Two process solution
• Assume that the LOAD and STORE instructions are
atomic; that is, cannot be interrupted
• The two processes share two variables:
– int turn;
– Boolean flag[2]
• The variable turn indicates whose turn it is to enter
the critical section
• The flag array is used to indicate if a process is ready
to enter the critical section. flag[i] = true implies that
process Pi is ready
do {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = FALSE;
remainder section
} while (TRUE);
• Provable that
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Algorithm for Process Pi
Synchronization Hardware
• Many systems provide hardware support for critical section code
• Uniprocessors – could disable interrupts
– Currently running code would execute without preemption
– Generally too inefficient on multiprocessor systems
• Operating systems using this not broadly scalable
• Modern machines provide special atomic hardware instructions
• Atomic = non-interruptable
– Either test memory word and set value
– Or swap contents of two memory words
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
Solution to Critical-section
Problem Using Locks
TestAndSet Instruction
boolean TestAndSet (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
Solution using TestAndSet
• Shared boolean variable lock, initialized to FALSE
• Solution:
do {
while ( TestAndSet (&lock ))
; // do nothing
// critical section
lock = FALSE;
// remainder section
} while (TRUE);
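On current hardware the same idea is available through atomic instructions; a brief sketch using C11's standard atomic_flag facility, with the lock variable and function names chosen here for illustration.
/* Spinlock built on C11 atomic test-and-set. */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

void acquire(void) {
    /* atomic_flag_test_and_set returns the previous value:
       keep spinning while the lock was already held. */
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy wait (spin) */
}

void release(void) {
    atomic_flag_clear(&lock);                 /* lock = FALSE */
}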
Swap Instruction
void Swap (boolean *a, boolean *b)
{
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution using Swap
• Shared Boolean variable lock initialized to FALSE; Each process has a
local Boolean variable key
• Solution:
do {
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key );
// critical section
lock = FALSE;
// remainder section
} while (TRUE);
Bounded-waiting Mutual Exclusion with TestAndSet()
do { waiting[i] = TRUE;
key = TRUE;
while (waiting[i] && key)
key = TestAndSet(&lock);
waiting[i] = FALSE;
// critical section
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
// remainder section
} while (TRUE);
Semaphore
• Synchronization tool that does not require busy waiting
• Semaphore S – integer variable
• Two standard operations modify S: wait() and signal()
– Originally called P() and V()
• Less complicated
• Can only be accessed via two indivisible (atomic)
operations
– wait (S) {
while (S <= 0)
; // no-op
S--;
}
– signal (S) {
S++;
}
Semaphore as General Synchronization Tool
• Counting semaphore – integer value can range over an
unrestricted domain
• Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
– Also known as mutex locks
• Can implement a counting semaphore S as a binary semaphore
• Provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
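A POSIX sketch of the same mutex pattern, offered as an illustration: sem_wait() and sem_post() play the roles of wait() and signal(), and the shared counter is just an assumed example of a critical section.
/* Mutual exclusion with an unnamed POSIX semaphore initialized to 1
 * (i.e. used as a binary semaphore / mutex). */
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;
static int shared_counter = 0;

void critical_increment(void) {
    sem_wait(&mutex);        /* wait (mutex) */
    shared_counter++;        /* critical section */
    sem_post(&mutex);        /* signal (mutex) */
}

int main(void) {
    sem_init(&mutex, 0, 1);  /* initialized to 1 */
    critical_increment();
    printf("%d\n", shared_counter);
    sem_destroy(&mutex);
    return 0;
}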
Semaphore Implementation
• Must guarantee that no two processes can execute wait () and
signal () on the same semaphore at the same time
• Thus, implementation becomes the critical section problem
where the wait and signal code are placed in the critical section
– Could now have busy waiting in critical section
implementation
• But implementation code is short
• Little busy waiting if critical section rarely occupied
• Note that applications may spend lots of time in critical sections
and therefore this is not a good solution
Busy waiting wastes CPU cycles that some other process might
be able to use productively. This type of semaphore is also known as a
spinlock because the process spins while waiting for the lock.
Semaphore Implementation with no Busy waiting
• With each semaphore there is an associated waiting
queue
• Each entry in a waiting queue has two data items:
– value (of type integer)
– pointer to next record in the list
• Two operations:
– block – place the process invoking the operation on the
appropriate waiting queue
– wakeup – remove one of processes in the waiting
queue and place it in the ready queue
Semaphore Implementation with no Busy waiting
• Definition of semaphore
typedef struct {
int value;
struct process * list;
}semaphore;
• Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block(); }
}
• Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P); }
}
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an event that
can be caused by only one of the waiting processes
• Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
signal (S); signal (Q);
signal (Q); signal (S);
• Starvation – indefinite blocking
– A process may never be removed from the semaphore queue in which it is
suspended
• Priority Inversion – Scheduling problem when lower-priority process holds a
lock needed by higher-priority process
– Solved via priority-inheritance protocol
Classical Problems of Synchronization
• Classical problems used to test newly-proposed
synchronization schemes
– Bounded-Buffer Problem
– Readers and Writers Problem
– Dining-Philosophers Problem
Bounded-Buffer Problem
• N buffers, each can hold one item
• Semaphore mutex initialized to the value 1
• Semaphore full initialized to the value 0
• Semaphore empty initialized to the value N
Bounded Buffer Problem (Cont.)
• The structure of the producer process
do {
// produce an item in nextp
wait (empty);
wait (mutex);
// add the item to the buffer
signal (mutex);
signal (full);
} while (TRUE);
Bounded Buffer Problem
• The structure of the consumer process
do {
wait (full);
wait (mutex);
// remove an item from buffer to nextc
signal (mutex);
signal (empty);
// consume the item in nextc
} while (TRUE);
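A runnable C sketch combining the producer and consumer structures above with POSIX semaphores and a pthread mutex; the buffer size, the integer item type, and the item count are illustrative assumptions.
/* Bounded buffer: empty counts free slots, full counts filled slots,
 * and a mutex protects the buffer indices.
 * Compile with: gcc bounded_buffer.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFSIZE 5
#define ITEMS   20

static int buffer[BUFSIZE];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty_slots);              /* wait (empty) */
        pthread_mutex_lock(&mutex);          /* wait (mutex) */
        buffer[in] = item;                   /* add the item to the buffer */
        in = (in + 1) % BUFSIZE;
        pthread_mutex_unlock(&mutex);        /* signal (mutex) */
        sem_post(&full_slots);               /* signal (full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int n = 0; n < ITEMS; n++) {
        sem_wait(&full_slots);               /* wait (full) */
        pthread_mutex_lock(&mutex);          /* wait (mutex) */
        int item = buffer[out];              /* remove an item from the buffer */
        out = (out + 1) % BUFSIZE;
        pthread_mutex_unlock(&mutex);        /* signal (mutex) */
        sem_post(&empty_slots);              /* signal (empty) */
        printf("consumed %d\n", item);       /* consume the item */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFSIZE);      /* empty initialized to N */
    sem_init(&full_slots, 0, 0);             /* full initialized to 0 */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}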
Readers-Writers Problem
• A data set is shared among a number of concurrent processes
– Readers – only read the data set; they do not perform any
updates
– Writers – can both read and write
• Problem – allow multiple readers to read at the same time
– Only one single writer can access the shared data at the
same time
• Several variations of how readers and writers are treated – all
involve priorities
• Shared Data
– Data set
– Semaphore mutex initialized to 1
– Semaphore wrt initialized to 1
– Integer readcount initialized to 0
Readers-Writers Problem (Cont.)
• The structure of a writer process
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (TRUE);
Readers-Writers Problem (Cont.)
• The structure of a reader process
do {
wait (mutex) ;
readcount ++ ;
if (readcount == 1)
wait (wrt) ;
signal (mutex) ;
// reading is performed
wait (mutex) ;
readcount-- ;
if (readcount == 0)
signal (wrt) ;
signal (mutex) ;
} while (TRUE);
Readers-Writers Problem Variations
• First variation – no reader kept waiting unless writer
has permission to use shared object
• Second variation – once a writer is ready, it performs the
write as soon as possible
• Both may have starvation leading to even more
variations
• Problem is solved on some systems by kernel
providing reader-writer locks
Dining-Philosophers Problem
• Philosophers spend their
lives thinking and eating
• Don’t interact with their
neighbors, occasionally
try to pick up 2
chopsticks (one at a time)
to eat from bowl
– Need both to eat, then release both when done
• In the case of 5 philosophers:
– Shared data
• Bowl of rice (data set)
• Semaphore chopstick[5], initialized to 1
Dining-Philosophers Problem Algorithm
• The structure of Philosopher i:
do {
wait ( chopstick[i] );
wait ( chopstick[ (i + 1) % 5] );
// eat
signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );
// think
} while (TRUE);
• What is the problem with this algorithm?
Problems with Semaphores
• Incorrect use of semaphore operations:
– signal (mutex) …. wait (mutex)
– wait (mutex) … wait (mutex)
– Omitting wait (mutex) or signal
(mutex) (or both)
• Deadlock and starvation
Monitors
• A high-level abstraction that provides a convenient and effective
mechanism for process synchronization
• Abstract data type, internal variables only accessible by code
within the procedure
• Only one process may be active within the monitor at a time
• But not powerful enough to model some synchronization schemes
monitor monitor-name
{ // shared variable declarations
procedure P1 (…) { …. }
procedure Pn (…) {……}
Initialization code (…) { … }
}
Schematic view of a Monitor
Condition Variables
• condition x, y;
• Two operations on a condition variable:
– x.wait () – a process that invokes the
operation is suspended until x.signal ()
– x.signal () – resumes one of processes (if any)
that invoked x.wait ()
• If no process has an x.wait () pending on the variable, then
x.signal () has no effect on the variable
Monitor with Condition Variables
Condition Variables Choices
• If process P invokes x.signal (), with Q in x.wait () state, what should
happen next?
– If Q is resumed, then P must wait
• Options include
– Signal and wait – P waits until Q leaves monitor or waits for
another condition
– Signal and continue – Q waits until P leaves the monitor or waits
for another condition
– Both have pros and cons – language implementer can decide
– Monitors implemented in Concurrent Pascal compromise
• P executing signal immediately leaves the monitor, Q is
resumed
– Implemented in other languages including Mesa, C#, Java
A monitor Solution to Dining Philosophers problem
monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self [5];
void pickup (int i) {
state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}
void putdown (int i) {
state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}
A monitor Solution to Dining Philosophers problem
void test (int i) {
if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal () ;
}
}
initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
• Each philosopher i invokes the operations pickup()
and putdown() in the following sequence:
DiningPhilosophers.pickup (i);
EAT
DiningPhilosophers.putdown (i);
• No deadlock, but starvation is possible
Monitor Implementation Using Semaphores
• Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;
• Each procedure F will be replaced by
wait(mutex);
…
body of F;
…
if (next_count > 0)
signal(next);
else
signal(mutex);
• Mutual exclusion within a monitor is ensured
Monitor Implementation – Condition Variables
• For each condition variable x, we have:
semaphore x_sem; // (initially = 0)
int x_count = 0;
• The operation x.wait can be implemented as:
x_count++;
if (next_count > 0)
signal(next);
else
signal(mutex);
wait(x_sem);
x_count--;
Monitor Implementation (Cont.)
• The operation x.signal can be implemented as:
if (x_count > 0) {
next_count++;
signal(x_sem);
wait(next);
next_count--;
}
Resuming Processes within a Monitor
• If several processes queued on condition x, and
x.signal() executed, which should be resumed?
• FCFS frequently not adequate
• conditional-wait construct of the form x.wait(c)
– Where c is priority number
– Process with lowest number (highest priority) is
scheduled next
A Monitor to Allocate Single Resource
monitor ResourceAllocator
{
boolean busy;
condition x;
void acquire(int time) {
if (busy)
x.wait(time);
busy = TRUE;
}
void release() {
busy = FALSE;
x.signal();
}
initialization code() {
busy = FALSE;
}
}
A Monitor to Allocate Single Resource
• Each process, when requesting an allocation of the
resource, specifies the maximum time it plans to use
the resource.
• A monitor allocates the resource to the process that
has the shortest time allocation request.
• A process that needs to access the resource in
question must observe the following sequence:
R.acquire(t);
…
access the resource;
…
R.release();
where R is an instance of ResourceAllocator
Operating System Concepts 104

More Related Content

PPT
Module-2Deadlock.ppt
PPT
Ch8 OS
 
PDF
Deadlock chapter ppt....................
PDF
ch07.pdf
PDF
Ch7 deadlocks
PPTX
protection and security in operating systems
PPTX
Chapter 6 - Operating System Deadlock.pptx
PPTX
deadlock in OS.pptx
Module-2Deadlock.ppt
Ch8 OS
 
Deadlock chapter ppt....................
ch07.pdf
Ch7 deadlocks
protection and security in operating systems
Chapter 6 - Operating System Deadlock.pptx
deadlock in OS.pptx

Similar to Deadlock in Real Time operating Systempptx (20)

PPT
Deadlock
PPTX
Lecture 6- Deadlocks (1) (1).pptx
PPT
Deadlock principles in operating systems
PPTX
Lecture 6- Deadlocks.pptx
PDF
CH07.pdf
PDF
Chapter 5(five).pdf
PPT
Chapter 7fvsdfghjksdfghjkfghjsdfghjdfghjsdfghjk.ppt
PPT
deadlock part5 unit 2.ppt
PPTX
OSLec14&15(Deadlocksinopratingsystem).pptx
PPT
deadlock.ppt
PPTX
Deadlock - An Operating System Concept.pptx
PPT
Operating System
PPT
14th November - Deadlock Prevention, Avoidance.ppt
PPT
Chapter 7 - Deadlocks
PPT
Mch7 deadlock
PDF
deadlocks for Engenerring for he purpose
PPT
Deadlock
PPT
Deadlocks
PPTX
Module 3 Deadlocks.pptx
PPT
Galvin-operating System(Ch8)
Deadlock
Lecture 6- Deadlocks (1) (1).pptx
Deadlock principles in operating systems
Lecture 6- Deadlocks.pptx
CH07.pdf
Chapter 5(five).pdf
Chapter 7fvsdfghjksdfghjkfghjsdfghjdfghjsdfghjk.ppt
deadlock part5 unit 2.ppt
OSLec14&15(Deadlocksinopratingsystem).pptx
deadlock.ppt
Deadlock - An Operating System Concept.pptx
Operating System
14th November - Deadlock Prevention, Avoidance.ppt
Chapter 7 - Deadlocks
Mch7 deadlock
deadlocks for Engenerring for he purpose
Deadlock
Deadlocks
Module 3 Deadlocks.pptx
Galvin-operating System(Ch8)
Ad

More from ssuserca5764 (12)

PDF
5G-6G_Faculty Developmentand Training-2024.pdf
PPT
RENEWABLE SOLAR ENERGY solarenergypartII.ppt
PPT
Introduction to bioenery and biomass.ppt
PDF
Renewableandnon renewable energysources.pdf
PPTX
Principles of Energy conservation and audit
PPTX
Non conventional Energy Sources introduction
PDF
Digital Communication fundamentals .pdf
PPTX
Combinational logic circuits design and implementation
PPT
Climate Change and Ozone Depletion.ppt
PPTX
Current collction system.pptx
PPTX
Energy conservation Act 2001.pptx
PPTX
PAPR reduction techniques in OFDM.pptx
5G-6G_Faculty Developmentand Training-2024.pdf
RENEWABLE SOLAR ENERGY solarenergypartII.ppt
Introduction to bioenery and biomass.ppt
Renewableandnon renewable energysources.pdf
Principles of Energy conservation and audit
Non conventional Energy Sources introduction
Digital Communication fundamentals .pdf
Combinational logic circuits design and implementation
Climate Change and Ozone Depletion.ppt
Current collction system.pptx
Energy conservation Act 2001.pptx
PAPR reduction techniques in OFDM.pptx
Ad

Recently uploaded (20)

PDF
Performance, energy consumption and costs: a comparative analysis of automati...
PPTX
SE unit 1.pptx by d.y.p.akurdi aaaaaaaaaaaa
PDF
Mechanics of materials week 2 rajeshwari
PPTX
22ME926Introduction to Business Intelligence and Analytics, Advanced Integrat...
PDF
Principles of operation, construction, theory, advantages and disadvantages, ...
PDF
AIGA 012_04 Cleaning of equipment for oxygen service_reformat Jan 12.pdf
PPTX
SC Robotics Team Safety Training Presentation
PPT
Module_1_Lecture_1_Introduction_To_Automation_In_Production_Systems2023.ppt
PPT
Unit - I.lathemachnespct=ificationsand ppt
PPTX
Unit IImachinemachinetoolopeartions.pptx
PDF
MACCAFERRY GUIA GAVIONES TERRAPLENES EN ESPAÑOL
PPT
Programmable Logic Controller PLC and Industrial Automation
PPTX
Module1.pptxrjkeieuekwkwoowkemehehehrjrjrj
PDF
ST MNCWANGO P2 WIL (MEPR302) FINAL REPORT.pdf
PPTX
Solar energy pdf of gitam songa hemant k
PDF
IAE-V2500 Engine for Airbus Family 319/320
PDF
V2500 Owner and Operatore Guide for Airbus
PPTX
INTERNET OF THINGS - EMBEDDED SYSTEMS AND INTERNET OF THINGS
PDF
LS-6-Digital-Literacy (1) K12 CURRICULUM .pdf
PDF
Software defined netwoks is useful to learn NFV and virtual Lans
Performance, energy consumption and costs: a comparative analysis of automati...
SE unit 1.pptx by d.y.p.akurdi aaaaaaaaaaaa
Mechanics of materials week 2 rajeshwari
22ME926Introduction to Business Intelligence and Analytics, Advanced Integrat...
Principles of operation, construction, theory, advantages and disadvantages, ...
AIGA 012_04 Cleaning of equipment for oxygen service_reformat Jan 12.pdf
SC Robotics Team Safety Training Presentation
Module_1_Lecture_1_Introduction_To_Automation_In_Production_Systems2023.ppt
Unit - I.lathemachnespct=ificationsand ppt
Unit IImachinemachinetoolopeartions.pptx
MACCAFERRY GUIA GAVIONES TERRAPLENES EN ESPAÑOL
Programmable Logic Controller PLC and Industrial Automation
Module1.pptxrjkeieuekwkwoowkemehehehrjrjrj
ST MNCWANGO P2 WIL (MEPR302) FINAL REPORT.pdf
Solar energy pdf of gitam songa hemant k
IAE-V2500 Engine for Airbus Family 319/320
V2500 Owner and Operatore Guide for Airbus
INTERNET OF THINGS - EMBEDDED SYSTEMS AND INTERNET OF THINGS
LS-6-Digital-Literacy (1) K12 CURRICULUM .pdf
Software defined netwoks is useful to learn NFV and virtual Lans

Deadlock in Real Time operating Systempptx

  • 1. Chapter No 5: Deadlocks • System Model • Deadlock Characterization • Methods for Handling Deadlocks • Deadlock Prevention • Deadlock Avoidance • Deadlock Detection • Recovery from Deadlock • Combined Approach to Deadlock Handling Operating System Concepts 1
  • 2. The Deadlock Problem • A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set. • Example – System has 2 tape drives. – P1 and P2 each hold one tape drive and each needs another one. • Example – semaphores A and B, initialized to 1 P0 P1 wait (A); wait(B) wait (B); wait(A) Operating System Concepts 2
  • 3. Bridge Crossing Example • Traffic only in one direction. • Each section of a bridge can be viewed as a resource. • If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback). • Several cars may have to be backed upif a deadlock occurs. • Starvation is possible. Operating System Concepts 3
  • 4. System Model • Resource types R1, R2, . . ., Rm CPU cycles, memory space, I/O devices • Each resource type Ri has Wi instances. • Each process utilizes a resource as follows: – request – use – release Operating System Concepts 4
  • 5. Deadlock Characterization • Mutual exclusion: only one process at a time can use a resource. • Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes. • No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task. • Circular wait: there exists a set {P0, P1, …, P0} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and P0 is waiting for a resource that is held by P0. Operating System Concepts Deadlock can arise if four conditions hold simultaneously. 5
  • 6. Resource-Allocation Graph • V is partitioned into two types: – P = {P1, P2, …, Pn}, the set consisting of all the processes in the system. – R = {R1, R2, …, Rm}, the set consisting of all resource types in the system. • request edge – directed edge P1  Rj • assignment edge – directed edge Rj  Pi Operating System Concepts A set of vertices V and a set of edges E. 6
  • 7. Resource-Allocation Graph (Cont.) • Process • Resource Type with 4 instances • Pi requests instance of Rj • Pi is holding an instance of Rj Operating System Concepts Pi Pi Rj Rj 7
  • 8. Example of a Resource Allocation Graph Operating System Concepts 8
  • 9. Resource Allocation Graph With A Deadlock Operating System Concepts 9
  • 10. Resource Allocation Graph With A Cycle But No Deadlock Operating System Concepts 10
  • 11. Basic Facts • If graph contains no cycles  no deadlock. • If graph contains a cycle  – if only one instance per resource type, then deadlock. – if several instances per resource type, possibility of deadlock. Operating System Concepts 11
  • 12. Methods for Handling Deadlocks • Ensure that the system will never enter a deadlock state. • Allow the system to enter a deadlock state and then recover. • Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX. Operating System Concepts 12
  • 13. Deadlock Prevention • Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources. • Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources. – Require process to request and be allocated all its resources before it begins execution, or allow process to request resources only when the process has none. – Low resource utilization; starvation possible. Operating System Concepts Restrain the ways request can be made. 13
  • 14. Deadlock Prevention (Cont.) • No Preemption – – If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released. – Preempted resources are added to the list of resources for which the process is waiting. – Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting. • Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration. Operating System Concepts 14
  • 15. Deadlock Avoidance • Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. • The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular- wait condition. • Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes. Operating System Concepts Requires that the system has some additional a priori information available. 15
  • 16. Safe State • When a process requests an available resource, system must decide if immediate allocation leaves the system in a safe state. • System is in safe state if there exists a safe sequence of all processes. • Sequence <P1, P2, …, Pn> is safe if for each Pi, the resources that Pi can still request can be satisfied by currently available resources + resources held by all the Pj, with j<I. – If Pi resource needs are not immediately available, then Pi can wait until all Pj have finished. – When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate. – When Pi terminates, Pi+1 can obtain its needed resources, and so on. Operating System Concepts 16
  • 17. Basic Facts • If a system is in safe state  no deadlocks. • If a system is in unsafe state  possibility of deadlock. • Avoidance  ensure that a system will never enter an unsafe state. Operating System Concepts 17
  • 18. Safe, unsafe , deadlock state spaces Operating System Concepts 18
  • 19. Resource-Allocation Graph Algorithm • Claim edge Pi  Rj indicated that process Pj may request resource Rj; represented by a dashed line. • Claim edge converts to request edge when a process requests a resource. • When a resource is released by a process, assignment edge reconverts to a claim edge. • Resources must be claimed a priori in the system. Operating System Concepts 19
  • 20. Resource-Allocation Graph For Deadlock Avoidance Operating System Concepts 20
  • 21. Unsafe State In A Resource-Allocation Graph Operating System Concepts 21
  • 22. Banker’s Algorithm • Multiple instances. • Each process must a priori claim maximum use. • When a process requests a resource it may have to wait. • When a process gets all its resources it must return them in a finite amount of time. Operating System Concepts 22
  • 23. Data Structures for the Banker’s Algorithm • Available: Vector of length m. If available [j] = k, there are k instances of resource type Rj available. • Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj. • Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of Rj. • Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task. Need [i,j] = Max[i,j] – Allocation [i,j]. Operating System Concepts Let n = number of processes, and m = number of resources types. 23
  • 24. Safety Algorithm 1. Let Work and Finish be vectors of length m and n, respectively. Initialize: Work := Available Finish [i] = false for i - 1,3, …, n. 2. Find and i such that both: (a) Finish [i] = false (b) Needi  Work If no such i exists, go to step 4. 3. Work := Work + Allocationi Finish[i] := true go to step 2. 4. If Finish [i] = true for all i, then the system is in a safe state. Operating System Concepts 24
  • 25. Resource-Request Algorithm for Process Pi Requesti = request vector for process Pi. If Requesti [j] = k then process Pi wants k instances of resource type Rj. 1. If Requesti  Needi go to step 2. Otherwise, raise error condition, since process has exceeded its maximum claim. 2. If Requesti  Available, go to step 3. Otherwise Pi must wait, since resources are not available. 3. Pretend to allocate requested resources to Pi by modifying the state as follows: Available := Available = Requesti; Allocationi := Allocationi + Requesti; Needi := Needi – Requesti;; • If safe  the resources are allocated to Pi. • If unsafe  Pi must wait, and the old resource-allocation state is restored Operating System Concepts 25
  • 26. Example of Banker’s Algorithm • 5 processes P0 through P4; 3 resource types A (10 instances), B (5instances, and C (7 instances). • Snapshot at time T0: Allocation Max Available A B C A B C A B C P0 0 1 0 7 5 3 3 3 2 P1 2 0 0 3 2 2 P2 3 0 2 9 0 2 P3 2 1 1 2 2 2 P4 0 0 2 4 3 3 Operating System Concepts 26
  • 27. Example (Cont.) • The content of the matrix. Need is defined to be Max – Allocation. Need A B C P0 7 4 3 P1 1 2 2 P2 6 0 0 P3 0 1 1 P4 4 3 1 • The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies safety criteria. Operating System Concepts 27
  • 28. Example (Cont.): P1 request (1,0,2) • Check that Request  Available (that is, (1,0,2)  (3,3,2)  true. Allocation Need Available A B C A B C A B C P0 0 1 0 7 4 3 2 3 0 P1 3 0 2 0 2 0 P2 3 0 1 6 0 0 P3 2 1 1 0 1 1 P4 0 0 2 4 3 1 • Executing safety algorithm shows that sequence <P1, P3, P4, P0, P2> satisfies safety requirement. • Can request for (3,3,0) by P4 be granted? • Can request for (0,2,0) by P0 be granted? Operating System Concepts 28
  • 29. Deadlock Detection • Allow system to enter deadlock state • Detection algorithm • Recovery scheme Operating System Concepts 29
  • 30. Single Instance of Each Resource Type • Maintain wait-for graph – Nodes are processes. – Pi  Pj if Pi is waiting for Pj. • Periodically invoke an algorithm that searches for acycle in the graph. • An algorithm to detect a cycle in a graph requires an order of n2 operations, where n is the number of vertices in the graph. Operating System Concepts 30
  • 31. Resource-Allocation Graph And Wait-for Graph Operating System Concepts Resource-Allocation Graph Corresponding wait-for graph 31
  • 32. Several Instances of a Resource Type • Available: A vector of length m indicates the number of available resources of each type. • Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. • Request: An n x m matrix indicates the current request of each process. If Request [ij] = k, then process Pi is requesting k more instances of resource type. Rj. Operating System Concepts 32
  • 33. Detection Algorithm 1. Let Work and Finish be vectors of length m and n, respectively Initialize: (a) Work :- Available (b) For i = 1,2, …, n, if Allocationi  0, then Finish[i] := false;otherwise, Finish[i] := true. 2. Find an index i such that both: (a) Finish[i] = false (b) Requesti  Work If no such i exists, go to step 4. Operating System Concepts 33
  • 34. Detection Algorithm (Cont.) 3. Work := Work + Allocationi Finish[i] := true go to step 2. 4. If Finish[i] = false, for some i, 1  i  n, then the system is in deadlock state. Moreover, if Finish[i] = false, then Pi is deadlocked. Operating System Concepts Algorithm requires an order of m x n2 operations to detect whether the system is in deadlocked state. 34
  • 35. Example of Detection Algorithm • Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C (6 instances). • Snapshot at time T0: Allocation Request Available A B C A B C A B C P0 0 1 0 0 0 0 0 0 0 P1 2 0 0 2 0 2 P2 3 0 3 0 0 0 P3 2 1 1 1 0 0 P4 0 0 2 0 0 2 • Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i. Operating System Concepts 35
  • 36. Example (Cont.) • P2 requests an additional instance of type C. Request A B C P0 0 0 0 P1 2 0 1 P2 0 0 1 P3 1 0 0 P4 0 0 2 • State of system? – Can reclaim resources held by process P0, but insufficient resources to fulfill other processes; requests. – Deadlock exists, consisting of processes P1, P2, P3, and P4. Operating System Concepts 36
  • 37. Detection-Algorithm Usage • When, and how often, to invoke depends on: – How often a deadlock is likely to occur? – How many processes will need to be rolled back? • one for each disjoint cycle • If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and so we would not be able to tell which of the many deadlocked processes “caused” the deadlock. Operating System Concepts 37
  • 38. Recovery from Deadlock: Process Termination • Abort all deadlocked processes. • Abort one process at a time until the deadlock cycle is eliminated. • In which order should we choose to abort? – Priority of the process. – How long process has computed, and how much longer to completion. – Resources the process has used. – Resources process needs to complete. – How many processes will need to be terminated. – Is process interactive or batch? Operating System Concepts 38
  • 39. Recovery from Deadlock: Resource Preemption • Selecting a victim – minimize cost. • Rollback – return to some safe state, restart process fro that state. • Starvation – same process may always be picked as victim, include number of rollback in cost factor. Operating System Concepts 39
  • 40. Combined Approach to Deadlock Handling • Combine the three basic approaches – prevention – avoidance – detection allowing the use of the optimal approach for each of resources in the system. • Partition resources into hierarchically ordered classes. • Use most appropriate technique for handling deadlocks within each class. Operating System Concepts 40
  • 41. What is a Thread? • A thread is a path of execution within a process. A process can contain multiple threads. • Why Multithreading? A thread is also known as lightweight process. The idea is to achieve parallelism by dividing a process into multiple threads. • For example, in a browser, multiple tabs can be different threads. MS Word uses multiple threads: one thread to format the text, another thread to process inputs, etc. Operating System Concepts 41
  • 42. Process vs Thread? • The primary difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces. Threads are not independent of one another like processes are, and as a result threads share with other threads their code section, data section, and OS resources (like open files and signals). But, like process, a thread has its own program counter (PC), register set, and stack space. Operating System Concepts 42
  • 43. Advantages of Thread over Process • 1. Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately. • 2. Faster context switch: Context-switch time between threads is lower than between processes; process context switching requires more overhead from the CPU. • 3. Effective utilization of a multiprocessor system: If we have multiple threads in a single process, we can schedule them on multiple processors. This makes process execution faster. Operating System Concepts 43
  • 44. Advantages of Thread over Process • 4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process. Note: the stack and registers can’t be shared among the threads; each thread has its own stack and registers. • 5. Communication: Communication between multiple threads is easier because the threads share a common address space, while processes must use a specific inter-process communication mechanism to communicate with each other. • 6. Enhanced throughput of the system: If a process is divided into multiple threads, and each thread function is considered as one job, then the number of jobs completed per unit of time increases, thus increasing the throughput of the system. Operating System Concepts 44
  • 45. Benefits of Threads • Enhanced system throughput: The number of jobs completed per unit time increases when the process is divided into numerous threads and each thread is viewed as a job. As a result, the system’s throughput likewise increases. • Effective use of a multiprocessor system: You can schedule multiple threads on multiple processors when you have many threads in a single process. • Faster context switch: The thread context-switching time is shorter than the process context-switching time; a process context switch adds more to the CPU’s workload. Operating System Concepts 45
  • 46. Benefits of Threads • Responsiveness: When a process is divided into multiple threads, the result of each thread that finishes its work can be returned immediately, so the process stays responsive. • Communication: Multiple-thread communication is straightforward because the threads use the same address space, while communication between two processes requires dedicated inter-process communication mechanisms. • Resource sharing: Code, data, and files, for example, can be shared among all threads in a process. Note that threads cannot share the stack or registers; each thread has its own stack and registers. Operating System Concepts 46
  • 47. Why Do We Need Thread? • Creating a new thread in an existing process requires significantly less time than creating a new process. • Threads share common data directly, so they do not need inter-process communication. • When working with threads, context switching is faster. • Terminating a thread requires less time than terminating a process. Operating System Concepts 47
  • 48. Types of Threads • There are two types of threads: – User-Level Thread – Kernel-Level Thread Operating System Concepts 48
  • 49. User-Level Thread • User-level threads are implemented and managed entirely in user space; the operating system ignores them. User threads are simple to implement, and it is the user who implements them. • If a user-level thread performs a blocking operation, the entire process is blocked. • The kernel is completely unaware of user-level threads. • The kernel manages user-level threads as if they were single-threaded processes. Operating System Concepts 49
  • 51. Pros and Cons of ULT • Pros 1. User threads are easier to implement than kernel threads. 2. User-level threads can be used on operating systems that do not support threads at the kernel level. 3. They are more effective and efficient. 4. Context switching takes less time than with kernel threads. 5. They do not require any changes to the operating system. 6. The representation of user-level threads is relatively straightforward: the registers, stack, PC, and small thread control blocks all live in the user process’s address space. 7. Threads may be easily created, switched, and synchronized without kernel intervention. • Cons 1. Threads at the user level are not coordinated with the kernel. 2. The entire process is blocked if one thread causes a page fault. Operating System Concepts 51
  • 52. Kernel-Level Thread • Kernel threads are recognized and managed by the operating system. • With kernel-level threads, each thread has its own thread control block and each process has its own process control block in the system. • The operating system implements kernel-level threads; the kernel is aware of all threads and controls them. • The kernel provides system calls for creating and managing threads from user space. • Kernel threads are more complex to build than user threads. • The kernel thread’s context-switch time is longer. • If one kernel thread performs a blocking operation, the other threads of the process can continue execution. Operating System Concepts 52
  • 53. Pros and Cons of KLT • Pros 1. The kernel has full knowledge of all kernel-level threads. 2. The scheduler may decide to give more CPU time to processes that have a large number of threads. 3. Applications that block frequently should use kernel-level threads. • Cons 1. All threads are managed and scheduled by the kernel. 2. Kernel threads are more complex to build than user threads. 3. Kernel-level threads are slower than user-level threads. Operating System Concepts 53
  • 54. Interprocess Synchronization • Background • The Critical-Section Problem • Synchronization Hardware • Semaphores • Classical Problems of Synchronization • Critical Regions • Monitors • Synchronization in Solaris 2 • Atomic Transactions Operating System Concepts 54
  • 55. Background • Concurrent access to shared data may result in data inconsistency. • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes. • The shared-memory solution to the bounded-buffer problem (Chapter 4) allows at most n – 1 items in the buffer at the same time. A solution where all N buffers are used is not simple. – Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0 and incremented each time a new item is added to the buffer. Operating System Concepts 55
  • 56. What is Process Synchronization in OS? • An operating system is software that manages all applications on a device and basically helps in the smooth functioning of our computer. • Because of this, the operating system has to perform many tasks, sometimes simultaneously. • This isn't usually a problem unless these simultaneously occurring processes use a common resource. Operating System Concepts 56
  • 57. Example of Process Synchronization in OS • For example, consider a bank that stores the account balance of each customer in the same database. • Now suppose you initially have x rupees in your account. Now, you take out some amount of money from your bank account, and at the same time, someone tries to look at the amount of money stored in your account. • As you are taking out some money from your account, after the transaction, the total balance left will be lower than x. • But the transaction takes time, and hence the other person still reads x as your account balance, which leads to inconsistent data. • If, in some way, we could make sure that only one process accesses the account at a time, we could ensure consistent data. Operating System Concepts 57
• 58. Bounded-Buffer
  • Shared data
        type item = … ;
        var buffer: array [0..n-1] of item;
            in, out: 0..n-1;
            counter: 0..n;
        in, out, counter := 0;
  • Producer process
        repeat
            …
            produce an item in nextp
            …
            while counter = n do no-op;
            buffer[in] := nextp;
            in := (in + 1) mod n;
            counter := counter + 1;
        until false;
  Operating System Concepts 58
• 59. Bounded-Buffer (Cont.)
  • Consumer process
        repeat
            while counter = 0 do no-op;
            nextc := buffer[out];
            out := (out + 1) mod n;
            counter := counter – 1;
            …
            consume the item in nextc
            …
        until false;
  • The statements
      – counter := counter + 1;
      – counter := counter – 1;
    must be executed atomically.
  Operating System Concepts 59
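• A hedged illustration (not from the slides) of why the two statements must be atomic: two pthreads updating a shared counter without synchronization usually do not end at 0, because counter++ and counter-- interleave at the machine level.

      /* Sketch of the lost-update problem: two threads incrementing and decrementing a
       * shared counter with no synchronization.  The final value is often not 0. */
      #include <pthread.h>
      #include <stdio.h>

      #define ITERATIONS 1000000
      int counter = 0;                       /* shared, unprotected */

      void *producer(void *arg) {
          for (int i = 0; i < ITERATIONS; i++) counter++;   /* counter := counter + 1 */
          return NULL;
      }

      void *consumer(void *arg) {
          for (int i = 0; i < ITERATIONS; i++) counter--;   /* counter := counter - 1 */
          return NULL;
      }

      int main(void) {
          pthread_t p, c;
          pthread_create(&p, NULL, producer, NULL);
          pthread_create(&c, NULL, consumer, NULL);
          pthread_join(p, NULL);
          pthread_join(c, NULL);
          printf("counter = %d (expected 0)\n", counter);
          return 0;
      }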
• 60. The Critical-Section Problem
  • n processes all competing to use some shared data.
  • Each process has a code segment, called the critical section, in which the shared data is accessed.
  • Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
  • Structure of process Pi
        repeat
            entry section
            critical section
            exit section
            remainder section
        until false;
  Operating System Concepts 60
  • 61. Solution to Critical-Section Problem 1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. 2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. 3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. • Assume that each process executes at a nonzero speed. • No assumption concerning the relative speed of the n processes. Operating System Concepts 61
• 62. Initial Attempts to Solve Problem
  • Only 2 processes, P0 and P1
  • General structure of process Pi (other process Pj)
        repeat
            entry section
            critical section
            exit section
            remainder section
        until false;
  • Processes may share some common variables to synchronize their actions.
  Operating System Concepts 62
  • 63. Peterson’s Solution • Two process solution • Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted • The two processes share two variables: – int turn; – Boolean flag[2] • The variable turn indicates whose turn it is to enter the critical section • The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready
• 64. Algorithm for Process Pi
        do {
            flag[i] = TRUE;
            turn = j;
            while (flag[j] && turn == j)
                ;                        // busy wait
            // critical section
            flag[i] = FALSE;
            // remainder section
        } while (TRUE);
  • Provable that
    1. Mutual exclusion is preserved
    2. Progress requirement is satisfied
    3. Bounded-waiting requirement is met
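• A sketch of the algorithm as a runnable C program, assuming (as the slide does) that loads and stores are atomic and not reordered; on modern hardware a production version would need memory fences or C11 atomics. The thread bodies and the shared counter are invented for the example.

      /* Peterson's algorithm for two threads (i = 0, j = 1). */
      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>

      volatile bool flag[2] = { false, false };
      volatile int  turn = 0;
      int counter = 0;                        /* shared data protected by the algorithm */

      void *run(void *arg) {
          int i = *(int *)arg, j = 1 - i;
          for (int k = 0; k < 100000; k++) {
              flag[i] = true;                 /* I am ready to enter */
              turn = j;                       /* give the other thread priority */
              while (flag[j] && turn == j)
                  ;                           /* busy-wait while the other is inside */
              counter++;                      /* critical section */
              flag[i] = false;                /* exit section */
          }
          return NULL;
      }

      int main(void) {
          pthread_t t0, t1;
          int id0 = 0, id1 = 1;
          pthread_create(&t0, NULL, run, &id0);
          pthread_create(&t1, NULL, run, &id1);
          pthread_join(t0, NULL);
          pthread_join(t1, NULL);
          printf("counter = %d (expected 200000)\n", counter);
          return 0;
      }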
  • 65. Synchronization Hardware • Many systems provide hardware support for critical-section code • Uniprocessors – could disable interrupts – Currently running code would execute without preemption – Generally too inefficient on multiprocessor systems • Operating systems using this approach are not broadly scalable • Modern machines provide special atomic hardware instructions • Atomic = non-interruptible – Either test a memory word and set its value – Or swap the contents of two memory words
• 66. Solution to Critical-Section Problem Using Locks
        do {
            acquire lock
                critical section
            release lock
                remainder section
        } while (TRUE);
• 68. TestAndSet Instruction
        boolean TestAndSet (boolean *target) {
            boolean rv = *target;
            *target = TRUE;
            return rv;
        }
• 69. Solution using TestAndSet
  • Shared boolean variable lock, initialized to FALSE
  • Solution:
        do {
            while (TestAndSet(&lock))
                ;                        // do nothing
            // critical section
            lock = FALSE;
            // remainder section
        } while (TRUE);
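• As a hedged sketch, the same spinlock expressed with C11's atomic_flag, whose atomic_flag_test_and_set() plays the role of the TestAndSet instruction shown above; this is a substitution for illustration, not the slide's hardware instruction itself.

      /* Spinlock from slide 69 built on C11 atomics. */
      #include <stdatomic.h>
      #include <pthread.h>
      #include <stdio.h>

      atomic_flag lock = ATOMIC_FLAG_INIT;    /* initialized to FALSE (clear) */
      int counter = 0;

      void *run(void *arg) {
          for (int k = 0; k < 100000; k++) {
              while (atomic_flag_test_and_set(&lock))
                  ;                           /* spin: do nothing while lock is held */
              counter++;                      /* critical section */
              atomic_flag_clear(&lock);       /* lock = FALSE */
          }
          return NULL;
      }

      int main(void) {
          pthread_t t0, t1;
          pthread_create(&t0, NULL, run, NULL);
          pthread_create(&t1, NULL, run, NULL);
          pthread_join(t0, NULL);
          pthread_join(t1, NULL);
          printf("counter = %d (expected 200000)\n", counter);
          return 0;
      }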
• 70. Swap Instruction
        void Swap (boolean *a, boolean *b) {
            boolean temp = *a;
            *a = *b;
            *b = temp;
        }
• 71. Solution using Swap
  • Shared Boolean variable lock initialized to FALSE; each process has a local Boolean variable key
  • Solution:
        do {
            key = TRUE;
            while (key == TRUE)
                Swap(&lock, &key);
            // critical section
            lock = FALSE;
            // remainder section
        } while (TRUE);
• 72. Bounded-waiting Mutual Exclusion with TestAndSet()
        do {
            waiting[i] = TRUE;
            key = TRUE;
            while (waiting[i] && key)
                key = TestAndSet(&lock);
            waiting[i] = FALSE;
            // critical section
            j = (i + 1) % n;
            while ((j != i) && !waiting[j])
                j = (j + 1) % n;
            if (j == i)
                lock = FALSE;
            else
                waiting[j] = FALSE;
            // remainder section
        } while (TRUE);
• 73. Semaphore
  • Synchronization tool that does not require busy waiting
  • Semaphore S – integer variable
  • Two standard operations modify S: wait() and signal() – Originally called P() and V()
  • Less complicated
  • Can only be accessed via two indivisible (atomic) operations
        wait (S) {
            while (S <= 0)
                ;          // no-op
            S--;
        }
        signal (S) {
            S++;
        }
• 74. Semaphore as General Synchronization Tool
  • Counting semaphore – integer value can range over an unrestricted domain
  • Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement – Also known as mutex locks
  • Can implement a counting semaphore S as a binary semaphore
  • Provides mutual exclusion:
        Semaphore mutex;   // initialized to 1
        do {
            wait (mutex);
            // Critical Section
            signal (mutex);
            // remainder section
        } while (TRUE);
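• A small illustrative sketch (not from the slides) of the same mutex pattern using POSIX semaphores, where sem_wait() and sem_post() stand in for wait() and signal().

      /* Mutual exclusion with a POSIX binary semaphore. */
      #include <semaphore.h>
      #include <pthread.h>
      #include <stdio.h>

      sem_t mutex;                            /* binary semaphore, initialized to 1 */
      int counter = 0;

      void *run(void *arg) {
          for (int k = 0; k < 100000; k++) {
              sem_wait(&mutex);               /* wait (mutex) */
              counter++;                      /* critical section */
              sem_post(&mutex);               /* signal (mutex) */
          }
          return NULL;
      }

      int main(void) {
          pthread_t t0, t1;
          sem_init(&mutex, 0, 1);
          pthread_create(&t0, NULL, run, NULL);
          pthread_create(&t1, NULL, run, NULL);
          pthread_join(t0, NULL);
          pthread_join(t1, NULL);
          printf("counter = %d (expected 200000)\n", counter);
          sem_destroy(&mutex);
          return 0;
      }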
  • 75. Semaphore Implementation • Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time • Thus, the implementation becomes the critical-section problem where the wait and signal code are placed in the critical section – Could now have busy waiting in the critical-section implementation • But the implementation code is short • Little busy waiting if the critical section is rarely occupied • Note that applications may spend lots of time in critical sections, so this is not a good solution there • Busy waiting wastes CPU cycles that some other process might be able to use productively; this type of semaphore is also known as a spinlock because the process spins while waiting for the lock
  • 76. Semaphore Implementation with no Busy waiting • With each semaphore there is an associated waiting queue • Each entry in a waiting queue has two data items: – value (of type integer) – pointer to next record in the list • Two operations: – block – place the process invoking the operation on the appropriate waiting queue – wakeup – remove one of the processes in the waiting queue and place it in the ready queue
• 77. Semaphore Implementation with no Busy waiting
  • Definition of semaphore:
        typedef struct {
            int value;
            struct process *list;
        } semaphore;
  • Implementation of wait:
        wait(semaphore *S) {
            S->value--;
            if (S->value < 0) {
                add this process to S->list;
                block();
            }
        }
  • Implementation of signal:
        signal(semaphore *S) {
            S->value++;
            if (S->value <= 0) {
                remove a process P from S->list;
                wakeup(P);
            }
        }
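• A hedged C sketch of a blocking semaphore in the spirit of this slide, realizing block() and wakeup() with a pthread mutex and condition variable; it uses the conventional non-negative counter formulation (rather than the slide's negative count) so that spurious wakeups are handled correctly. The names my_sem_init, my_wait, and my_signal are invented for the example.

      #include <pthread.h>

      typedef struct {
          int value;
          pthread_mutex_t lock;               /* protects value and the wait queue */
          pthread_cond_t  queue;              /* the list of waiting processes */
      } semaphore;

      void my_sem_init(semaphore *S, int value) {
          S->value = value;
          pthread_mutex_init(&S->lock, NULL);
          pthread_cond_init(&S->queue, NULL);
      }

      void my_wait(semaphore *S) {
          pthread_mutex_lock(&S->lock);
          while (S->value <= 0)               /* loop guards against spurious wakeups */
              pthread_cond_wait(&S->queue, &S->lock);   /* block() */
          S->value--;
          pthread_mutex_unlock(&S->lock);
      }

      void my_signal(semaphore *S) {
          pthread_mutex_lock(&S->lock);
          S->value++;
          pthread_cond_signal(&S->queue);     /* wakeup(P) */
          pthread_mutex_unlock(&S->lock);
      }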
• 78. Deadlock and Starvation
  • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes
  • Let S and Q be two semaphores initialized to 1
            P0                  P1
          wait (S);           wait (Q);
          wait (Q);           wait (S);
            .                   .
          signal (S);         signal (Q);
          signal (Q);         signal (S);
  • Starvation – indefinite blocking – A process may never be removed from the semaphore queue in which it is suspended
  • Priority Inversion – Scheduling problem when a lower-priority process holds a lock needed by a higher-priority process – Solved via a priority-inheritance protocol
  • 79. Classical Problems of Synchronization • Classical problems used to test newly-proposed synchronization schemes – Bounded-Buffer Problem – Readers and Writers Problem – Dining-Philosophers Problem
  • 80. Bounded-Buffer Problem • N buffers, each can hold one item • Semaphore mutex initialized to the value 1 • Semaphore full initialized to the value 0 • Semaphore empty initialized to the value N
• 81. Bounded Buffer Problem (Cont.)
  • The structure of the producer process
        do {
            // produce an item in nextp
            wait (empty);
            wait (mutex);
            // add the item to the buffer
            signal (mutex);
            signal (full);
        } while (TRUE);
• 82. Bounded Buffer Problem
  • The structure of the consumer process
        do {
            wait (full);
            wait (mutex);
            // remove an item from buffer to nextc
            signal (mutex);
            signal (empty);
            // consume the item in nextc
        } while (TRUE);
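• Putting the two structures together, a sketch of a complete bounded-buffer program under stated assumptions: POSIX semaphores and pthreads, buffer size 8, integer items, and fixed iteration counts, none of which come from the slides.

      #include <semaphore.h>
      #include <pthread.h>
      #include <stdio.h>

      #define N 8
      int buffer[N];
      int in = 0, out = 0;

      sem_t empty;                            /* counts free slots, initialized to N */
      sem_t full;                             /* counts filled slots, initialized to 0 */
      sem_t mutex;                            /* protects buffer, in, out */

      void *producer(void *arg) {
          for (int item = 0; item < 32; item++) {
              sem_wait(&empty);               /* wait (empty) */
              sem_wait(&mutex);               /* wait (mutex) */
              buffer[in] = item;              /* add the item to the buffer */
              in = (in + 1) % N;
              sem_post(&mutex);               /* signal (mutex) */
              sem_post(&full);                /* signal (full) */
          }
          return NULL;
      }

      void *consumer(void *arg) {
          for (int k = 0; k < 32; k++) {
              sem_wait(&full);                /* wait (full) */
              sem_wait(&mutex);               /* wait (mutex) */
              int item = buffer[out];         /* remove an item from the buffer */
              out = (out + 1) % N;
              sem_post(&mutex);               /* signal (mutex) */
              sem_post(&empty);               /* signal (empty) */
              printf("consumed %d\n", item);  /* consume the item */
          }
          return NULL;
      }

      int main(void) {
          pthread_t p, c;
          sem_init(&empty, 0, N);
          sem_init(&full, 0, 0);
          sem_init(&mutex, 0, 1);
          pthread_create(&p, NULL, producer, NULL);
          pthread_create(&c, NULL, consumer, NULL);
          pthread_join(p, NULL);
          pthread_join(c, NULL);
          return 0;
      }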
  • 83. Readers-Writers Problem • A data set is shared among a number of concurrent processes – Readers – only read the data set; they do not perform any updates – Writers – can both read and write • Problem – allow multiple readers to read at the same time – Only a single writer can access the shared data at any one time • Several variations of how readers and writers are treated – all involve priorities • Shared Data – Data set – Semaphore mutex initialized to 1 – Semaphore wrt initialized to 1 – Integer readcount initialized to 0
• 84. Readers-Writers Problem (Cont.)
  • The structure of a writer process
        do {
            wait (wrt);
            // writing is performed
            signal (wrt);
        } while (TRUE);
• 85. Readers-Writers Problem (Cont.)
  • The structure of a reader process
        do {
            wait (mutex);
            readcount++;
            if (readcount == 1)
                wait (wrt);
            signal (mutex);
            // reading is performed
            wait (mutex);
            readcount--;
            if (readcount == 0)
                signal (wrt);
            signal (mutex);
        } while (TRUE);
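• A sketch of this first readers-writers solution as a runnable program, assuming POSIX semaphores and pthreads; shared_data, the thread counts, and the thread bodies are invented for the example.

      #include <semaphore.h>
      #include <pthread.h>
      #include <stdio.h>

      sem_t mutex;                            /* protects readcount, initialized to 1 */
      sem_t wrt;                              /* writer exclusion, initialized to 1 */
      int readcount = 0;
      int shared_data = 0;

      void *writer(void *arg) {
          sem_wait(&wrt);
          shared_data++;                      /* writing is performed */
          sem_post(&wrt);
          return NULL;
      }

      void *reader(void *arg) {
          sem_wait(&mutex);
          readcount++;
          if (readcount == 1) sem_wait(&wrt); /* first reader locks out writers */
          sem_post(&mutex);

          printf("read %d\n", shared_data);   /* reading is performed */

          sem_wait(&mutex);
          readcount--;
          if (readcount == 0) sem_post(&wrt); /* last reader lets writers in */
          sem_post(&mutex);
          return NULL;
      }

      int main(void) {
          pthread_t r[3], w;
          sem_init(&mutex, 0, 1);
          sem_init(&wrt, 0, 1);
          for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
          pthread_create(&w, NULL, writer, NULL);
          for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
          pthread_join(w, NULL);
          return 0;
      }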
  • 86. Readers-Writers Problem Variations • First variation – no reader is kept waiting unless a writer has already obtained permission to use the shared object • Second variation – once a writer is ready, it performs the write as soon as possible • Both may have starvation, leading to even more variations • The problem is solved on some systems by the kernel providing reader-writer locks
  • 87. Dining-Philosophers Problem • Philosophers spend their lives thinking and eating • They don’t interact with their neighbors; occasionally a philosopher tries to pick up 2 chopsticks (one at a time) to eat from the bowl – Needs both to eat, then releases both when done • In the case of 5 philosophers – Shared data: bowl of rice (data set) – Semaphore chopstick[5] initialized to 1
• 88. Dining-Philosophers Problem Algorithm
  • The structure of Philosopher i:
        do {
            wait ( chopstick[i] );
            wait ( chopstick[ (i + 1) % 5] );
            // eat
            signal ( chopstick[i] );
            signal ( chopstick[ (i + 1) % 5] );
            // think
        } while (TRUE);
  • What is the problem with this algorithm?
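• As an illustration only, the same algorithm transliterated to POSIX semaphores; if every philosopher grabs a left chopstick before any grabs a right one, the program hangs, which is the flaw the question above is pointing at.

      #include <semaphore.h>
      #include <pthread.h>
      #include <stdio.h>

      sem_t chopstick[5];                     /* each initialized to 1 */

      void *philosopher(void *arg) {
          int i = *(int *)arg;
          sem_wait(&chopstick[i]);            /* wait ( chopstick[i] ) */
          sem_wait(&chopstick[(i + 1) % 5]);  /* wait ( chopstick[(i + 1) % 5] ) */
          printf("philosopher %d eats\n", i); /* eat */
          sem_post(&chopstick[i]);
          sem_post(&chopstick[(i + 1) % 5]);
          return NULL;                        /* think */
      }

      int main(void) {
          pthread_t t[5];
          int id[5];
          for (int i = 0; i < 5; i++) sem_init(&chopstick[i], 0, 1);
          for (int i = 0; i < 5; i++) {
              id[i] = i;
              pthread_create(&t[i], NULL, philosopher, &id[i]);
          }
          for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
          return 0;
      }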
  • 89. Problems with Semaphores • Incorrect use of semaphore operations: – signal (mutex) …. wait (mutex) – wait (mutex) … wait (mutex) – Omitting wait (mutex) or signal (mutex) (or both) • Deadlock and starvation
• 90. Monitors
  • A high-level abstraction that provides a convenient and effective mechanism for process synchronization
  • Abstract data type; internal variables are accessible only by code within the monitor’s procedures
  • Only one process may be active within the monitor at a time
  • But not powerful enough to model some synchronization schemes
        monitor monitor-name {
            // shared variable declarations
            procedure P1 (…) { …. }
            procedure Pn (…) { …… }
            initialization code (…) { … }
        }
  • 91. Schematic view of a Monitor
  • 92. Condition Variables • condition x, y; • Two operations on a condition variable: – x.wait () – the process that invokes the operation is suspended until x.signal () – x.signal () – resumes one of the processes (if any) that invoked x.wait () • If no process has invoked x.wait () on the variable, then x.signal () has no effect
  • 94. Condition Variables Choices • If process P invokes x.signal (), with Q in x.wait () state, what should happen next? – If Q is resumed, then P must wait • Options include – Signal and wait – P waits until Q leaves monitor or waits for another condition – Signal and continue – Q waits until P leaves the monitor or waits for another condition – Both have pros and cons – language implementer can decide – Monitors implemented in Concurrent Pascal compromise • P executing signal immediately leaves the monitor, Q is resumed – Implemented in other languages including Mesa, C#, Java
• 95. A Monitor Solution to the Dining-Philosophers Problem
  • Each philosopher i invokes the operations pickup() and putdown() in the following sequence:
        DiningPhilosophers.pickup(i);
        EAT
        DiningPhilosophers.putdown(i);
  • No deadlock, but starvation is possible
• 96. A Monitor Solution to the Dining-Philosophers Problem
        monitor DiningPhilosophers {
            enum { THINKING, HUNGRY, EATING } state[5];
            condition self[5];

            void pickup (int i) {
                state[i] = HUNGRY;
                test(i);
                if (state[i] != EATING)
                    self[i].wait();
            }

            void putdown (int i) {
                state[i] = THINKING;
                // test left and right neighbors
                test((i + 4) % 5);
                test((i + 1) % 5);
            }
• 97. A Monitor Solution to the Dining-Philosophers Problem
            void test (int i) {
                if ((state[(i + 4) % 5] != EATING) &&
                    (state[i] == HUNGRY) &&
                    (state[(i + 1) % 5] != EATING)) {
                    state[i] = EATING;
                    self[i].signal();
                }
            }

            initialization_code() {
                for (int i = 0; i < 5; i++)
                    state[i] = THINKING;
            }
        }
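• A hedged C sketch of this monitor using a pthread mutex as the monitor lock and one condition variable per self[i]; the while loop around the wait accounts for pthreads' signal-and-continue semantics and possible spurious wakeups. The function names are invented for the example.

      #include <pthread.h>

      enum state_t { THINKING, HUNGRY, EATING };
      enum state_t state[5];
      pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
      pthread_cond_t  self[5];                /* condition self[5] */

      static void test(int i) {
          if (state[(i + 4) % 5] != EATING &&
              state[i] == HUNGRY &&
              state[(i + 1) % 5] != EATING) {
              state[i] = EATING;
              pthread_cond_signal(&self[i]);  /* self[i].signal() */
          }
      }

      void pickup(int i) {
          pthread_mutex_lock(&monitor_lock);  /* enter the monitor */
          state[i] = HUNGRY;
          test(i);
          while (state[i] != EATING)
              pthread_cond_wait(&self[i], &monitor_lock);   /* self[i].wait */
          pthread_mutex_unlock(&monitor_lock);
      }

      void putdown(int i) {
          pthread_mutex_lock(&monitor_lock);
          state[i] = THINKING;
          test((i + 4) % 5);                  /* test left and right neighbors */
          test((i + 1) % 5);
          pthread_mutex_unlock(&monitor_lock);
      }

      void init_philosophers(void) {
          for (int i = 0; i < 5; i++) {
              state[i] = THINKING;
              pthread_cond_init(&self[i], NULL);
          }
      }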
• 99. Monitor Implementation Using Semaphores
  • Variables:
        semaphore mutex;      // (initially = 1)
        semaphore next;       // (initially = 0)
        int next_count = 0;
  • Each procedure F will be replaced by
        wait(mutex);
        …
        body of F;
        …
        if (next_count > 0)
            signal(next);
        else
            signal(mutex);
  • Mutual exclusion within a monitor is ensured
• 100. Monitor Implementation – Condition Variables
  • For each condition variable x, we have:
        semaphore x_sem;      // (initially = 0)
        int x_count = 0;
  • The operation x.wait can be implemented as:
        x_count++;
        if (next_count > 0)
            signal(next);
        else
            signal(mutex);
        wait(x_sem);
        x_count--;
• 101. Monitor Implementation (Cont.)
  • The operation x.signal can be implemented as:
        if (x_count > 0) {
            next_count++;
            signal(x_sem);
            wait(next);
            next_count--;
        }
  • 102. Resuming Processes within a Monitor • If several processes queued on condition x, and x.signal() executed, which should be resumed? • FCFS frequently not adequate • conditional-wait construct of the form x.wait(c) – Where c is priority number – Process with lowest number (highest priority) is scheduled next
• 103. A Monitor to Allocate Single Resource
        monitor ResourceAllocator {
            boolean busy;
            condition x;

            void acquire(int time) {
                if (busy)
                    x.wait(time);
                busy = TRUE;
            }

            void release() {
                busy = FALSE;
                x.signal();
            }

            initialization code() {
                busy = FALSE;
            }
        }
• 104. A Monitor to Allocate Single Resource
  • Each process, when requesting allocation of the resource, specifies the maximum time it plans to use it.
  • The monitor allocates the resource to the process with the shortest time-allocation request.
  • A process that needs to access the resource does the following:
        R.acquire(t);
        …
        access the resource;
        …
        R.release();
    where R is an instance of ResourceAllocator.
  Operating System Concepts 104