Name of the Course: Operating Systems
Course Code: CS403PC
Year & Semester: B.Tech II Year II Sem
Section: CSE - C
Name of the Faculty: Ms. Padmavati E Gundgurti / Ms. A Kranthi / Mr. C Nagaraju
Lecture Hour / Date: May 2021
Name of the Topic: Deadlocks
Course Outcome(s): Assess the techniques of deadlock avoidance and prevention
OPERATING SYSTEMS
By
Ms. Padmavati E Gundgurti
Ms. A Kranthi
Mr. C Nagaraju
Designation: Assistant Professor
Department of Computer Science & Engineering
BVRIT HYDERABAD College of Engineering for Women
Process management and synchronization
A cooperating process is one that can affect or be affected by other processes executing in the
system.
• A collection of cooperating sequential processes is allowed to share data concurrently.
• Concurrent access to shared data may result in data inconsistency.
• Maintaining data consistency requires mechanisms to ensure the orderly execution of
cooperating processes.
• We want any changes that result from such activities not to interfere with one another.
• Coordinating processes in this way is called "Process Synchronization" or Process
Coordination.
• Suppose that we want a solution to the producer-consumer problem that can fill all the
buffers. We can do so by having an integer count that keeps track of the number of full buffers.
Initially, count is set to 0. It is incremented by the producer after it produces a new buffer and is
decremented by the consumer after it consumes a buffer.
Producer – Consumer problem
Producer:

while (true) {
    /* produce an item and put in nextProduced */
    while (count == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Consumer:

while (true) {
    while (count == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Race Condition
⚫ A race condition exists when the outcome of concurrent execution depends on the particular order
in which accesses to shared data take place.
⚫ count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
⚫ count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
⚫ Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
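To see this lost update outside pseudocode, here is a minimal sketch (an assumed illustration, not part of the original slides) in C using POSIX threads; the iteration count is arbitrary, and without synchronization the final value of count is usually smaller than expected.

/* race.c - two threads increment a shared counter without synchronization,
 * so increments are frequently lost. */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int count = 0;                      /* shared variable, unprotected */

void *increment(void *arg)
{
    for (int i = 0; i < ITERATIONS; i++)
        count++;                    /* load, add, store - not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* expected 2 * ITERATIONS, but the race usually makes it smaller */
    printf("count = %d (expected %d)\n", count, 2 * ITERATIONS);
    return 0;
}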
Race Condition cont.,
•Notice that we have arrived at the incorrect state "count == 4", indicating that four buffers
are full when, in fact, five buffers are full.
•If we reversed the order of the statements at S4 and S5, we would arrive at the incorrect state
"count == 6".
•We would arrive at this incorrect state because we allowed both processes to manipulate the
variable count concurrently.
• A situation like this, where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular order in which the
accesses take place, is called a race condition.
• To guard against the race condition above, we need to ensure that only one process at a time
can be manipulating the variable count.
•To make such a guarantee, we require that the processes be synchronized in some way.
Critical Section Problem
•Consider a system consisting of n processes (P0, P1, ..., Pn-1).
•Each process has a segment of code, called a critical section, in
which the process may be changing common variables, updating a
table, writing a file, and so on.
•The important feature of the system is that when one process is
executing in its critical section, no other process is allowed to
execute in its critical section.
•That is, no two processes are executing in their critical sections at
the same time.
•The critical-section problem is to design a protocol that the
processes can use to cooperate.
•Each process must request permission to enter its critical section.
•The section of code implementing this request is the entry section.
•The critical section may be followed by an exit section.
•The remaining code is the remainder section.
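Putting these pieces together, the general structure of a typical process Pi can be sketched as follows (a sketch in the same style as the later lock-based example; the bracketed labels stand for whatever code implements each section):

do {
    [entry section]      // request permission to enter
        critical section
    [exit section]       // announce that the critical section is done
        remainder section
} while (TRUE);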
Solution to the Critical Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections
2. Progress - If no process is executing in its critical section and some processes wish to enter their
critical sections, then the selection of the process that will enter its critical section next cannot be
postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that request is
granted
⚫Assume that each process executes at a nonzero speed
⚫No assumption concerning relative speed of the N processes
Two general approaches are used to handle critical sections in operating systems:
(1) preemptive kernels : A preemptive kernel allows a process to be preempted while it is running in
kernel mode.
(2) Nonpreemptive kernels: A nonpreemptive kernel does not allow a process running in kernel mode to
be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields
control of the CPU. Obviously, a nonpreemptive kernel is essentially free from race conditions on
kernel data structures, as only one process is active in the kernel at a time.
Peterson’s Solution
The classic software-based solution to the critical-section problem is known as "Peterson's
solution", which addresses the requirements of mutual exclusion, progress, and bounded waiting.

Two-process solution: the processes are numbered P0 and P1. The two processes share two variables:
    int turn;
    boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section: if turn == i, then process Pi
is allowed to execute in its critical section.
The flag array is used to indicate whether a process is ready to enter the critical section:
flag[i] = true implies that process Pi is ready.

The structure of Pi in Peterson's solution:

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;   // busy wait
        // critical section
    flag[i] = FALSE;
        // remainder section
} while (TRUE);
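As a concrete illustration, here is a hedged C sketch of the same algorithm for two threads; the function names enter_region/leave_region are illustrative, and on modern hardware compilers and CPUs may reorder these memory accesses, so real code would need atomics or memory fences.

/* Peterson's solution for two threads (i = 0 or 1), assuming
 * sequentially consistent memory - a teaching sketch, not production code. */
#include <stdbool.h>

volatile bool flag[2] = { false, false };
volatile int  turn    = 0;

void enter_region(int i)          /* i is 0 or 1 */
{
    int j = 1 - i;                /* the other process */
    flag[i] = true;               /* announce interest */
    turn = j;                     /* give the other process priority */
    while (flag[j] && turn == j)
        ;                         /* busy-wait until it is safe to enter */
}

void leave_region(int i)
{
    flag[i] = false;              /* no longer interested */
}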
Synchronization Hardware
•In this section we explore several more solutions to the critical-section problem, using techniques
ranging from hardware instructions to software-based APIs available to application programmers.
•These solutions are based on the premise of locking; however, the design of such locks can be quite sophisticated.
•Hardware features can make the programming task easier and improve system efficiency.
•The critical-section problem could be solved simply in a uniprocessor environment if we could prevent interrupts
from occurring while a shared variable is being modified.
•In this manner, we would be sure that the current sequence of instructions would be allowed to execute in order
without preemption.
•No other instructions would be run, so no unexpected modifications could be made to the shared variable. This is the
approach taken by nonpreemptive kernels.
•Unfortunately, this solution is not feasible in a multiprocessor environment: since the message has to be passed to all
the processors, disabling interrupts on a multiprocessor can be time consuming.

Many systems provide hardware support for critical-section code:
• Uniprocessors could disable interrupts
    – Currently running code would execute without preemption
    – Generally too inefficient on multiprocessor systems; operating systems using this approach are not broadly scalable
• Modern machines provide special atomic hardware instructions (atomic = non-interruptible) that either
    – test a memory word and set its value, or
    – swap the contents of two memory words
TestAndSet Instruction
The definition of the TestAndSet() instruction:

boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

The essential characteristic is that this instruction is executed atomically. So, if two TestAndSet()
instructions are executed simultaneously (each on a different CPU), they will be executed
sequentially in some arbitrary order.

Mutual-exclusion implementation with TestAndSet(), using a shared boolean variable lock,
initialized to false:

do {
    while ( TestAndSet(&lock) )
        ;   // do nothing
        // critical section
    lock = FALSE;
        // remainder section
} while (TRUE);
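For reference, modern C exposes an equivalent primitive directly; the following is a hedged sketch (not from the slides) of the same spinlock built on C11's atomic_flag:

/* Spinlock using C11's atomic_flag, which provides an atomic
 * test-and-set in portable C. */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* initially clear ("false") */

void spin_lock(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy-wait until the previous value was false */
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock);                 /* corresponds to lock = FALSE */
}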
Solution to Critical-section Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
Swap Instruction
•The Swap instruction, in contrast to the TestAndSet() instruction, operates on the contents of two words; like
TestAndSet(), it is executed atomically.
•If the machine supports the Swap() instruction, mutual exclusion can be provided as follows.
•A global Boolean variable lock is declared and is initialized to false.
•Additionally, each process has a local Boolean variable key. The structure of process Pi is shown below.

The definition of the Swap() instruction:

void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Mutual-exclusion implementation with the Swap() instruction:

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
        // critical section
    lock = FALSE;
        // remainder section
} while (TRUE);
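Again as a point of reference, the swap-based lock maps onto C11's atomic_exchange; the sketch below is an assumption-labeled illustration, not part of the original slides:

/* Swap-based lock expressed with C11's atomic_exchange, which atomically
 * stores a new value and returns the old one. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool lock = false;

void swap_lock(void)
{
    /* keep "swapping in" true until the old value was false */
    while (atomic_exchange(&lock, true))
        ;   /* busy-wait */
}

void swap_unlock(void)
{
    atomic_store(&lock, false);
}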
Bounded-waiting & mutual exclusion with test and set()
Although these algorithms satisfy the mutual-exclusion requirement, they do not satisfy the
bounded-waiting requirement. The code below presents another algorithm using the
TestAndSet() instruction that satisfies all the critical-section requirements.

The common data structures, both initialized to false:

boolean waiting[n];
boolean lock;

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
        // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
        // remainder section
} while (TRUE);
Bounded-waiting & mutual exclusion with test and set() Cont.,
•The data structures are boolean waiting[n]; boolean lock; both initialized to false.
•To prove that the mutual-exclusion requirement is met, we note that process Pi can enter its critical section
only if either waiting[i] == false or key == false.
•The value of key can become false only if TestAndSet() is executed.
•The first process to execute TestAndSet() will find key == false; all others must wait.
•The variable waiting[i] can become false only if another process leaves its critical section; only one waiting[i] is set to
false, maintaining the mutual-exclusion requirement.
•To prove that the progress requirement is met, we note that the arguments presented for mutual exclusion also
apply here, since a process exiting its critical section either sets lock to false or sets waiting[j] to false.
•Either action allows a process that is waiting to enter its critical section to proceed.
•To prove that the bounded-waiting requirement is met, we note that, when a process leaves its critical section,
it scans the array waiting in the cyclic ordering (i + 1, i + 2, ..., n − 1, 0, ..., i − 1).
•It designates the first process in this ordering that is in the entry section (waiting[j] == true) as the next one to enter the
critical section.
•Any process waiting to enter its critical section will thus do so within n − 1 turns.
•Unfortunately for hardware designers, implementing atomic TestAndSet() instructions on multiprocessors is not a
trivial task.
Semaphores in Operating System
The various hardware-based solutions to the critical-section problem (using the TestAndSet() and Swap() instructions) are
complicated for application programmers to use. To overcome this difficulty, we can use a synchronization tool called a
semaphore.

A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic
operations: wait() and signal(). The wait() operation was originally termed P (from the Dutch proberen, "to test");
signal() was originally called V (from verhogen, "to increment").

The definitions of wait and signal are as follows:
•wait(): decrements the value of its argument S, but only once S becomes positive. While S is zero or negative,
the process keeps waiting (busy-waits) and no decrement is performed.

wait(S)
{
    while (S <= 0)
        ;   // busy wait
    S--;
}

•signal(): increments the value of its argument S.

signal(S)
{
    S++;
}
Types of Semaphores
There are two main types of semaphores: counting semaphores and binary semaphores.
•Counting Semaphores
    – These are integer-valued semaphores with an unrestricted value domain.
    – They are used to coordinate resource access, where the semaphore count is the
      number of available resources.
    – When resources are added, the semaphore count is incremented; when resources are
      removed, the count is decremented.
•Binary Semaphores
    – Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1.
    – The wait operation succeeds only when the semaphore is 1, and the signal operation succeeds when the
      semaphore is 0.
    – It is sometimes easier to implement binary semaphores than counting semaphores.
Mutual Exclusion using semaphore
•Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement and
also known as mutex locks
•Provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
• The main disadvantage of the semaphore definition given here is that it requires busy waiting.
While a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the entry code.
• Busy waiting wastes CPU cycles that some other process might be able to use productively. This
type of semaphore is also called a spinlock because the process "spins" while waiting for the lock.
• To overcome the need for busy waiting, we can modify the definition of the wait () and signal ()
semaphore operations.
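The same pattern can be written against a real API. The following is a hedged sketch (not from the slides) using POSIX semaphores, with sem_wait/sem_post playing the roles of wait(mutex)/signal(mutex); the shared counter and thread function are illustrative.

/* Protecting a shared counter with a POSIX semaphore initialized to 1,
 * i.e. used as a mutex lock. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;            /* binary semaphore used as a mutex */
int shared = 0;         /* shared data protected by the semaphore */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);   /* wait(mutex) */
        shared++;           /* critical section */
        sem_post(&mutex);   /* signal(mutex) */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);          /* initialized to 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared); /* always 200000 with the semaphore */
    sem_destroy(&mutex);
    return 0;
}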
Semaphore Implementation with no Busy waiting
With each semaphore there is an associated waiting queue. Each semaphore has two data items:
• value (of type integer)
• a pointer to a list of processes waiting on the semaphore
Two operations:
block – places the process invoking the operation on the appropriate waiting queue (suspends the process).
wakeup – removes one of the processes in the waiting queue and places it in the ready queue (resumes the execution of the
process).
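Concretely, the semaphore record assumed by the wait()/signal() code below can be sketched as follows (the process list type is left abstract):

typedef struct {
    int value;                 /* semaphore value */
    struct process *list;      /* queue of processes blocked on this semaphore */
} semaphore;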
⚫ Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
⚫ Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
Classical Problems of Synchronization
Semaphores can be used in other synchronization problems besides mutual exclusion.
•Bounded Buffer (Producer-Consumer) Problem
•Dining Philosophers Problem
•The Readers Writers Problem
Bounded buffer solution
Producer:

do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
        // add the item to the buffer
    signal(mutex);
    signal(full);
} while (TRUE);

Consumer:

do {
    wait(full);
    wait(mutex);
        // remove an item from the buffer to nextc
    signal(mutex);
    signal(empty);
        // consume the item in nextc
} while (TRUE);
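For completeness, this solution assumes the standard shared data for an n-slot buffer; the initial values below are the usual textbook convention and are not stated on the slide itself:

semaphore mutex = 1;    // mutual exclusion on buffer accesses
semaphore full  = 0;    // counts full slots
semaphore empty = n;    // counts empty slots (n = BUFFER_SIZE)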
The Readers Writers problem
•In this problem there are some processes(called readers) that only read the shared data, and never change it,
and there are other processes(called writers) who may change the data in addition to reading, or instead of
reading it.
•There are various types of readers-writers problems, most centered on the relative priorities of readers and writers.
•First readers-writers problem, requires that no reader will be kept waiting unless a writer has already
obtained permission to use the shared object. In other words, no reader should wait for other readers to finish
simply because a writer is waiting.
•The second readers-writers problem requires that, once a writer is ready, that writer performs its write as
soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.
Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data
at any time.
Shared Data
Data set
Semaphore mutex initialized to 1
Semaphore wrt initialized to 1
Integer readcount initialized to 0
The Readers Writers solution
Writer:

do {
    wait(wrt);
        // writing is performed
    signal(wrt);
} while (TRUE);

Reader:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
        // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);
The Dining philosopher's problem
•The dining philosophers problem involves the allocation of limited resources to a group of processes in a deadlock-
free and starvation-free manner.
•Five philosophers sit around a table with a bowl of rice in the centre and five chopsticks, one placed between each
pair of neighbours. When a philosopher wants to eat, she picks up two chopsticks – the one on her left and the one
on her right.
•When a philosopher wants to think, she puts both chopsticks back down in their original places.
• Shared data
– Bowl of rice (data set)
– Semaphore chopstick [5]
initialized to 1
The Dining philosopher's solution
• The structure of philosopher i (one way to avoid deadlock is sketched after this code):

do {
    wait( chopstick[i] );
    wait( chopstick[(i + 1) % 5] );
        // eat
    signal( chopstick[i] );
    signal( chopstick[(i + 1) % 5] );
        // think
} while (TRUE);

• This guarantees that no two neighbours eat at the same time, but it can deadlock: if all five philosophers become
hungry simultaneously and each grabs her left chopstick first, every philosopher waits forever for the right one.
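One standard remedy, sketched here in the same pseudocode style (this asymmetric variant is one of several possible fixes and is not part of the original slide), is to make odd- and even-numbered philosophers pick up their chopsticks in opposite orders, so a circular wait cannot form:

do {
    if (i % 2 == 0) {
        wait( chopstick[(i + 1) % 5] );   // even philosopher: right chopstick first
        wait( chopstick[i] );             // then left
    } else {
        wait( chopstick[i] );             // odd philosopher: left chopstick first
        wait( chopstick[(i + 1) % 5] );   // then right
    }
        // eat
    signal( chopstick[i] );
    signal( chopstick[(i + 1) % 5] );
        // think
} while (TRUE);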
Advantages of Semaphores:
•Semaphores allow only one process into the critical section at a time.
•They follow the mutual exclusion principle strictly and are more efficient than many other
synchronization methods.
•With the blocking (no-busy-waiting) implementation described earlier, processor time is not
wasted repeatedly checking whether a condition is fulfilled before a process is allowed into the
critical section.
•Semaphores are implemented in the machine-independent code of the microkernel, so they are
machine independent.
Disadvantages of Semaphores:
•Programming with semaphores is error-prone: the wait and signal operations must be used in the
correct order. For example, the following sequence violates mutual exclusion:
signal(mutex);
//critical section
wait(mutex);
• Improper usage of semaphores may lead to deadlock.
wait(mutex);
//critical section
wait(mutex);
•Semaphores may lead to a priority inversion where low priority processes may access the
critical section first and high priority processes later.
•Suppose that a process omits the wait (mutex), or the signal (mutex), or both. In this case, either
mutual exclusion is violated or a deadlock will occur.
To deal with such errors Monitors are developed.
Monitors
•Monitors are a synchronization construct that were created to overcome the problems
caused by semaphores such as timing errors.
•Monitors are abstract data types and contain shared data variables and procedures.
• Only one process may be active within the monitor at a time
monitor monitorName
{
data variables;
Procedure P1(....)
{ }
Procedure P2(....)
{ }
Procedure Pn(....)
{ }
Initialization Code(....)
{ }
}
Schematic view of a Monitor
Condition Variables
• condition x, y;
• Two operations on a condition variable:
    – x.wait() – a process that invokes the operation is suspended.
    – x.signal() – resumes one of the processes (if any) that invoked x.wait().
Monitor with Condition Variables
Condition Variables Choices
If process P invokes x.signal() while Q is suspended in x.wait(), what should happen next? If Q is
resumed, then P must wait. The options are:
• Signal and wait – P waits until Q leaves the monitor, or waits for another condition.
• Signal and continue – Q waits until P leaves the monitor, or waits for another condition.
Both have pros and cons; the language implementer can decide.
• Monitors implemented in Concurrent Pascal adopt a compromise: the process P executing signal
  immediately leaves the monitor, and Q is resumed.
• Monitors are also implemented in other languages, including Mesa, C#, and Java; a C-based
  emulation is sketched below.
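C has no monitor keyword, but the same discipline can be approximated with a mutex plus a condition variable. The sketch below is an assumed illustration (the names and the bounded-counter example are not from the slides); POSIX condition variables follow the signal-and-continue semantics described above, which is why the wait is re-checked in a loop.

/* Emulating a tiny monitor in C: the mutex plays the role of the monitor
 * lock (one active process at a time) and cond acts as a condition variable. */
#include <pthread.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_zero     = PTHREAD_COND_INITIALIZER;
static int resource_count = 0;     /* shared data protected by the monitor */

void monitor_release(void)         /* like a monitor procedure */
{
    pthread_mutex_lock(&monitor_lock);
    resource_count++;
    pthread_cond_signal(&not_zero);        /* x.signal(): wake one waiter */
    pthread_mutex_unlock(&monitor_lock);
}

void monitor_acquire(void)         /* like a monitor procedure */
{
    pthread_mutex_lock(&monitor_lock);
    while (resource_count == 0)            /* re-check: signal-and-continue */
        pthread_cond_wait(&not_zero, &monitor_lock);   /* x.wait() */
    resource_count--;
    pthread_mutex_unlock(&monitor_lock);
}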
Solution to Dining Philosophers
monitor DiningPhilosophers
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING) self[i].wait();
    }

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }
Solution to Dining Philosophers cont.,
    void test (int i) {
        if ( (state[(i + 4) % 5] != EATING) &&
             (state[i] == HUNGRY) &&
             (state[(i + 1) % 5] != EATING) ) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

DiningPhilosophers.pickup(i);
    // eat
DiningPhilosophers.putdown(i);

This solution ensures that no two neighbours eat at the same time and that no deadlock occurs, although it is still
possible for a philosopher to starve.